forum_id (string, 9–20 chars) | forum_title (string, 3–179 chars) | forum_authors (sequence, 0–82 items) | forum_abstract (string, 1–3.52k chars) | forum_keywords (sequence, 1–29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39–50 chars) | forum_url (string, 41–52 chars) | venue (string, 46 classes) | year (date, 2013-01-01 to 2025-01-01) | reviews (sequence)
---|---|---|---|---|---|---|---|---|---|---
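For reference, a minimal sketch of how one might decode the `reviews` column of a row — the field names match the schema above and the shape of the records below, but the record literal here is a hypothetical stand-in, not one loaded from the actual dataset:

```python
import json

# Hypothetical record in the shape of the "reviews" column; in the real dump
# this arrives as one cell of the table, not a hand-written literal.
reviews = {
    "note_id": ["HJlSjw_ez"],
    "note_type": ["official_review"],
    "note_created": [1511713592238],
    "note_signatures": [["ICLR.cc/2018/Conference/Paper577/AnonReviewer1"]],
    "structured_content_str": [
        '{"title": "...", "rating": "3: Clear rejection", '
        '"review": "...", "confidence": "5: ..."}'
    ],
}

# Each entry of structured_content_str is itself a JSON document, so it must
# be decoded a second time before the rating/review fields are usable.
# Decision notes carry a "decision" key instead of "rating", hence .get().
for raw in reviews["structured_content_str"]:
    note = json.loads(raw)
    rating = note.get("rating", "")
    score = int(rating.split(":")[0]) if rating else None  # e.g. 3
    print(score, note.get("title"))
```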
By0ANxbRW | DNN Model Compression Under Accuracy Constraints | [
"Soroosh Khoram",
"Jing Li"
] | The growing interest in implementing Deep Neural Networks (DNNs) on resource-bound hardware has motivated innovation in compression algorithms. Using these algorithms, DNN model sizes can be substantially reduced, with little to no accuracy degradation. This is achieved by either eliminating components from the model or penalizing complexity during training. While both approaches achieve considerable compression, the former often ignores the loss function during compression while the latter produces unpredictable compression ratios. In this paper, we propose a technique that directly minimizes both the model complexity and the changes in the loss function. In this technique, we formulate compression as a constrained optimization problem and then present a solution for it. We show that using this technique, we can achieve competitive results. | [
"DNN Compression",
"Weigh-sharing",
"Model Compression"
] | Reject | https://openreview.net/pdf?id=By0ANxbRW | https://openreview.net/forum?id=By0ANxbRW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HJlSjw_ez",
"Hk6K4Rwlf",
"rk6muOPxG",
"HyCcrJTHf"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1511713592238,
1511675013118,
1511651365204,
1517249941650
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper577/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper577/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper577/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"The manuscript presents a compression method for DNN, but I cannot find significant differences from and advantages over existing deep compression approaches. Besides, the experiments are not persuasive.\", \"rating\": \"3: Clear rejection\", \"review\": \"1. This paper proposes a deep neural network compression method by maintaining the accuracy of deep models using a hyper-parameter. However, all compression methods such as pruning and quantization also have this concern. For example, the basic assumption of pruning is to discard subtle parameters has little impact on feature maps thus the accuracy of the original network can be preserved. Therefore, the novelty of the proposed method is somewhat weak.\\n\\n2. There are a lot of new algorithms on compressing deep neural networks such as [r1][r2][r3]. However, the paper only did a very simple investigation on related works.\\n[r1] CNNpack: packing convolutional neural networks in the frequency domain.\\n[r2] LCNN: Lookup-based Convolutional Neural Network.\\n[r3] Xnor-net: Imagenet classification using binary convolutional neural networks.\\n\\n3. Experiments in the paper were only conducted on several small datasets such as MNIST and CIFAR-10. It is necessary to employ the proposed method on benchmark datasets to verify its effectiveness, e.g., ImageNet.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The paper is clearly unqualified for publication in the current stage.\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper addresses an interesting problem of DNN model compression. The main idea is to combine the approaches in (Han et al., 2015) and (Ullrich et al., 2017) to get a loss value constrained k-means encoding method for network compression. An iterative algorithm is developed for model optimization. Experimental results on MNIST, CIFAR-10 and SVHN are reported to show the compression performance.\\n\\nThe reviewer would expect papers submitted for review to be of publishable quality. However, this manuscript is not polished enough for publication: it has too many language errors and imprecisions which make the paper hard to follow. In particular, there is no clear definition of problem formulation, and the algorithms are poorly presented and elaborated in the context.\", \"pros\": [\"The network compression problem is of general interest to ICLR audience.\"], \"cons\": [\"The proposed approach follows largely the existing work and thus its technical novelty is weak.\", \"Paper presentation quality is clearly below the standard.\", \"Empirical results do not clearly show the advantage of the proposed method over state-of-the-arts.\"], \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Promising Idea, Confusing Writing, Key Experiment Missing\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"1. Summary\\n\\nThis paper introduced a method to learn a compressed version of a neural network such that the loss of the compressed network doesn't dramatically change.\\n\\n\\n2. High level paper\\n\\n- I believe the writing is a bit sloppy. For instance equation 3 takes the minimum over all m in C but C is defined to be a set of c_1, ..., c_k, and other examples (see section 4 below). This is unfortunate because I believe this method, which takes as input a large complex network and compresses it so the loss in accuracy is small, would be really appealing to companies who are resource constrained but want to use neural network models.\\n\\n\\n3. High level technical\\n\\n- I'm confused at the first and second lines of equation (19). In the first line, shouldn't the first term not contain \\\\Delta W ? In the second line, shouldn't the first term be \\\\tilde{\\\\mathcal{L}}(W_0 + \\\\Delta W) ?\\n- For CIFAR-10 and SVHN you're using Binarized Neural Networks and the two nice things about this method are (a) that the memory usage of the network is very small, and (b) network operations can be specialized to be fast on binary data. My worry is if you're compressing these networks with your method are the weights not treated as binary anymore? Now I know in Binarized Neural Networks they keep a copy of real-valued weights so if you're just compressing these then maybe all is alright. But if you're compressing the weights _after_ binarization then this would be very inefficient because the weights won't likely be binary anymore and (a) and (b) above no longer apply.\\n- Your compression ratio is much higher for MNIST but your accuracy loss is somewhat dramatic, especially for MNIST (an increase of 0.53 in error nearly doubles your error and makes the network worse than many other competing methods: http://rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html#4d4e495354). What is your compression ratio for 0 accuracy loss? I think this is a key experiment that should be run as this result would be much easier to compare with the other methods.\\n- Previous compression work uses a lot of tricks to compress convolutional weights. Does your method work for convolutional layers?\\n- The first paper to propose weight sharing was not Han et al., 2015, it was actually:\\nChen W., Wilson, J. T., Tyree, S., Weinberger K. Q., Chen, Y. \\\"Compressing Neural Networks with the Hashing Trick\\\" ICML 2015\\nAlthough they did not learn the weight sharing function, but use random hash functions.\\n\\n\\n4. Low level technical\\n\\n- The end of Section 2 has an extra 'p' character\\n- Section 3.1: \\\"Here, X and y define a set of samples and ideal output distributions we use for training\\\" this sentence is a bit confusing. Here y isn't a distribution, but also samples drawn from some distribution. Actually I don't think it makes sense to talk about distributions at all in Section 3.\\n- Section 3.1: \\\"W is the learnt model...\\\\hat{W} is the final, trained model\\\" This is unclear: W and \\\\hat{W} seem to describe the same thing. I would just remove \\\"is the learnt model and\\\"\\n\\n\\n5. Review summary\\n\\nWhile the trust-region-like optimization of the method is nice and I believe this method could be useful for practitioners, I found the paper somewhat confusing to read. 
This combined with some key experimental questions I have make me think this paper still needs work before being accepted to ICLR.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"Proposed network compression method offers limited technical novelty over existing approaches, and empirical evaluations do not clearly demonstrate an advantage over current state-of-the-art.\\nPaper presentation quality also needs to be improved.\"}"
]
} |
B1EGg7ZCb | Autonomous Vehicle Fleet Coordination With Deep Reinforcement Learning | [
"Cane Punma"
] | Autonomous vehicles are becoming more common in city transportation, and companies will need to teach these vehicles smart city fleet coordination. Currently, simulation-based modeling along with hand-coded rules dictates the decision making of these autonomous vehicles. We believe that complex intelligent behavior can be learned by these agents through Reinforcement Learning. In this paper, we discuss our work on solving this problem by adapting the Deep Q-Learning (DQN) model to the multi-agent setting. Our approach applies deep reinforcement learning by combining convolutional neural networks with DQN to teach agents to fulfill customer demand in an environment that is partially observable to them. We also demonstrate how to utilize transfer learning to teach agents to balance multiple objectives, such as navigating to a charging station when their energy level is low. The two evaluations presented show that we are successfully able to teach agents cooperation policies while balancing multiple objectives. | [
"Deep Reinforcement Learning",
"mult-agent systems"
] | Reject | https://openreview.net/pdf?id=B1EGg7ZCb | https://openreview.net/forum?id=B1EGg7ZCb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HyqzL3Ogz",
"ryPMUk6BG",
"rJJxcdqeM",
"Hy73csVeG"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1511732754427,
1517250063459,
1511848423453,
1511467691457
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper1103/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper1103/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper1103/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Interesting problem and approach, but not ready for ICLR\", \"rating\": \"3: Clear rejection\", \"review\": \"In this paper, the authors define a simulated, multi-agent \\u201ctaxi pickup\\u201d task in a GridWorld environment. In the task, there are multiple taxi agents that a model must learn to control. \\u201cCustomers\\u201d randomly appear throughout the task and the taxi agents receive reward for moving to the same square as a customer. Since there are multiple customer and taxi agents, there is a multi-agent coordination problem. Further, the taxi agents have \\u201cbatteries\\u201d, which starts at a positive number, ticks down by one on each time step and a large negative reward is given if this number reaches zero. The battery can be \\u201crecharged\\u201d by moving to a \\u201ccharge\\u201d tile.\\n\\nCooperative multi-agent problem solving is an important problem in machine learning, artificial intelligence, and cognitive science. This paper defines and examines an interesting cooperative problem: Assignment and control of agents to move to certain squares under \\u201cphysical\\u201d constraints. The authors propose a centralized solution to the problem by adapting the Deep Q-learning Network model. I do not know whether using a centralized network where each agent has a window of observations is a novel algorithm. The manuscript itself makes it difficult to assess (more on this later). If it were novel, it would be an incremental development. They assess their solution quantitatively, demonstrating their model performs better than first, a simple heuristic model (I believe de-centralized Dijkstra\\u2019s for each agent, but there is not enough description in the manuscript to know for sure), and then, two other baselines that I could not figure out from the manuscript (I believe it was Dijkstra\\u2019s with two added rules for when to recharge).\\n\\nAlthough the manuscript has many positive aspects to it, I do not believe it should be accepted for the following reasons. First, the manuscript is poorly written, to the point where it has inhibited my ability to assess it. Second, given its contribution, the manuscript is better suited for a conference specific to multi-agent decision-making. There are a few reasons for this. 1) I was not convinced that deep Q-learning was necessary to solve this problem. The manuscript would be much stronger if the authors compared their method to a more sophisticated baseline, for example having each agent be a simple Q-learner with no centralization or \\u201cdeepness\\u201d. This would solve another issue, which is the weakness of their baseline measure. There are many multi-agent techniques that can be applied to the problem that would have served as a better baseline. 2) Although the problem itself is interesting, it is a bit too applied and specific to the particular task they studied than is appropriate for a conference with as broad interests as ICLR. It also is a bit simplistic (I had expected the agents to at least need to learn to move the customer to some square rather than get reward and move to the next job from just getting to the customer\\u2019s square). Can you apply this method to other multi-agent problems? How would it compare to other methods on those problems? \\n\\nI encourage the authors to develop the problem and method further, as well as the analysis and evaluation.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The reviewers agree that the manuscript is below the acceptance threshold at ICLR. Many points of criticism were evident in the reviewer comments, including small artificial test domain, no new methods introduced, poor writing in some places, and dubious need for DeepRL in this domain. The reviews mentioned a number of constructive comments to improve the paper, and we hope this will provide useful guidance for the authors to rewrite and resubmit to a future venue.\"}",
"{\"title\": \"The authors apply a previous algorithm named MADQN to the fleet management problem. Simulation results are not convincing and I have some questions regarding the partial observability.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The main contribution of the paper seems to be the application to this problem, plus minor algorithmic/problem-setting contributions that consist in considering partial observability and to balance multiple objectives. On one hand, fleet management is an interesting and important problem. On the other hand, although the experiments are well designed and illustrative, the approach is only tested in a small 7x7 grid and 2 agents and in a 10x10 grid with 4 agents. In spirit, these simulations are similar to those in the original paper by M. Egorov. Since the main contribution is to use an existing algorithm to tackle a practical application, it would be more interesting to tweak the approach until it is able to tackle a more realistic scenario (mainly larger scale, but also more realistic dynamics with traffic models, real data, etc.).\\n\\nSimulation results compare MADQN with Dijkstra's algorithm as a baseline, which offers a myopic solution where each agent picks up the closest customer. Again, since the main contribution is to solve a specific problem, it would be worthy to compare with a more extensive benchmark, including state of the art algorithms used for this problem (e.g., heuristics and metaheuristics). \\n\\nThe paper is clear and well written. There are several minor typos and formatting errors (e.g., at the end of Sec. 3.3, the authors mention Figure 3, which seems to be missing, also references [Egorov, Maxim] and [Palmer, Gregory] are bad formatted). \\n\\n\\n-- Comments and questions to the authors:\\n\\n1. In the introduction, please, could you add references to what is called \\\"traditional solutions\\\"?\\n\\n2. Regarding the partial observability, each agent knows the location of all agents, including itself, and the location of all obstacles and charging locations; but it only knows the location of customers that are in its vision range. This assumption seems reasonable if a central station broadcasts all agents' positions and customers are only allowed to stop vehicles in the street, without ever contacting the central station; otherwise if agents order vehicles in advance (e.g., by calling or using an app) the central station should be able to communicate customers locations too. On the other hand, if no communication with the central station is allowed, then positions of other agents may be also partial observable. In other words, the proposed partial observability assumption requires some further motivation. Moreover, in Sec. 4.3, it is said that agents can see around them +10 spaces away; however, experiments are run in 7x7 and 10x10 grid worlds, meaning that the agents are able to observe the grid completely.\\n\\n3. The fact that partial observability helped to alleviate the credit-assignment noise caused by the missing customer penalty might be an artefact of the setting. For instance, since the reward has been designed arbitrarily, it could have been defined as giving a penalty for those missing customers that are at some distance of an agent.\\n\\n4. Please, could you explain the last sentence of Sec. 
4.3 that says \\\"The drawback here is that the agents will not be able to generalize to other unseen maps that may have very different geographies.\\\" In particular, how is this sentence related to partial observability?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"no top-tier conference paper\", \"rating\": \"3: Clear rejection\", \"review\": \"This paper proposes to use deep reinforcement learning to solve a multiagent coordination task. In particular, the paper introduces a benchmark domain to model fleet coordination problems as might be encountered in taxi companies. \\n\\nThe paper does not really introduce new methods, and as such, this paper should be seen more as an application paper. I think that such a paper could have merits if it would really push the boundary of the feasible, but I do not think that is really the case with this paper: the task still seems quite simplistic, and the empirical evaluation is not convincing (limited analysis, weak baselines). As such, I do not really see any real grounds for acceptance.\\n\\nFinally, there are also many other weaknesses. The paper is quite poorly written in places, has poor formatting (citations are incorrect and half a bibtex entry is inlined), and is highly inadequate in its treatment of related work. For instance, there are many related papers on:\\n\\n-taxi fleet management (e.g., work by Pradeep Varakantham)\\n \\n-coordination in multi-robot systems for spatially distributed tasks (e.g., Gerkey and much work since)\\n\\n-scaling up multiagent reinforcement learning and multiagent MDPs (Guestrin et al 2002, Kok & Vlassis 2006, etc.)\\n\\n-dealing with partial observability (work on decentralized POMDPs by Peshkin et al, 2000, Bernstein, Amato, etc.)\\n\\n-multiagent deep RL has been very active last 1-2 years. E.g., see other papers by Foerster, Sukhbataar, Omidshafiei\\n\\n\\nOverall, I see this as a paper which with improvements could make a nice workshop contribution, but not as a paper to be published at a top-tier venue.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
ByCPHrgCW | Deep Learning Inferences with Hybrid Homomorphic Encryption | [
"Anthony Meehan",
"Ryan K L Ko",
"Geoff Holmes"
] | When deep learning is applied to sensitive data sets, many privacy-related implementation issues arise. These issues are especially evident in the healthcare, finance, law and government industries. Homomorphic encryption could allow a server to make inferences on inputs encrypted by a client, but to the best of our knowledge, there has been no complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption. This paper demonstrates a novel approach, efficiently implementing many deep learning functions with bootstrapped homomorphic encryption. As part of our implementation, we demonstrate Single and Multi-Layer Neural Networks, for the Wisconsin Breast Cancer dataset, as well as a Convolutional Neural Network for MNIST. Our results give promising directions for privacy-preserving representation learning, and the return of data control to users. | [
"deep learning",
"homomorphic encryption",
"hybrid homomorphic encryption",
"privacy preserving",
"representation learning",
"neural networks"
] | Reject | https://openreview.net/pdf?id=ByCPHrgCW | https://openreview.net/forum?id=ByCPHrgCW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"Sy52MUdgG",
"ryMApotmG",
"BJxS1hFXG",
"rJhC64Olf",
"HkCG-X5lG",
"S174cGs7z",
"SyDOU16Sz",
"B1p3soF7z"
],
"note_type": [
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment"
],
"note_created": [
1511707313611,
1514941898182,
1514942264413,
1511701971758,
1511825686146,
1515035179292,
1517250158726,
1514941365187
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper255/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper255/Authors"
],
[
"ICLR.cc/2018/Conference/Paper255/Authors"
],
[
"ICLR.cc/2018/Conference/Paper255/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper255/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper255/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper255/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Review of \\\"Deep Learning Inferences with Hybrid Homomorphic Encryption\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper presents a means of evaluating a neural network securely using homomorphic encryption. A neural network is already trained, and its weights are public. The network is to be evaluated over a private input, so that only the final outcome of the computation-and nothing but that-is finally learned.\", \"the_authors_take_a_binary_circuit_approach\": \"they represent numbers via a fixed point binary representation, and construct circuits of secure adders and multipliers, based on homomorphic encryption as a building block for secure gates. This allows them to perform the vector products needed per layer; two's complement representation also allows for an \\\"easy\\\" implementation of the ReLU activation function, by \\\"checking\\\" (multiplying by) the complement of the sign bit. The fact that multiplication often involves public weights is used to speed up computations, wherever appropriate. A rudimentary experimental evaluation with small networks is provided.\\n\\nAll of this is somewhat straightforward; a penalty is paid by representing numbers via fixed point arithmetic, which is used to deal with ReLU mostly. This is somewhat odd: it is not clear why, e.g., garbled circuits where not used for something like this, as it would have been considerably faster than FHE.\\n\\nThere is also a work in this area that the authors do not cite or contrast to, bringing the novelty into question; please see the following papers and references therein:\\n\\nGILAD-BACHRACH, R., DOWLIN, N., LAINE, K., LAUTER, K., NAEHRIG, M., AND WERNSING, J. Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy. In Proceedings of The 33rd International Conference on Machine Learning (2016), pp. 201\\u2013210.\", \"secureml\": \"A System for Scalable Privacy-Preserving Machine Learning\\nPayman Mohassel and Yupeng Zhang\\n\\nSHOKRI, R., AND SHMATIKOV, V. Privacy-preserving deep learning. In\\nProceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (2015), ACM, pp. 1310\\u20131321.\\n\\nThe first paper is the most related, also using homomorphic encryption, and seems to cover a superset of the functionalities presented here (more activation functions, a more extensive analysis, and faster decryption times). The second paper uses arithmetic circuits rather than HE, but actually implements training an entire neural network securely.\", \"minor_details\": \"The problem scenario states that the model/weights is private, but later on it ceases to be so (weights are not encrypted).\\n\\n\\\"Both deep learning and FHE are relatively recent paradigms\\\". Deep learning is certainly not recent, while Gentry's paper is now 7 years old.\\n\\n\\\"In theory, this system alone could be used to compute anything securely.\\\" This is informal and incorrect. Can it solve the halting problem?\\n\\n\\\"However in practice the operations were incredibly slow, taking up to 30 minutes in some cases.\\\" It is unclear what operations are referred to here.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"Thank you for your detailed review!\\n\\nWe chose not to use a garbled circuit approach for our work, because this would reveal at least in part, the structure of the model. Part of our problem scenario is that the server does not wish to reveal the model to the client, and by extension the model\\u2019s structure.\\n\\nWe do make comparisons between our work and Cryptonets, under the reference \\u201cDowlin et al. (2016)\\u201d. It is fair to compare our paper with theirs, since they share a common goal. Their paper discusses three activation functions: sigmoid, ReLU and square. They do not attempt to implement sigmoid and ReLU, and instead use the square activation exclusively. Their paper presents the disadvantages to the square activation, in particular the unbounded derivative, making training difficult and limiting the depth of any model using this approach. It is also one of the most expensive operations in their network, because they must multiply two ciphertexts together. We have updated the \\u201cActivation Functions\\u201d subsection of the design section, to discuss the square activation in more detail.\\n\\nBecause we use binary circuits, our approach can exactly replicate the ReLU activation, and a piecewise linear approximation of sigmoid. We implement both of these, however we did not feel that it was necessary to implement the square activation, because this was used in Cryptonets as a replacement for ReLU, to solve a problem unique to arithmetic circuits.\\n\\nWe did not include decryption times in our results, because they were executing in less than a microsecond. Because our system only requires decryption at the very end of the process, it is a negligible cost compared to overall execution time. We have updated the results section to better clarify why we did not include these measurements.\\n\\nWe did not feel that securely training a neural network, such as with SecureML, would be of benefit for our problem scenario. If a model is securely trained, all weights are restricted to those clients which provided training data, leading to a different scenario where the server hosts the model structure, the client provides the training data, and neither party has access to the weights. If the server chose to give the weights to the client, then the client could reconstruct the model and run it in plaintext, removing the need for the server. \\nSimilarly with \\u201cPrivacy-preserving deep learning\\u201d, their goal is to have multiple parties collaboratively train a model, without revealing their respective training data. This is also leads to a different scenario, where each client has a local model. \\nWe have added a short \\u201cPrivacy-Preserving Model Training\\u201d subsection to the background section, to reference these works and better clarify why we do not consider model training.\\n\\nIt is fair to challenge the novelty of our work. As discussed, there have been a number of works which implement neural networks, and secure client inputs. However we feel that under our problem scenario, where the server does not wish to reveal the model to the client, and the client does not wish to reveal the input to the server, our approach is novel because it permits important functionality that is not present in Cryptonets, and allows the server to keep its model completely private, unlike SecureML and \\u201cPrivacy-preserving deep learning\\u201d. 
We have updated the problem scenario, to hopefully prevent any ambiguity over what our goals were for this work.\", \"to_address_the_minor_details\": \"By \\u201cweight privacy\\u201d, the intended message was that under our problem scenario, the server does not have to reveal the model structure or weights to the client. While they could encrypt their weights in our framework, it would substantially slow down operations as shown in our comparison between hybrid and ciphertext multipliers. We suggest that the weights are unencrypted, but are kept internal to the server. We have updated the problem scenario to more clearly state that the server does not reveal the model or weights to the client, as opposed to the server explicitly securing the weights.\\n\\n\\\"Both deep learning and FHE are relatively recent paradigms\\\". It is reasonable to consider deep learning and fully homomorphic encryption to be old paradigms. We have changed this sentence to \\u201cBoth deep learning and FHE have seen significant advances in the past ten years\\u201d, reflecting that this work is built upon advances in the past decade.\\n\\n\\\"In theory, this system alone could be used to compute anything securely.\\\" Indeed their system would not solve the halting problem! We have changed this to \\u201ccompute any arithmetic circuit\\u201d.\\n\\n\\\"However in practice the operations were incredibly slow, taking up to 30 minutes in some cases.\\\" We were referring to the time needed to run one bootstrapping operation, using an early implementation of Gentry\\u2019s FHE scheme. We have now clarified and referenced this.\"}",
"{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"Thank you for your constructive review!\\n\\nIt is fair to challenge our claims that \\u201cthere has been no complete implementation of established deep learning approaches\\u201d, because there have been some implementations of deep learning models whereby a server can perform inference, including SecureML, Cryptonets [A] and MiniONN [C]. With this in mind, it is important that we clarify our problem scenario. The server does not want to reveal the model to the client, and the client does not want to reveal the input to the server. While all of the given approaches secure the client input, only Cryptonets and our paper secure the model structure from a client, who may wish to reverse-engineer the model. MiniONN proposes obfuscating the model to alleviate this issue, but an implementation of this for arbitrary architectures is not given, would not be trivial, and would increase the number of client-server exchanges. \\n\\nBecause SecureML and MiniONN are related works, we have updated the background section in our paper to discuss these works. We have also updated the problem scenario to more clearly explain what we consider to be a \\u201ccomplete implementation\\u201d.\\n\\nWe agree that for a shallow model using message packing, leveled FHE could be faster than binary FHE, and conversely a leveled FHE would become impractical for sufficiently deep models. \\n\\nIt is also important to note that Cryptonets uses the square activation instead of ReLU, and they present some disadvantages to this approach, in particular the unbounded derivative, making training difficult and limiting model depth. The square activation is also one of the most expensive operations in their network, because two ciphertexts must be multiplied. \\nMiniONN can perform ReLU, but it does not use FHE, leading to other tradeoffs as discussed.\\n\\nWe have updated the end of the results section, to better clarify the comparison between our work and Cryptonets, with regards to model size, depth and efficiency.\\n\\nWe considered our circuits efficient in that they were much faster using a hybrid approach, compared to using only ciphertexts, and also that they allowed for an simpler implementation by abstracting plaintext, ciphertext and hybrid adders into a single unit. We have updated the \\u201cHybrid Homomorphic Encryption\\u201d subsection of the design section, to better clarify that this is why we considered our approach efficient and simple.\\n\\nWe have also updated the abstract, with the intention of toning down our claims, by using the language \\u201cno complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption\\u201d, and \\u201cefficiently implementing many deep learning functions with bootstrapped homomorphic encryption\\u201d. This should more cleanly cover the advantages of our work, compared to related literature, to the best of our knowledge.\"}",
"{\"title\": \"Deep Learning Inferences with Hybrid Homomorphic Encryption\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"Summary:\\nThis paper proposes a framework for private deep learning model inference using FHE schemes that support fast bootstrapping.\\nThe main idea of this paper is that in the two-party computation setting, in which the client's input is encrypted while the server's deep learning model is plain.\\nThis \\\"hybrid\\\" argument enables to reduce the number of necessary bootstrapping, and thus can reduce the computation time.\\nThis paper gives an implementation of adder and multiplier circuits and uses them to implement private model inference.\", \"comments\": \"1. I recommend the authors to tone down their claims. For example, the authors mentioned that \\\"there has been no complete implementation of established deep learning approaches\\\" in the abstract, however, the authors did not define what is \\\"complete\\\". Actually, the SecureML paper in S&P'17 should be able to privately evaluate any neural networks, although at the cost of multi-round information exchanges between the client and server.\\n\\nAlso, the claim that \\\"we show efficient designs\\\" is very thin to me since there are no experimental comparisons between the proposed method and existing works. Actually, the level FHE can be very efficient with a proper use of message packing technique such as [A] and [C]. For a relatively shallow model (as this paper has used), level FHE might be faster than the binary FHE.\\n\\n2. I recommend the author to compare existing adder and multiplier circuits with your circuits to see in what perspective your design is better. I think the hybrid argument (i.e., when one input wire is plain) is a very common trick that used in the circuit design field, such as garbled circuit [B], to reduce the depth of the circuit. \\n\\n3. I appreciate that optimizations such as low-precision and point-wise convolution are discussed in this paper. Such optimizations are very common in deep learning field while less known in the field of security.\\n\\n[A]: Dowlin et al. Cryptonets: Applying neural networks to encrypted data with high throughput and accuracy.\\n[B]: V. Kolesnikov et al. Improved garbled circuit: free xor gates and applications. \\n[C]: Liu et al. Oblivious Neural Network Predictions via MiniONN transformations.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"This paper proposes a hybrid Homomorphic encryption system that is well suited for privacy-sensitive data inference applications with the deep learning paradigm. The research methodology is well organized, its rationale well explained and supports the stated problem resolution, the obtained results are interesting and the paper is well written (commendable).\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes a hybrid Homomorphic encryption system that is well suited for privacy-sensitive data inference applications with the deep learning paradigm.\\nThe paper presents a well laid research methodology that shows a good decomposition of the problem at hand and the approach foreseen to solve it. It is well reflected in the paper and most importantly the rationale for the implementation decisions taken is always clear.\\n\\nThe results obtained (as compared to FHEW) seem to indicate well thought off decisions taken to optimize the different gates' operations as clearly explained in the paper. For example, reducing bootstrapping operations by two-complementing both the plaintext and the ciphertext, whenever the number of 1s in the plain bit-string is greater than the number of 0s (3.4/Page 6).\\n\\nResult interpretation is coherent with the approach and data used and shows a good understanding of the implications of the implementation decisions made in the system and the data sets used.\\nOverall, fine work, well organized, decomposed, and its rationale clearly explained. The good results obtained support the design decisions made.\\nOur main concern is regarding thorough comparison to similar work and provision of comparative work assessment to support novelty claims.\", \"nota\": [\"In Figure 4/Page 4: AND Table A(1)/B(0), shouldn't A And B be 0?\", \"Unlike Figure 3/Page 3, in Figure 2/page 2, shouldn't operations' precedence prevail (No brackets), therefore 1+2*2=5?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Changes made to paper\", \"comment\": \"This comment provides a summary of all changes we have made to the paper.\\n\\nWe have fixed the AND table in Figure 4.\\n\\nWe have added brackets to Figure 2, and updated Figure 3 to show the Homomorphic Encryption process more clearly.\\n\\nWe have updated the \\u201cActivation Functions\\u201d subsection of the design section, to discuss the square activation in more detail.\\n\\nWe have updated the results section to better clarify why we did not include decryption timings.\\n\\nWe have updated the problem scenario to reduce ambiguity, to more clearly state that the server does not reveal the model or weights to the client (as opposed to the server explicitly securing the weights), and to more clearly explain what we consider to be a \\u201ccomplete implementation\\u201d.\\n\\nWe have added a short \\u201cPrivacy-Preserving Model Training\\u201d subsection to the background section, to reference some related works, and better clarify why we do not consider model training.\\n\\nWe have added a short \\u201cPrivacy-Preserving Deep Learning\\u201d subsection to the background section, to reference some works which do not use homomorphic encryption, and the trade-offs which result from this.\\n\\nWe have changed \\\"Both deep learning and FHE are relatively recent paradigms\\\" to \\u201cBoth deep learning and FHE have seen significant advances in the past ten years\\u201d, reflecting that this work is built upon advances in the past decade.\\n\\nFor the sentence \\\"In theory, this system alone could be used to compute anything securely.\\\" We have changed the end to \\u201ccompute any arithmetic circuit\\u201d, better reflecting what Gentry's cryptosystem does.\\n\\nFor the sentence \\\"However in practice the operations were incredibly slow, taking up to 30 minutes in some cases.\\\" We have clarified that this was for the bootstrapping operation in an implementation of Gentry's cryptosystem, and added a reference.\\n\\nWe have updated the end of the results section, to better clarify the comparison between our work and Cryptonets, with regards to model size, depth and efficiency.\\n\\nWe have updated the \\u201cHybrid Homomorphic Encryption\\u201d subsection of the design section, to explain that this hybrid approach is why we consider our approach to be efficient and simple.\\n\\nWe have also updated the abstract, using the language \\u201cno complete implementation of common deep learning operations, for arbitrary model depths, using homomorphic encryption\\u201d, and \\u201cefficiently implementing many deep learning functions with bootstrapped homomorphic encryption\\u201d. This should more cleanly cover the advantages of our work, compared to related literature, to the best of our knowledge.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"While the reviewers all seem to think this is interesting and basically good work, the Reviewers are consistent and unanimous in rejecting the paper.\\nWhile the authors did provide a thorough rebuttal, the original paper did not meet the criteria and the reviewers have not changed their scores.\"}",
"{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"Thank you for your positive comments!\\n\\nTo clarify, our work can extend both TFHE, FHEW, or any system implementing Fully Homomorphic Encryption over binary. As part of our results, we compare TFHE and FHEW, to help show that advances in this field will continue to benefit our work directly, since we can support any new system with minimal effort. We have updated the start of the design section in our paper, to make this statement more carefully.\", \"to_address_the_notes\": \"We have fixed the AND table in Figure 4, thank you for pointing this out.\\n\\nFor Figure 2, we meant to show the operations getting applied in a linear order, but indeed 1+2*2=5. We have added brackets to Figure 2, and updated Figure 3 to show the process more clearly. Thank you again for finding this.\"}"
]
} |
HyrCWeWCb | Trust-PCL: An Off-Policy Trust Region Method for Continuous Control | [
"Ofir Nachum",
"Mohammad Norouzi",
"Kelvin Xu",
"Dale Schuurmans"
] | Trust region methods, such as TRPO, are often used to stabilize policy optimization algorithms in reinforcement learning (RL). While current trust region strategies are effective for continuous control, they typically require a large amount of on-policy interaction with the environment. To address this problem, we propose an off-policy trust region method, Trust-PCL, which exploits an observation that the optimal policy and state values of a maximum reward objective with a relative-entropy regularizer satisfy a set of multi-step pathwise consistencies along any path. The introduction of relative entropy regularization allows Trust-PCL to maintain optimization stability while exploiting off-policy data to improve sample efficiency. When evaluated on a number of continuous control tasks, Trust-PCL significantly improves the solution quality and sample efficiency of TRPO. | [
"Reinforcement learning"
] | Accept (Poster) | https://openreview.net/pdf?id=HyrCWeWCb | https://openreview.net/forum?id=HyrCWeWCb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"H1ccXfmeG",
"ByDPYkUxG",
"Hk772U6XM",
"B1tQ10rVG",
"S1gBXyaSM",
"HkF_6L6Qz",
"BJ--aUT7M",
"rJ-4JL_Vf",
"H11zfWQZf"
],
"note_type": [
"official_review",
"official_review",
"official_comment",
"official_comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1511363474362,
1511549278763,
1515183131269,
1515736864737,
1517249335648,
1515183473179,
1515183353413,
1515900713157,
1512407559284
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper558/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper558/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper558/Authors"
],
[
"ICLR.cc/2018/Conference/Paper558/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper558/Authors"
],
[
"ICLR.cc/2018/Conference/Paper558/Authors"
],
[
"ICLR.cc/2018/Conference/Paper558/Authors"
],
[
"ICLR.cc/2018/Conference/Paper558/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"It might be useful but looks like an incremental work. The technical presentation is not quite clear.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper extends softmax consistency by adding in a relative entropy term to the entropy regularization and applying trust region policy optimization instead of gradient descent. I am not an expert in this area. It is hard to judge the significance of this extension.\\n\\nThe paper largely follows the work of Nachum et al 2017. The differences (i.e., the claimed novelty) from that work are the relative entropy and trust region method for training. However, the relative entropy term added seems like a marginal modification. Authors claimed that it satisfies the multi-step path consistency but the derivation is missing.\\n\\nI am a bit confused about the way trust region method is used in the paper. Initially, problem is written as a constrained optimization problem (12). It is then converted into a penalty form for softmax consistency. Finally, the Lagrange parameter is estimated from the trust region method. In addition, how do you get the Lagrange parameter from epsilon?\\n\\nThe pseudo code of the algorithm is missing. It would be much clearer if a detailed description of the algorithmic procedure is given.\\n\\nHow is the performance of Trust-PCL compared to PCL?\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Good paper\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Clarity\\nThe paper is well-written and clear. \\n\\nOriginality\\nThe paper proposes a path consistency learning method with a new combination of entropy regularization and relative entropy. The paper leverages a novel method in determining the coefficient of relative entropy. \\n\\nSignificance\\n- Trust-PCL achieves overall competitive with state-of-the-art external implementations.\\n- Trust-PCL (off-policy) significantly outperform TRPO in terms of data efficiency and final performance. \\n- Even though the paper claims Trust-PCL (on-policy) is close to TRPO, the initial performance of TRPO looks better in HalfCheetah, Hopper, Walker2d and Ant. \\n- Some ablation studies (e.g., on entropy regularization and relative entropy) and sensitivity analysis on parameters (e.g. \\\\alpha and update frequency on \\\\phi) would be helpful.\", \"pros\": [\"The paper is well-written and clear.\", \"Competitive with state-of-the-art external implementations\", \"Significant empirical advantage over TRPO.\", \"Open source codes.\"], \"cons\": [\"No ablation studies.\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for carefully reading the details of the paper; we greatly appreciate it.\", \"r1\": \"\\\"Some ablation studies (e.g., on entropy regularization and relative entropy) and sensitivity analysis on parameters (e.g. \\\\alpha and update frequency on \\\\phi) would be helpful.\\\"\\nSection 5.2.1 of the the paper shows the effect of changing \\\\epsilon on the performance. As discussed in Section 4.3, the value of \\\\epsilon directly determines \\\\lambda, the coefficient of relative entropy. The main contribution of the paper is stabilizing off-policy training via a suitable trust region constraint and hence, \\\\epsilon and \\\\lambda are the key hyper-parameters. However, we have expanded Section 5.2.1 to include anecdotal experience regarding the values of \\\\tau and the degree of off/on-policy (determined by \\\\beta, \\\\alpha, P).\"}",
"{\"title\": \"Improved. But still not good enough.\", \"comment\": \"The revised paper has made improvements. I thus raise my score a bit. However there are still some issues:\\n\\n\\\"Our paper does not present a policy gradient method\\\" <- This is obviously untrue.\\n\\n\\\"Unfortunately, TRPO is restricted to the use of on-policy data\\\" <- There is no such restrictions. Actually, the proposed Trust-PCL does NOT deal with off-policy data, but only that the authors believe that it can handle off-policy data and thus feed it with such data. This is the same for TRPO. Off-policy data can also be fed to TRPO to see how well TRPO works, which is a must baseline.\\n\\n\\\"We included 6 standard benchmarks in the paper\\\" <- They are the simplest ones. \\n\\nThe major concern is the unclear relationship between the methodology of entropy regularization and entropy constraint and the goal of off-policy learning.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper adapts (Nachum et al 2017) to continuous control via TRPO. The work is incremental (not in the dirty sense of the word popular amongst researchers, but rather in the sense of \\\"building atop a closely related work\\\"), nontrivial, and shows empirical promise. The reviewers would like more exploration of the sensitivity of the hyper-parameters.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"Response\", \"comment\": \"R3: \\\"The paper largely follows the work of Nachum et al 2017. The differences (i.e., the claimed novelty) from that work are the relative entropy and trust region method for training. However, the relative entropy term added seems like a marginal modification.\\\"\\n\\nThe extension of the work of Nachum et al. by including relative entropy is novel and significant because it enables applying softmax consistency to difficult continuous control tasks. Nachum et al (2017) only evaluated PCL on simple discrete control tasks, and without including the additional trust region term, we were not able to obtain promising results. Our results achieve state-of-the-art in continuous control by substantially outperforming TRPO. Other than the introduction of relative entropy as an implicit trust region constraint, the technique described in Section 4.3 is novel and plays a key role in the success of Trust-PCL.\", \"r3\": \"\\\"How is the performance of Trust-PCL compared to PCL?\\u201d\\n\\nPCL is equivalent to Trust-PCL with \\\\epsilon = infinity or \\\\lambda = 0. Section 5.2.1 shows the effect of different values of \\\\epsilon on the results of Trust-PCL. It is clear that as \\\\epsilon increases, the solution quality of Trust-PCL quickly degrades. We found that PCL (corresponding to an even larger \\\\epsilon) is largely ineffective on the difficult continuous control tasks considered in the paper. This shows the significance of the new technique over the original PCL.\"}",
"{\"title\": \"Response\", \"comment\": \"R2: \\\"This paper presents a policy gradient method that employs entropy regularization and entropy constraint at the same time\\u2026 \\\"\\n\\nOur paper does not present a policy gradient method. Rather, we show that the optimal policy for an expected reward objective regularized with entropy and relative entropy satisfies a set of path-wise consistencies. Then, we propose an off-policy algorithm to implicitly train towards this objective.\", \"r2\": \"\\\"Thus it would be more interesting to see experiments on more complex benchmark problems like humanoids.\\\"\\n\\nWe included 6 standard benchmarks in the paper, including: Acrobot, Half Cheetah, Swimmer, Hopper, Walker2D, and Ant. On all of the environments our Trust-PCL (off-policy) algorithm outperforms TRPO in both final reward and sample efficiency. We believe these experiments are enough to demonstrate the promise of the approach.\"}",
"{\"title\": \"Clarifications\", \"comment\": \"These comments continue to reveal some fundamental misunderstandings we should clarify.\", \"r2\": [\"\\\"We included 6 standard benchmarks in the paper\\\" <- They are the simplest ones.\", \"The tasks we included cover a wide range of difficulties. The results show significant advantages on harder tasks such as Hopper, Walker2d, and Ant. These tasks are by no means \\u201csimple\\u201d as can be deduced by comparing our results to those in other papers, including many submitted to this year\\u2019s ICLR:\", \"-- https://openreview.net/forum?id=H1tSsb-AW\", \"-- https://openreview.net/forum?id=BkUp6GZRW\", \"-- https://openreview.net/forum?id=HJjvxl-Cb\", \"-- https://openreview.net/forum?id=B1nLkl-0Z\"]}",
"{\"title\": \"Technique is not clear. Contribution is more like incremental. Experiments are insufficient.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents a policy gradient method that employs entropy regularization and entropy constraint at the same time. The entropy regularization on action probability is to encourage the exploration of the policy, while the entropy constraint is to stabilize the gradient.\\n\\nThe major weakness of this paper is the unclear presentation. For example, the algorithm is never fully described, though a handful variants are discussed. How the off-policy version is implemented is missing.\\n\\nIn experiments, why the off-policy version of TRPO is not compared. Comparing the on-policy results, PCL does not show a significant advantage over TRPO. Moreover, the curves of TRPO is so unstable, which is a bit uncommon. \\n\\nWhat is the exploration strategy in the experiments? I guess it was softmax probability. However, in many cases, softmax does not perform a good exploration, even if the entropy regularization is added.\\n\\nAnother issue is the discussion of the entropy regularization in the objective function. This regularization, while helping exploration, do changes the original objective. When a policy is required to pass through a very narrow tunnel of states, the regularization that forces a wide action distribution could not have a good performance. Thus it would be more interesting to see experiments on more complex benchmark problems like humanoids.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
SJvu-GW0b | Graph2Seq: Scalable Learning Dynamics for Graphs | [
"Shaileshh Bojja Venkatakrishnan",
"Mohammad Alizadeh",
"Pramod Viswanath"
] | Neural networks are increasingly used as a general purpose approach to learning algorithms over graph structured data. However, techniques for representing graphs as real-valued vectors are still in their infancy. Recent works have proposed several approaches (e.g., graph convolutional networks), but as we show in this paper, these methods have difficulty generalizing to large graphs. In this paper we propose Graph2Seq, an embedding framework that represents graphs as an infinite time-series. By not limiting the representation to a fixed dimension, Graph2Seq naturally scales to graphs of arbitrary size. Moreover, through analysis of a formal computational model we show that an unbounded sequence is necessary for scalability. Graph2Seq is also reversible, allowing full recovery of the graph structure from the sequence. Experimental evaluations of Graph2Seq on a variety of combinatorial optimization problems show strong generalization and strict improvement over state of the art. | [
"scalable",
"dynamics",
"graphs",
"graphs neural networks",
"general purpose",
"algorithms",
"graph",
"data"
] | Reject | https://openreview.net/pdf?id=SJvu-GW0b | https://openreview.net/forum?id=SJvu-GW0b | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"SJ67IVFmG",
"HyNr86Ylz",
"HkijNkprf",
"SydlD4FQz",
"SJl9hMKXG",
"SkD9M_NZf",
"H1n6BG_HG",
"SyhZ-YerG",
"HklBwVKmG",
"Hy9gZ2CxM"
],
"note_type": [
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review"
],
"note_created": [
1514911269434,
1511802428219,
1517249699341,
1514911472094,
1514904711755,
1512501902737,
1516934595706,
1516437763987,
1514911544200,
1512124658524
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper777/Authors"
],
[
"ICLR.cc/2018/Conference/Paper777/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper777/Authors"
],
[
"ICLR.cc/2018/Conference/Paper777/Authors"
],
[
"ICLR.cc/2018/Conference/Paper777/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper777/Authors"
],
[
"ICLR.cc/2018/Conference/Paper777/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper777/Authors"
],
[
"ICLR.cc/2018/Conference/Paper777/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"On graph size\", \"comment\": \"We have conducted experiments on graphs of size up to 3200, and will include in our revision. Graph2Seq\\u2019s performance trend continues to hold at this size. We also tried larger graph sizes, but due to the large number of edges we ran into computational and memory issues (25k and 100k size graphs, which have 46 million and 4 billion edges respectively). Even doing greedy algorithms at this scale is computationally hard. As mentioned previously, our test graphs are not sparse and the current test graphs contain a large number of edges (hundreds of thousands to a million). We also reiterate that our training is on graphs of size 15, illustrating a generalization over a factor of 200. Evaluations for maximum independent set and max cut functions have been included in the appendix.\"}",
"{\"title\": \"The motivation for this paper is unclear. The writing is problematic and evaluations are not sufficient.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes to represent nodes in graphs by time series. This is an interesting idea but the results presented in the paper are very preliminary.\\nExperiments are only conducted on synthetic data with very small sizes.\\nIn Section 5.1, I did not understand the construction of the graph. What means 'all the vertices are disjoint'? Then I do not understand why the vertices of G_i form the optimum.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The reviewers agree that the problem being studied is important and relevant but express serious concerns. I recommend the authors to carefully go through the reviews and significantly scale up their experiments.\"}",
"{\"title\": \"Sequence lengths\", \"comment\": \"We first note that recovering the graph topology from the time-series is not the primary objective of Graph2Seq (we already have the graph as our input, there is no need to recover it). The main goal of Graph2Seq is to provide a representation framework for learning tasks (e.g., classification, optimization), over graphs that are not fixed.\\n\\nSupposing we have a candidate neural network framework (such as Graph2Seq) that can take in arbitrary sized graphs as input, and produce an output. Knowing whether such a framework could work well on graphs of any size is unfortunately a difficult question to answer. In this context, we have included Theorem 1 as a strong conceptual evidence towards the scalability of Graph2Seq. The fact that the entire graph topology can be recovered from the Graph2Seq representation (even if we ignore sample complexity and computation issues) suggests the time-series has enough information to recover the graph in principle. \\n\\nIndeed, there are many ways in which one could represent a graph as a sequence (with potentially shorter sequences). However, the issue with methods involving the adjacency matrix is they require a prior labelling of the graph nodes (to identify the individual rows and columns of the matrix), and it is not clear how to incorporate such labels into the neural network. This is perhaps why the adjacency matrix is itself not used as a representation in the first place, and methods like GCNN are necessary. What we are seeking is a label-free representation.\"}",
"{\"title\": \"Response to reviewers\", \"comment\": \"We thank the reviewers for the helpful comments. Please find our response to the issues raised below.\", \"on_motivation\": \"We are rather puzzled by the comment that the motivations are unclear. Using neural networks for graph structured data is a fast-emerging field and is of topical interest (massive attendance in a recent NIPS workshop on Non-Euclidean deep learning https://nips.cc/Conferences/2017/Schedule?showEvent=8735 serves to illustrate). Our paper directly addresses one of the key open problems in the area: how to design neural networks for graphs that can scale to graph inputs of arbitrary sizes and shapes.\", \"such_a_scalable_solution_may_be_required_for_a_variety_of_reasons\": \"(1) directly training on large instances may not be possible; (2) application specific training can be avoided, and trained algorithms can be used in variety of settings; or (3) a scalable algorithm may be easier to analyze, reason about and can potentially inspire advances in theory CS. Indeed, traditionally algorithms in CS have usually been of this flavor. However, to our best awareness, such an analog in deep learning for graphs has been critically missing.\\n\\nThe combinatorial optimization problems we have used in our evaluations (vertex cover, max cut, max independent set) are also interesting and many recent works (e.g. Bello et al \\u201917, Vinyals et al \\u201915, Dai et al \\u201817) have considered these problems. Moreover, input instances in these problems capture the very essence of what makes representing signals over non-fixed graphs challenging: (i) the input graphs could have arbitrary topology, and (ii) the input graphs could have arbitrary size. The simplicity of these problems (in terms of vertex/edge features) allow us to focus on directly addressing these two scalability issues without worrying about dependencies arising from high-dimensional node/edge attributes.\", \"on_evaluations\": \"We have evaluated graphs of size up to 3200 and will include in our revision. Our test graphs are not sparse, and contain a large number of edges: e.g., a 3200 node Erdos-Renyi graph has 700,000 edges; a 3200 node random bipartite graph has 1.9 million edges. These graph sizes are consistent and well-above the sizes used in the neural networks combinatorial optimization literature (e.g., Learning combinatorial optimization algorithms over graphs, Dai et al, NIPS \\u201917 (up to 1200 nodes); Neural combinatorial optimization with reinforcement learning, Bello et al, \\u201917 (100 nodes); Pointer networks, Vinyals et al, NIPS \\u201915 (50 nodes)). Compared to the recent NIPS spotlight paper by Dai et al (which focuses on similar combinatorial problems), our results illustrate significant generalizations both in graph topology, and graph size.\\n\\nThe space of problems where the graph instances are not fixed is vast, and finding scalable learning representations for these applications remains a grand challenge. To our knowledge, this is also a longer-term project and a one-size-fits-all approach that solves all of those applications may not be possible. In this regard, our work presents an important first-step of recognizing, formalizing and understanding the key challenges involved, and also proposes a promising solution that directly addresses the key issues.\"}",
"{\"title\": \"Unclear motivation and experiments\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper proposes GRAPH2SEQ that represents graphs as infinite time-series of vectors, one for\\neach vertex of the graph and in an invertible representation of a graph. By not having the restriction of representation to a fixed dimension, the authors claims their proposed method is much more scalable. They also define a formal computational model, called LOCAL-Gather that includes GRAPH2SEQ and other classes of GCNN representations, and show that GRAPH2SEQ is capable of computing certain graph functions that fixed-depth GCNNs cannot. They experiment on graphs of size at most 800 nodes to discover minimum vertex cover and show that their method perform much better than GCNNs but is comparable with greedy heuristics for minimum vertex cover.\\n\\nI find the experiments to be hugely disappointing. Claiming that this particular representation helps in scalability and then doing experiment on graphs of extremely small size does not reflect well. It would have been much more desirable if the authors had conducted experiments on large graphs and compare the results with greedy heuristics. Also, the authors need to consider other functions, not only minimum vertex cover. In general, lack of substantial experiments makes it difficult to appreciate the novelty of the work. I am not at all sure, if this representation is indeed useful for graph optimization problems practically.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"our response\", \"comment\": \"(1) Reg. length of sequence:\\n\\nSequence length depends on the graph, and in the worst case is exponential in # of edges. In any case, this is irrelevant to our (empirical) results. In fact, the theoretical fact of exponential dependence in # of edges only makes our empirical results more impressive; we only need to use sequence lengths roughly equal to the diameter of the graph to get our numerical results, which are favorably competitive with the best algorithms (not just prior neural network methods) for the problems studied.\\n\\n(2) Reg. the general remark:\\n\\nThe remark holds true only under deterministic sequence generation. \\n\\nUnder deterministic initialization and evolution, the sequence cannot be used to distinguish even non-isomorphic graphs, as we have showed in the proof of Proposition 1 in the paper. This is a clear limitation of deterministic sequence generation. We point this out in Section 3.1.\\n\\nHowever, if the evolution is random (by adding a random node label or noise), then the sequences are no longer identical even for isomorphic graphs, and as such cannot be used as a test for isomorphism.\"}",
"{\"title\": \"response of reviewer\", \"comment\": \"I read the response and I do not feel I should change my review since mostly my concerns remain.\\n\\nThe authors did not acknowledge that their sequence representation can be exponential length, or if I am mistaken ?\\n\\nAs a general remark, if you could map a graph into a poly-size sequence that is invariant to labeling of the graph nodes and this sequence is invertible (i.e you can use it to reconstruct the graph) then you have solved graph isomorphism. \\nThis is because two graphs would be isomorphic iff their sequences are identical.\"}",
"{\"title\": \"Graph construction\", \"comment\": \"Vertices are disjoint means the vertices do not have any edge between them. Since the vertices of G_o do not have any edge between themselves, selecting the vertices of G_i as a cover will ensure every edge of the graph is covered.\"}",
"{\"title\": \"Interesting idea but limited validation. Also sample complexity may be exponential in graph degree.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper proposes a novel way of embedding graph structure into a sequence that can have an unbounded length.\\n\\nThere has been a significant amount of prior work (e.g. d graph convolutional neural networks) for signals supported on a specific graph. This paper on the contrary tries to encode the topology of a graph using a dynamical system created by the graph and randomization. \\n\\nThe main theorem is that the created dynamical system can be used to reverse engineer the graph topology for any digraph. \\nAs far as I understood, the authors are doing essentially reverse directed graphical model learning. In classical learning of directed graphical models (or causal DAGs) one wants to learn the structure of a graph from observed data created by this graph inducing conditional independencies on data. This procedure is creating a dynamical system that (following very closely previous work) estimates conditional directed information for every pair of vertices u,v and can find if an edge is present from the observed trajectory. \\nThe recovery algorithm is essentially previous work (but the application to graph recovery is new).\", \"the_authors_state\": \"``Estimating conditional directed information efficiently from samples is itself an active area of research Quinn et al. (2011), but simple plug-in estimators with a standard kernel density estimator will be consistent.''\\n\\nOne thing that is missing here is that the number of samples needed could be exponential in the degrees of the graph. Therefore, it is not clear at all that high-dimensional densities or directed information can be estimated from a number of samples that is polynomial in the dimension (e.g. graph degree).\\n\\nThis is related to the second limitation, that there is no sample complexity bounds presented only an asymptotic statement. \\n\\nOne remark is that there are many ways to represent a finite graph with a sequence that can be decoded back to the graph (and of course if there is no bound on the graph size, there will be no bound on the size of the sequence). For example, one could take the adjacency matrix and sequentially write down one row after the other (perhaps using a special symbol to indicate 'next row'). Many other simple methods can be obtained also, with a size of sequence being polynomial (in fact linear) in the size of the graph. I understand that such trivial representations might not work well with RNNs but they would satisfy stronger versions of Theorem 1 with optimal size. \\nOn the contrary it was not clear how the proposed sequence will scale in the graph size. \\n\\n\\nAnother remark is that it seems that GCNN and this paper solve different problems. \\nGCNNs want to represent graph-supported signals (on a fixed graph) while this paper tries to represent the topology of a graph, which seems different. \\n\\n\\nThe experimental evaluation was somewhat limited and that is the biggest problem from a practical standpoint. It is not clear why one would want to use these sequences for solving MVC. There are several graph classification tasks that try to use the graph structure (as well as possibly other features) see eg the bioinformatics \\nand other applications. Literature includes for example:\\nGraph Kernels by S.V.N. Vishwanathan et al. 
\\nDeep graph kernels (Yanardag & Vishwanathan and graph invariant kernels (Orsini et al.),\\nwhich use counts of small substructures as features. \\n\\nThe are many benchmarks of graph classification tasks where the proposed representation could be useful but significantly more validation work would be needed to make that case.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
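The review above remarks that trivial, label-dependent sequence encodings of a graph exist with length linear in the number of adjacency-matrix entries. A minimal sketch of such an encoding and its inverse follows, assuming a fixed node ordering; the function names are ours, not from the paper.

```python
import numpy as np

def adjacency_to_sequence(adj, row_sep=-1):
    """Flatten a 0/1 adjacency matrix into a token sequence,
    inserting a separator token after each row. Exactly invertible,
    but it presupposes a prior labelling (ordering) of the nodes,
    which is what the authors' rebuttal argues makes it unsuitable."""
    tokens = []
    for row in np.asarray(adj):
        tokens.extend(int(x) for x in row)
        tokens.append(row_sep)
    return tokens

def sequence_to_adjacency(tokens, row_sep=-1):
    """Invert adjacency_to_sequence."""
    rows, current = [], []
    for t in tokens:
        if t == row_sep:
            rows.append(current)
            current = []
        else:
            current.append(t)
    return np.array(rows)
```

Note that relabelling the nodes yields a different sequence for the same graph, which is exactly the label dependence the authors object to when arguing for a label-free representation.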
HyXBcYg0b | Residual Gated Graph ConvNets | [
"Xavier Bresson",
"Thomas Laurent"
] | Graph-structured data such as social networks, functional brain networks, gene regulatory networks, and communications networks have motivated interest in generalizing deep learning techniques to graph domains. In this paper, we are interested in designing neural networks for graphs of variable size in order to solve learning problems such as vertex classification, graph classification, graph regression, and graph generative tasks. Most existing works have focused on recurrent neural networks (RNNs) to learn meaningful representations of graphs, and more recently new convolutional neural networks (ConvNets) have been introduced. In this work, we rigorously compare these two fundamental families of architectures on graph learning tasks. We review existing graph RNN and ConvNet architectures, and propose natural extensions of LSTMs and ConvNets to graphs of arbitrary size. Then, we design a set of analytically controlled experiments on two basic graph problems, i.e. subgraph matching and graph clustering, to test the different architectures. Numerical results show that the proposed graph ConvNets are 3-17% more accurate and 1.5-4x faster than graph RNNs. Graph ConvNets are also 36% more accurate than variational (non-learning) techniques. Finally, the most effective graph ConvNet architecture uses gated edges and residuality. Residuality plays an essential role in learning multi-layer architectures, providing a 10% gain in performance. | [
"graph neural networks",
"ConvNets",
"RNNs",
"pattern matching",
"semi-supervised clustering"
] | Reject | https://openreview.net/pdf?id=HyXBcYg0b | https://openreview.net/forum?id=HyXBcYg0b | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"r1-tSfqeG",
"HycHDznMM",
"Sy7QPPYxM",
"ryFFX8Yxf",
"rJGLH1arM",
"HyshLz3fz",
"rJ_yvM3fz",
"SyffDMnff",
"BJWd98x4z"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1511822713456,
1514051393721,
1511778075312,
1511773057124,
1517249866429,
1514051251143,
1514051296082,
1514051337845,
1515379305480
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper340/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper340/Authors"
],
[
"ICLR.cc/2018/Conference/Paper340/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper340/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper340/Authors"
],
[
"ICLR.cc/2018/Conference/Paper340/Authors"
],
[
"ICLR.cc/2018/Conference/Paper340/Authors"
],
[
"ICLR.cc/2018/Conference/Paper340/Authors"
]
],
"structured_content_str": [
"{\"title\": \"the relation between the two proposed models is not very clear\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposes a new neural network model for learning graphs with arbitrary length, by extending previous models such as graph LSTM (Liang 2016), and graph ConvNets. There are several recent studies dealing with similar topics, using recurrent and/or convolutional architecture. The Related work part of this paper makes a good description of both topics.\\n\\nI would expect the paper elaborate more (at least in a more explicit way) about the relationship between the two models (the proposed graph LSTM and the proposed Gated Graph ConvNets). The authors claim that the innovative of the graph Residual ConvNets architecture, but experiments and the model section do not clearly explain the merits of Gated Graph ConvNets over Graph LSTM. The presentation may raise some misunderstanding. A thorough analysis or explanation of the reasons why the ConvNet-like architecture is better than the RNN-like architecture would be interesting. \\n\\nIn the section of experiments, they compare 5 different methods on two graph mining tasks. These two proposed neural network models seem performing well empirically. \\n\\nIn my opinion, the two different graph neural network models are both suitable for learning graphs with arbitrary length, \\nand both models worth future stuies for speicific problems.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Authors feedback\", \"comment\": \"We are thankful to the reviewer for her/his comments and time. We hope our answers will clarify the importance of this work and the referee will be inclined to improve her/his evaluation score.\", \"q\": \"Minor comments\", \"a\": \"Thank you. We revised the paper accordingly.\"}",
"{\"title\": \"Residual Gated Graph ConvNets\", \"rating\": \"3: Clear rejection\", \"review\": \"The paper proposes an adaptation of existing Graph ConvNets and evaluates this formulation on a several existing benchmarks of the graph neural network community. In particular, a tree structured LSTM is taken and modified. The authors describe this as adapting it to general graphs, stacking, followed by adding edge gates and residuality.\\n\\nMy biggest concern is novelty, as the modifications are minor. In particular, the formulation can be seen in a different way. As I see it, instead of adapting Tree LSTMs to arbitary graphs, it can be seen as taking the original formulation by Scarselli and replacing the RNN by a gated version, i.e. adding the known LSTM gates (input, output, forget gate). This is a minor modification. Adding stacking and residuality are now standard operations in deep learning, and edge-gates have also already been introduced in the literature, as described in the paper.\\n\\nA second concern is the presentation of the paper, which can be confusing at some points. A major example is the mathematical description of the methods. When reading the description as given, one should actually infer that Graph ConvNets and Graph RNNs are the same thing, which can be seen by the fact that equations (1) and (6) are equivalent.\\n\\nAnother example, after (2), the important point to raise is the difference to classical (sequential) RNNs, namely the fact that the dependence graph of the model is not a DAG anymore, which introduces cyclic dependencies. \\n\\nGenerally, a clear introduction of the problem is also missing. What are the inputs, what are the outputs, what kind of problems should be solved? The update equations for the hidden states are given for all models, but how is the output calculated given the hidden states from variable numbers of nodes of an irregular graph?\\n\\nThe model has been evaluated on standard datasets with a performance, which seems to be on par, or a slight edge, which could probably be due to the newly introduced residuality.\", \"a_couple_of_details\": [\"the length of a graph is not defined. The size of the set of nodes might be meant.\", \"at the beginning of section 2.1 I do not understand the reference to word prediction and natural language processing. RNNs are not restricted to NLP and I think there is no need to introduce an application at this point.\", \"It is unclear what does the following sentence means: \\\"ConvNets are more pruned to deep networks than RNNs\\\"?\", \"What are \\\"heterogeneous graph domains\\\"?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting approach that should be better presented\", \"rating\": \"7: Good paper, accept\", \"review\": \"The authors revised the paper according to all reviewers suggestions, I am satisfied with the current version.\", \"summary\": \"this works proposes to employ recurrent gated convnets to solve graph node labeling problems on arbitrary graphs. It build upon several previous works, successively introducing convolutional networks, gated edges convnets on graphs, and LSTMs on trees. The authors extend the tree LSTMs formulation to perform graph labeling on arbitrary graphs, merge convnets with residual connections and edge gating mechanisms. They apply the 2 proposed models to 3 baselines also based on graph neural networks on two problems: sub-graph matching (expressing the problem of sub-graph matching as a node classification problem), and semi supervised clustering.\", \"main_comments\": \"It would strengthen the paper to also compare all these network learning based approaches to variational ones. For instance, to a spectral clustering method for the semi supervised clustering, or\", \"solving_the_combinatorial_dirichlet_problem_as_in_grady\": [\"random walks for image segmentation, 2006.\", \"The abstract and the conclusion should be revised, they are very vague.\", \"The abstract should be self contained and should not contain citations.\", \"The authors should clarify which problem they are dealing with.\", \"instead of the \\\"numerical result show the performance of the new model\\\", give some numerical results here, otherwise, this sentence is useless.\", \"we propose ... as propose -> unclear: what do you propose?\"], \"minor_comments\": [\"You should make sentences when using references with the author names format. Example: ... graph theory, Chung (1997) -> graph theory by Chung (1997)\", \"As Eq 2 -> As the minimization of Eq 2 (same with eq 4)\", \"Don't start sentences with And, or But\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The authors make an experimental study of the relative merits of RNN-type approaches and graph-neural-network approaches to solving node-labeling problems on graphs. They discuss various improvements in gnn constructions, such as residual connections.\\n\\nThis is a borderline paper. On one hand, the reviewers feel that there is a place for this kind of empirical study, but on the other, there is agreement amongst the reviewers that the paper is not as well written as it could be. Furthermore, some reviewers are worried about the degree of novelty (of adding residual connections to X).\\n\\nI will recommend rejection, but urge the authors to clarify the writing and expand on the empirical study and resubmit.\"}",
"{\"title\": \"Author feedback\", \"comment\": \"We thank the reviewer for her/his time and valuable comments. We hope to clarify any misunderstanding below and show the importance of this work in the field of deep learning on graphs.\", \"q\": \"Analysis why the ConvNet-like architecture is better\", \"a\": \"We do agree such analysis would be important and we would like to carry it out in a future work. However, it is a challenging analysis as the data domain does not hold a nice mathematical structure like Euclidean lattices for images. This will require time and new analysis tools to develop such theory (given also that the standard theory for regular grids/images is still open).\\nIn the meantime, we hope the reviewer considers the rigorous numerical experiments - two fundamental graph experiments with controlled analytical settings (stochastic block models for the graph distributions) that offer a clear conclusion about graph ConvNets, which can be leveraged to build better NNs in the fast-emerging domain of deep learning on graphs.\"}",
"{\"title\": \"Authors feedback 1\", \"comment\": \"We thank the reviewer for her/his time and comments. We provide below specific answers to the questions. We hope the reviewer will update positively her/his decision in view of our answers.\", \"q\": \"An introduction of the problem is missing. What kind of problems should be solved? What are the inputs, the outputs?\", \"a\": \"The general problem we want to solve is learning meaningful representations of graphs with variable length using either ConvNet or RNN architectures. These graph representations can be applied to different tasks such as vertex classification (for graph matching and clustering in this work) and also graph classification, graph regression, graph visualization, and graph generative model.\\nIn this work, inputs are graphs with variable size and outputs are vertex classification vectors of input graphs. We added this answer in the paper.\"}",
"{\"title\": \"Authors feedback 2\", \"comment\": \"Q: Taking original Scarselli and replacing the RNN by LSTM gates\", \"a\": \"Homogeneous graph domains refer to regular lattices and heterogeneous graph domains refer to graphs with complex variable structures like proteins, brain connectivity, gene regulatory network, etc.\", \"q\": \"What are \\\"heterogeneous graph domains\\\"?\"}",
"{\"title\": \"Authors\", \"comment\": \"We would like to thank the referee for her/his time reviewing the revised paper and for improving her/his evaluation score.\"}"
]
} |
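The abstract above names gated edges and residuality as the key ingredients of the best-performing architecture, but the record contains no equations. The following NumPy sketch is our schematic reconstruction of one such layer under assumed conventions (the weight names W_self, W_nbr, U_gate, V_gate are ours); it illustrates the general edge-gating idea, not the authors' exact update equations.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_gated_graph_conv(H, A, W_self, W_nbr, U_gate, V_gate):
    """One schematic edge-gated graph convolution layer with a
    residual connection. H: (n, d) node features; A: (n, n) binary
    adjacency matrix; all weight matrices are (d, d). Each node
    aggregates neighbor features modulated by learned per-edge
    gates, and the layer input is added back (residuality)."""
    n, d = H.shape
    H_out = np.zeros_like(H)
    for i in range(n):
        agg = np.zeros(d)
        for j in range(n):
            if A[i, j]:
                eta = sigmoid(H[i] @ U_gate + H[j] @ V_gate)  # edge gate
                agg += eta * (H[j] @ W_nbr)
        H_out[i] = np.maximum(H[i] @ W_self + agg, 0.0)  # ReLU
    return H + H_out  # residual connection
```

The residual term `H + H_out` is the ingredient the abstract credits with the roughly 10% gain when stacking many layers.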
BJvVbCJCb | Neural Clustering By Predicting And Copying Noise | [
"Sam Coope",
"Andrej Zukov-Gregoric",
"Yoram Bachrach"
] | We propose a neural clustering model that jointly learns both latent features and how they cluster. Unlike similar methods, our model does not require a predefined number of clusters. Using a supervised approach, we agglomerate latent features towards randomly sampled targets within the same space whilst progressively removing the targets until we are left with only targets which represent cluster centroids. To show the behavior of our model across different modalities, we apply it to both text and image data and achieve very competitive results on MNIST. Finally, we also provide results against baseline models for fashion-MNIST, the 20 newsgroups dataset, and a Twitter dataset we create ourselves. | [
"unsupervised learning",
"clustering",
"deep learning"
] | Reject | https://openreview.net/pdf?id=BJvVbCJCb | https://openreview.net/forum?id=BJvVbCJCb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"r1l0JIdWz",
"SyUwqSoJf",
"S1RGUy6SM",
"SkrcjjVJM",
"H1flxytxf",
"Hk2HKdrlz",
"rkX7hRmeM",
"SyR-e8uWG",
"ryMv4SZ7M",
"HJyr5Hokf",
"rJmxgLuZG"
],
"note_type": [
"official_comment",
"official_comment",
"decision",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment"
],
"note_created": [
1512755143677,
1510853214279,
1517250069901,
1510419341528,
1511743465593,
1511520579695,
1511414811023,
1512755206108,
1514390618003,
1510853175079,
1512755179375
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper186/Authors"
],
[
"ICLR.cc/2018/Conference/Paper186/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper186/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper186/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper186/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper186/Authors"
],
[
"ICLR.cc/2018/Conference/Paper186/Authors"
],
[
"ICLR.cc/2018/Conference/Paper186/Authors"
],
[
"ICLR.cc/2018/Conference/Paper186/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response\", \"comment\": \"Thank you for the insightful comments!\", \"regarding_the_need_for_the_hungarian_algorithm_in_our_model\": \"We only use the Hungarian algorithm during the first two stages of training, i.e. during the warm-up and transition stages. Subsequently, assignment between targets and latent representations is done purely by assigning a target to its nearest latent representation. \\n\\nThe aim early on in training is to (1) pre-train the encoder and decoder networks and (2) have the latent representations distribute (close to) uniformly across the latent space. We ensure this by using the NAT objective which makes the model minimise distances between latent representations and their targets.\\n\\n\\nIn contrast to the above, were we to assign targets to their nearest latent representations from the very beginning, we would risk the model collapsing in on itself as the encoder function would not have learned a stable mapping to latent space. We can corroborate this empirically through the ample runs we made early on whilst working on this paper.\", \"with_regard_to_your_comments_on_the_tuning_of_the_alpha_and_lambda_values\": \"To give an indication of the robustness of our method we plan to include experiments highlighting how much the lambda (and alpha) values affect training on MNIST. In short, we do not tune the value of alpha very much in our experiments (0 at the beginning, then a gradual increase to 1 after some epochs). However drastically different values, for example alpha set to 1 throughout training, do lead to poor results.\", \"with_regards_to_the_text_datasets\": \"We wholeheartedly agree that the baselines used in the text-based experiments are quite weak. Our intention was not to prove that our method is optimal for text clustering; rather, we wanted to show that the technique generalizes across modalities.\\n\\n\\nGiven your feedback, we plan on including the results from models of an identical architecture trained as vanilla autoencoders (AE-k) and k-means using the learned representations of the NATAC model. Additionally, we plan to train a model on the whole of 20-newsgroups and compare the NMI to clustering algorithms that require a predefined number of clusters. Stay tuned, the results should be in within a week.\\n\\nYou rightfully point out the discrepancy between the number of clusters for the model in Table 1 and Table 2. We use the same hyperparameters for both of these experiments, however the model in Table 2 is trained on the whole of MNIST, whereas the model in Table 1 is trained on the train and validation sets only (to allow for evaluation on the test set). We plan to include some experiments which show the variability of our training method (converged number of clusters, NMI score, similarity to clusters in other models.). These are the next priority after including the updated text-dataset results\\n\\nThe question of whether the same number of clusters would be optimal for NATAC-k is an interesting one. Indeed, seeing as many of the clusters in the NATAC models contain very few examples (the \\u2018dead centroids\\u2019), it would be a little unfair to compare using k-means with the same number of clusters. We are currently discussing what a more sensible baseline might be.\"}",
"{\"title\": \"RE: Request for citation\", \"comment\": \"We will certainly compare the results of the paper, along with other related clustering papers in the ICLR review, to our approach.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper proposes an approach to jointly learning a data clustering and latent representation. The main selling point is that the number of clusters need not be pre-specified. However, there are other hyperparameters and it is not clear why trading # clusters for other hyperparameters is a win. The empirical results are not strong enough to overcome these concerns.\"}",
"{\"title\": \"Request for citation\", \"comment\": \"I believe that you should also cite \\u201cLearning Discrete Representations via Information Maximizing Self-Augmented Training\\u201d (ICML 2017) http://proceedings.mlr.press/v70/hu17b.html.\\nThis paper is closely related to your work and is also about unsupervised clustering using deep neural networks.\\nAs far as I know, the proposed method, IMSAT, is the current state-of-the-art method in deep clustering (November 2017). Could you compare your results against their result?\"}",
"{\"title\": \"Interesting work but lack of detailed discussions and thorough quantitative results\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper proposes a neural clustering model following the \\\"Noise as Target\\\" technique. Combining with an reconstruction objective and \\\"delete-and-copy\\\" trick, it is able to cluster the data points into different groups and is shown to give competitive results on different benchmarks.\\n\\nIt is nice that the authors tried to extend the \\\"noise as target\\\" to the clustering problem, and proposed the simple \\\"delete-and-copy\\\" technique to group different data points into clusters. Even tough a little bit ad-hoc, it seems promising based on the experiment results. However, it is unclear to me why it is necessary to have the optimal matching here and why the simple nearest target would not work. After all, the cluster membership is found based on the nearest target in the test stage. \\n\\nAlso, the authors should provide more detailed description regarding the scheduling of the alpha and lambda values during training, and how sensitive it is to the final clustering performance. The authors cited the no requirement of \\\"a predefined number of clusters\\\" as one of the contributions, but the tuning of alpha seems more concerning.\\n\\nI like the authors experimented with different benchmarks, but lack of comparisons with existing deep clustering techniques is definitely a weakness. The only baseline comparison provided is the k-means clustering, but the comparisons were somewhat unfair. For all the text datasets, there were no comparisons with k-means on the features learned from the auto-encoders or clusterings learned from similar number of clusters. The comparisons for the Twitter dataset were even based on character-level with word-level. It is more convincing to show the superiority of the proposed method than existing ones on the same ground.\", \"some_other_issues_regarding_quantitative_results\": [\"In Table 1, there are 152 clusters for 10-d latent space after convergence, but there are 61 clusters for 10-d latent space in Table 2 for the same MNIST dataset. Are they based on different alpha and lambda values?\", \"Why does NATAC perform much better than NATAC-k? Would NATAC-k need a different number of clusters than the one from NATAC? The number of centroids learned from NATAC may not be good for k-means clustering.\", \"It seems like the performance of AE-k is increasing with increase of dimensionality of latent space for Fashion-MNIST. Would AE-k beat NATAC with a different dimensionality of latent space and k?\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting method while less satisfactory results\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This ms presents a new clustering method which combines deep autoencoder and a recent unsupervised representation learning approach (NAT; Bojanowski and Joujin 2017). The proposed method can jointly learn latent features and the cluster assignments. Then the method is tested in several image and text data sets.\", \"i_have_the_following_concerns\": \"1) The paper is not self-contained. The review of NAT is too brief and makes it too hard to understand the remaining of the paper. Because NAT is a fundamental starting point of the work, it will be nice to elaborate the NAT method to be more understandable.\\n\\n2) Predicting the noise has no guarantee that the data items are better clustered in the latent space. Especially, projecting the data points to a uniform sphere can badly blur the cluster boundaries.\\n\\n3) How should we set the parameter lambda? Is it data dependent?\\n\\n4) The experimental results are a bit less satisfactory:\\na) It is known that unsupervised clustering methods can achieve 0.97 accuracy for MNIST. See for example [Ref1, Ref2, Ref3].\\nb) Figure 3 is not satisfactory. Actually t-SNE on raw MNIST pixels is not bad at all. See https://sites.google.com/site/neighborembedding/mnist\\nc) For 20 Newsgroups dataset, NATAC achieves 0.384 NMI. By contrast, the DCD method in [Ref3] can achieve 0.54.\\n\\n5) It is not clear how to set the number of clusters. More explanations are appreciated.\\n\\n[Ref1] Zhirong Yang, Tele Hao, Onur Dikmen, Xi Chen, Erkki Oja. Clustering by Nonnegative Matrix Factorization Using Graph Random Walk. In NIPS 2012.\\n[Ref2] Xavier Bresson, Thomas Laurent, David Uminsky, James von Brecht. Multiclass Total Variation Clustering. In NIPS 2013.\\n[Ref3] Zhirong Yang, Jukka Corander and Erkki Oja. Low-Rank Doubly Stochastic Matrix Decomposition for Cluster Analysis. Journal of Machine Learning Research, 17(187): 1-25, 2016.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"overall algorithm is somewhat heuristic\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper presents an algorithm for clustering using DNNs. The algorithm essentially alternates over two steps: a step that trains the DNN to predict random targets, and another step that reassigns the targets based on the overall matching with the DNN outputs. The second step also shrinks the number of targets over time to achieve clustering. Intuitively, the randomness in target may achieve certain regularization effect.\", \"my_concerns\": \"1. There is no analysis on what the regularization effect is. What advantage does the proposed algorithm offer to an user that a more deterministic algorithm cannot?\\n2. The delete-and-copy step also introduces randomness, and since the algorithm removes targets over time, it is not clear if the algorithm consistently optimizes one objective throughout. Without a consistent objective function, the algorithm seems somewhat heuristic.\\n3. Due to the randomness from multiple operations, the experiments need to be run multiple times, and see if the output clustering is sensitive to it. If it turns out the algorithm is quite robust to the randomness, it is then an interesting question why this is the case.\\n4. Does the Hungarian algorithm used for matching scales to much larger datasets?\\n5. While the algorithm empirically improve over k-means, I believe at this point combinations of DNN with classical clustering algorithms already exist and comparisons with such stronger baselines are missing. The authors have listed a few related algorithms in the last paragraph on page 1. I think the following one is also relevant:\\n-- Law et al. Deep spectral clustering learning. ICML 2015.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the insightful comments!\", \"regarding_the_points_you_have_made\": \"1&2: In NATAC, the targets are uniformly, randomly sampled from a unit-sphere (one for each example in the dataset). With the large size of the datasets in our experiments (between 20 and 70 thousand) the randomly sampled targets should very closely approximate a uniform distribution on the sphere. In the warm-up stage of training, we do not utilize the delete-and-copy mechanism, meaning that the initial objective is to both autoencode examples _and_ uniformly map the latent representations onto a unit sphere. Therefore, the latent representations serve as a lossy compression of the input data whilst incentivised to be spread uniformly over a unit sphere. \\n\\nAlthough the one of the objectives in the warm-up stage of training is to have a uniformly distributed latent representations, there will always be inconsistencies: As the reconstruction loss \\u2018encourages\\u2019 similar examples to have similar latent representations, the distribution of the latent representations will be denser in some regions and more sparse in others. \\n\\nIn the transition and clustering stages of training, we then gradually perturb the distribution of the targets to more closely match the imperfect distribution of the latent representations (we use the heuristic delete-and-copy mechanism), and also allow for targets to agglomerate.\", \"the_randomness_comes_from_two_sources\": \"One being the initial random assignment of examples in the dataset X to the latent targets Y and training in mini-batches. As mentioned before, the warm-up stage of training is responsible for finding a good assignment of the targets to the input examples, and the targets are a very close approximation of a uniform distribution of points on a sphere.\\n\\nThe delete-and-copy function can be seen as a way of gradually re-aligning the distribution of the targets more closely to the distribution of the latent representations made by the encoder. The randomness comes from the fact that, instead of deterministically, we randomly choose which targets to delete in training. However, the delete-and-copy mechanism is only stochastic during the transition-phase of training - after which we delete-and-copy with a probability of 1 (alpha = 1).\\n\\nIndeed, our delete-and-copy method is a randomized algorithm. Intuitively, what it tries to achieve is gradually remove targets from the less dense areas (and clone targets in the denser areas; in turn, this allows the latent representation more freedom (by making the constraints easier to meet), so it is easier to reconstruct the example from its latent representation. \\n\\nThe random effect of shifting targets may help avoid overfitting (memorizing certain locations in latent space \\u2018off-by-heart\\u2019). In a way it is reminiscent to VAEs, where instead the latent representation is perturbed before it is passed to the decoder. We will include a brief discussion about this intuition in the paper - thank you for raising this!\\n\\nWe do not know whether it is possible to achieve similar results with a deterministic perturbation algorithm. However, our delete-and-copy method has outperformed any of the deterministic heuristics we have tried (e.g. simple rules like removing one target from the least populated area and cloning one target in the most dense area). 
It is a very interesting question for future research to see whether more elaborate heuristics, and in particular deterministic ones, can yield results better than we\\u2019ve obtained using our simple delete-and-copy randomized rule. \\n\\nWe are not aware other neural methods that do not require a set number of clusters (with the one exception being another paper submitted to this ICLR), so we were unable to comment on the difference between this method and more-deterministic methods. \\n\\n\\n3. We found the outcome of training a model with similar hyperparameters to be fairly similar in outcome. We plan on showing the variability of training the best performing model on 20 Newsgroups (as these models are quick to train) to empirically show this in the paper.\", \"4\": \"The hungarian method itself runs in O(N^3) complexity - significantly more efficient than a brute-force search (O(n!)). This would mean computing the optimal assignments over the whole dataset would be expensive. However, we train using mini-batches, meaning that the hungarian algorithm is only computed on a batch of data, not the whole dataset. This means that the runtime of a single forward/backward pass of a model does not change wrt the size of the dataset, as the batch size remains constant.\", \"5\": \"Thank you for bringing this paper to your attention, we will certainly mention this in our revision. Unfortunately, the paper does not report NMI on the datasets we do, so we are unable to compare performance with our method.\"}",
"{\"title\": \"Changes To The Paper\", \"comment\": [\"We have made some changes and additions to the paper during this rebuttal/discussion period. Our main changes are to add further experiments to demonstrate the robustness of the NATAC training method, and to add more baselines to our text-based experiments. In full, we have:\", \"Added other clustering methods into the MNIST comparison table.\", \"Updated the 20news NATAC results - we have found some slightly better performing hyperparameters.\", \"Included NATAC-k and AE-k Results for both the 20 Newsgroups and Twitter Datasets.\", \"Included a Comparison table with some other clustering algorithms for 20 Newsgoups, we perform competitively although our model converges on significantly more clusters.\", \"Added an experiment to empirically show that the NATAC training method is fairly stable wrt final NMI and converged number of clusters. We only had the time to train multiple runs of models on 20 Newsgroups - so we were unable to officially comment on the other datasets.\", \"Added an experiment to show how the end performance of a model changes with increasing amounts of pre-training.\", \"Added an experiment to show that changes in the value of lambda (increase/decrease by a power of 10) do not greatly affect the end performance of the model.\", \"Altered NATAC training so it is a bit more intuitive to understand.\", \"Added more discussion about the NAT training framework.\", \"Made some small edits in the introduction and conclusion.\", \"Redone the Twitter dataset experiments. We used slightly different hyper-parameters which converged to fewer clusters.\"]}",
"{\"title\": \"Including Related Work\", \"comment\": [\"Since submission, several papers have come to our attention which we would like to include and discuss:\", \"Learning Discrete Representations via Information Maximizing Self-Augmented Training\", \"Deep Continuous Clustering (ICLR 2018 submission)\", \"Spectralnet Spectral Clustering Using Deep Neural Networks (ICLR 2018 submission)\", \"Learning Latent Representations In Neural Networks For Unsupervised Clustering Through Pseudo Supervision And Graph Based Activity Regularization (ICLR 2018 submission)\", \"Additionally, some of the above report higher NMI scores than our model (although they require a set number of clusters). We will adapt the paper respectively.\"]}",
"{\"title\": \"Response\", \"comment\": \"Thank you for the helpful comments!\", \"regarding_the_points_that_you_have_made\": \"\", \"1\": \"The NAT training framework aims to match each latent representation to a unique target in the latent space. By doing this, the model learns a mapping to latent space in which the distribution of the latent representations of the dataset very closely matches the distribution of the noise-targets. This is done by jointly learning the encoder function (parameterized by a neural network) and also learning the assignment of examples to their best-fitting target in latent space.\\n\\nAs we do not know the ideal assignment at the beginning of training, we randomly assign each example a noise-target at the beginning of training. During training, we progressively re-assign labels to different examples in the dataset so as to minimise the total distance between latent representations their corresponding targets (we call these optimal assignments). To find the optimal assignments, we can compute the distance from every latent representation to each target in the dataset and use the hungarian algorithm to find the optimal assignment of latent representation to target. However, finding the optimal assignment for an entire dataset of latent representations and noise-targets would be very expensive (indeed the hungarian algorithm has O(n^3) complexity), so instead we train using randomly selected batches from the dataset (we use a batch size of 100). For a batch of latent representations and targets, calculating the optimal assignments is feasible, and also means we can train NAT models similarly to other deep learning models (i.e. mini-batch SGD).\\n\\nWe agree that the section discussing the NAT training framework is quite brief. We decided to remove a lot of discussion to reduce the paper down to 8 pages. We plan to include more discussion on the NAT framework in an upcoming revision.\", \"2\": \"The NATAC model does rely on some heuristics, which means we do not have analytic guarantees to this method. We set the latent space of our model to be the surface of a d-dimensional sphere (similarly to the work of Bojanowski and Joulin). Although this might be less expressive than an un-normalized latent space, we found that placing both the noise targets and the latent representations on the manifold is empirically much more effective for training (see Appendix C.1).\", \"3\": \"The value of lambda and alpha do change during training. There is some discussion in the appendix about how exactly we set these values. In these experiments, we used some trial-and-error to find fitting values. We aim to include some experiments showcasing how the clustering algorithm behaves with different values for lambda (and alpha). Expect an update to the paper including this soon.\", \"4\": \"Thank you for bringing these papers to our attention. We will certainly include these in our revision. We believe a key contribution of the paper is that our method does not need a prior number of clusters - as real-world use cases for clustering usually have no prior knowledge of the true number of clusters in the data. 
However it is clear that several methods of unsupervised clustering (which do require a given number of clusters) outperform our method on MNIST, which we will mention in our revision.\\nRegarding 4 c) - Note that the evaluation of NATAC in our paper is different to that in the DCD paper: The NMI scores we report for the experiments are taken from the test set of 20 Newsgroups after training on the train set (using the \\u2018bydate\\u2019 version of the dataset). The DCD paper cites 20K training examples for it\\u2019s train set - suggesting that the clustering was performed using both the train and test set, with NMI reported on the whole dataset. If that is the case, we will report NMI values for our method trained in this way, and compare the results to those mentioned in the paper.\", \"5\": \"During training, the model successively agglomerates examples to the same centroid by deleting an example\\u2019s assigned target and instead assigning the example a copy of another target (using the delete-and-copy mechanism). At the same time, the model is also trying to optimize to the auxiliary objective, by having as little reconstruction error as possible.\\nThis means that, at some point, the model does not delete any more centroids during training, as agglomerating any more points would incur a huge reconstruction loss penalty. Therefore the model converges onto the number of clusters during training. We discuss convergence of the model in section 2.3 (Implementation Details).\"}"
]
} |
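The author responses above repeatedly invoke per-mini-batch optimal matching between latent representations and noise targets via the Hungarian algorithm. A minimal sketch of that step using SciPy's linear_sum_assignment follows; the function name and the squared-distance cost are our assumptions, since the actual NATAC code is not shown in the record.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def optimal_batch_assignment(latents, targets):
    """Match each latent vector in a mini-batch to a unique noise
    target by minimising the total squared distance, as in the
    NAT-style warm-up stage described by the authors. Running the
    Hungarian algorithm per batch of size b costs O(b^3), keeping
    the per-step cost independent of the dataset size.
    latents, targets: arrays of shape (b, d)."""
    diff = latents[:, None, :] - targets[None, :, :]
    cost = (diff ** 2).sum(-1)          # cost[i, j] = ||z_i - y_j||^2
    row_ind, col_ind = linear_sum_assignment(cost)
    return col_ind                      # col_ind[i]: target index for latent i
```

With a batch size of 100, as the authors report, this assignment step is cheap relative to the forward/backward pass of the network.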
H1eJxngCW | DuoRC: Towards Complex Language Understanding with Paraphrased Reading Comprehension | [
"Amrita Saha",
"Rahul Aralikatte",
"Mitesh M. Khapra",
"Karthik Sankaranarayanan"
] | We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets. DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots, where each pair in the collection reflects two versions of the same movie - one from Wikipedia and the other from IMDb - written by two different authors. We asked crowdsourced workers to create questions from one version of the plot and a different set of workers to extract or synthesize corresponding answers from the other version. This unique characteristic of DuoRC, where questions and answers are created from different versions of a document narrating the same underlying story, ensures by design that there is very little lexical overlap between the questions created from one version and the segments containing the answer in the other version. Further, since the two versions have different levels of plot detail, narration style, vocabulary, etc., answering questions from the second version requires deeper language understanding and incorporating background knowledge not available in the given text. Additionally, the narrative style of passages arising from movie plots (as opposed to the typical descriptive passages in existing datasets) creates the need to perform complex reasoning over events across multiple sentences. Indeed, we observe that state-of-the-art neural RC models, which have achieved near-human performance on the SQuAD dataset, exhibit very poor performance even when coupled with traditional NLP techniques to address the challenges presented in DuoRC (F1 score of 37.42% on DuoRC vs. 86% on the SQuAD dataset). This opens up several interesting research avenues wherein DuoRC could complement other Reading Comprehension style datasets to explore novel neural approaches for studying language understanding. | [
"reading comprehension",
"question answering"
] | Invite to Workshop Track | https://openreview.net/pdf?id=H1eJxngCW | https://openreview.net/forum?id=H1eJxngCW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"B10R0csWf",
"r1sou2q-z",
"Syshhw54f",
"B1Ja-l9gM",
"rJhYEy6Sz",
"Syiy9hqZG",
"H127KnqZf",
"rkCi3T3lG",
"SJzXOG9eG"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"official_review",
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1512971989850,
1512913058951,
1516039347268,
1511813559615,
1517249668354,
1512913379551,
1512913187879,
1512000678303,
1511823386081
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper378/Authors"
],
[
"ICLR.cc/2018/Conference/Paper378/Authors"
],
[
"ICLR.cc/2018/Conference/Paper378/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper378/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper378/Authors"
],
[
"ICLR.cc/2018/Conference/Paper378/Authors"
],
[
"ICLR.cc/2018/Conference/Paper378/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper378/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Addressing AnonReviewer3's comments\", \"comment\": \"Thank you for the encouraging words and appreciating the usefulness of the dataset\"}",
"{\"title\": \"Addressing AnonReviewer1's comments\", \"comment\": \"We would like to thank the reviewer for the valuable comments and would take this opportunity to address the specific comments/questions raised.\\n\\n\\n1. Comparison between MovieQA and ParaphraseRC \\n\\nMore details of this comparative analysis is given below in a separate comment titled \\u201cCOMPARATIVE STATS FOR MOVIEQA AND DUORC\\u201d\\ni) MovieQA, like the SQUAD RC Dataset, also suffers from a high lexical overlap between QA pairs and the passage. In particular, the percentage of Questions where both question & answer entities were found in the plot is only 12% in ParaphraseRC whereas it is 65-68% in MovieQA (over the Train and Valid Splits). Similarly, the percentage of questions where question entities were found in the plot is only 47% in ParaphraseRC while its 57-60% in MovieQA. \\nii) Scale of the Data: ParaphraseRC is 6.7 times of MovieQA (in terms of QA pairs)\\niii) Multiple Sentence Inferencing: Both ParaphraseRC and MovieQA require inferencing over 2-3 sentences on an average to answer the questions\\n\\n\\n2. Distribution of Questions exhibiting the challenges of DuoRC (For more details on each of these points please see the separate comment below, titled \\\"COMPARATIVE STATS FOR MOVIEQA AND DUORC\\\")\", \"challenge_1___low_lexical_overlap_between_question_and_plot\": \"For ParaphraseRC, 47% of the questions have some meaningful overlap with the plot (and on an avg. only 21% of the query entities or noun/verb phrases are present in the plot)\", \"challenge_2___questions_requiring_common_sense_knowledge\": \"These are possibly the ones which don\\u2019t have any direct textual overlap between the question/answer and the plot content. In Paraphrase RC 88% of the questions require external knowledge to bridge the gap.\", \"challenge_3___questions_requiring_multiple_sentence_inferencing\": \"On an average answering a question from the ParaphraseRC plot requires inferencing over 2-3 sentences (please see the \\\"num_sentences_req_for_inferencing\\\" stats below in the \\\"COMPARATIVE STATS FOR MOVIEQA AND DUORC\\\" section).\", \"challenge_4___questions_that_require_answers_to_be_generated_and_not_just_extracted_from_the_passage\": \"37% of the Questions are \\u201csynthesized\\u201d by AMT workers after reading the ParaphraseRC plot\\n Challenge 5 - Questions that are \\u201cNot Answerable\\u201d: 13% of questions could not be answered by AMT workers based on that plot\", \"challenge_6___non_factoid_questions\": \"Apart from the factual questions we also have 7% how/why/justify/describe type non-factoid questions, 6% boolean/count questions and 1% cloze questions.\\n\\n\\n3. How to evaluate \\u201cNon Answerable\\u201d Questions\\n\\nYes, the correct answer to a question in our dataset is either: i) a text snippet directly taken from the plot, or ii) a text \\u201csynthesized\\u201d by the annotator based on the plot, or iii) the question is \\u201cNot Answerable\\u201d from the plot. Therefore, for each question a model can either: a) predict the likely span containing the answer and/or generate the answer from it, or b) make a prediction as \\u201cNot Answerable\\u201d (for example, \\u201cNo Span\\u201d output from the BiDAF model). We can separately benchmark the accuracy of any model over the subset of questions which are marked as \\u201cNot Answerable\\u201d.\\n\\n\\n4. Evaluating on Paraphrase RC is better when trained on Self RC as opposed to when trained on Paraphrase RC. 
\\n\\nWe thank the reviewer for pointing out the mistake in the Discussion section that \\u201ctraining on one dataset and evaluating on the other results in a drop in the performance.\\u201d is indeed not true in the case where the model is trained on SelfRC and evaluated on ParaphraseRC. We believe this is because learning with the ParaphraseRC is more difficult given the wide range of challenges in this dataset. However, in our setup, instead of replacing the training data, SelfRC, with ParaphraseRC (which drops the test performance on both SelfRC and ParaphraseRC), if we augment the training data SelfRC with ParaphraseRC, the test performance infact improves slightly indicating that ParaphraseRC also helps to an extent. We will correct this observation in the updated version of the paper.\\n\\n\\n5. In the third phase of data collection (Paraphrase RC), was waiting for 2-3 weeks the only step taken in order to ensure that the workers for this stage are different from those in stage 2, or was something more sophisticated implemented which did not allow a worker who has worked in stage 2 to be able to participate in stage 3?\\n\\n No, this was the only step that was taken. But given the scale of movies (~8K movies) over a diverse set of genres, languages, etc and the global AMT worker base, hopefully this step was sufficient to remove any chance of bias.\"}",
"{\"title\": \"Post-rebuttal evaluation\", \"comment\": \"After reading the authors' responses to the concerns raised by me and my fellow reviewers, I would recommend acceptance of this paper because it presents a new dataset which presents challenges worth pushing for.\"}",
"{\"title\": \"Useful dataset for reading comprehension\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper presents a useful dataset for testing reading comprehension while avoiding significant lexical overlap between question and document. The paper rightly mentions that existing reading comprehension datasets (e.g. SQuAD) where the current methods are already performing at the human level largely due to large lexical overlap between question and document. The authors have devised a clever way to create a reading comprehension dataset without a lot of lexical overlap by using parallel plots of movies from Wikipedia and IMDB.\\n\\nThis paper contributes a useful new dataset that fixes some of the shortcomings of existing reading comprehension datasets where the task is made easier by lexical overlap. The authors also present an analysis of the data by applying one of the SOTA techniques on SQuAD to this data. They also analyze the effect of various span-identification steps and preprocessing steps on the performance. Overall, this paper contributes a useful new dataset that can be quite useful for reading comprehension.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This is a (question answering) dataset paper with some baseline models.\\n\\nThe evaluation metric seems far from ideal and not quite ready for prime-time yet. They use F1 and Exact Match - these metrics make sense for extractive question answering systems, they don't make sense IMO for abstractive systems where the answer can be generated by the model (BLEU-type eval metrics seem more appropriate).\\n\\nI therefore recommend this work for the workshop track.\", \"decision\": \"Invite to Workshop Track\"}",
"{\"title\": \"Addressing AnonReviewer2's comments\", \"comment\": \"We would like to thank the reviewer for the valuable comments and would take this opportunity to address the specific comments/questions raised.\\n\\n1. in the abstract, authors mentioned the workers set one only takes care of creating questions from version one of the plots, and workers set two is in charge of generating answers from another version of plots. However, in bullet 2 of section 3, it seems that the workers set one is also required to answer the questions in selfRC. Is there any mistake in the description of the abstract?\", \"response\": \"We do not prune irrelevant documents but irrelevant segments (sentences or paragraphs) from the given plot based on semantic relation between the words in the plot segments and the question. This preprocessing step is elaborated in the Subsection titled \\u201cAdditional NLP pre-processing\\u201d in section 4.\"}",
"{\"title\": \"COMPARATIVE STATS FOR MOVIEQA AND DUORC\", \"comment\": \"To compare MovieQA and DuoRC, we extracted the following statistics. First we extracted entities (which includes named entities and noun or verb phrases) in the question and answer. Then we located sentences in the plot containing these entities. Next, for each question entity located in a sentence, we find the closest sentences containing the answer entities. From this we derive two things\\n----\\u201cavg_distance_in_words\\u201d or \\\"avg_distance_in_sentences\\\" below means the average distance (in terms of words/sentences) between the occurrence of the question entities and closest occurrence of the answer entities. \\n----\\\"num_sentences_req_for_inferencing\\\", i.e. total number of sentences required to cover all the question and answer entities (only considering the closest occurrence of sentence containing answer entities to the sentence containing question entities)\", \"for_movieqa_valid\": \"avg_distance_in_words 20.6 words\\navg_distance_in_sentences 1.69 sentences\\nnum_sentences_req_for_inferencing 2.28 sentences\\n% Qs where both question & answer entities were found in the plot: 1287/1958 i.e. 65.7%\\n% Qs where only question entities found in the plot: 1126/1958 i.e. 57.5% (Percentage length of LCS (Longest Common Subsequence of non-stop words) between query and plot w.r.t question length: 25% of the query)\", \"for_movieqa_train\": \"avg_distance_in_words 20.75 words\\navg_distance_in_sentences 1.66 sentences \\nnum_sentences_req_for_inferencing 2.33 sentences\\n% Qs where both question & answer entities were found in the plot: 6737/9848 i.e. 68.4%, \\n% Qs where only question entities found in the plot: 5912/9848 i.e. 60% (Percentage length of LCS (Longest Common Subsequence of non-stop words) between query and plot w.r.t question length: 25% of the query)\", \"for_paraphraserc\": \"avg_distance_in_words 45.3 words\\navg_distance_in_sentences 2.7 sentences\\nnum_sentences_req_for_inferencing 2.47 sentences\\n% Qs where both question & answer entities were found in the plot: 12294/100316 i.e. 12%, \\n% Qs where only question entities found in the plot: 47198/100316 i.e. 47% (Percentage length of LCS (Longest Common Subsequence of non-stop words) between query and plot w.r.t question length: 21% of the query)\", \"for_selfrc\": \"avg_distance_in_words 13.4 words\\navg_distance_in_sentences 1.34 sentences \\nnum_sentences_req_for_inferencing 1.51 sentences\\n% Qs where both question & answer entities were found in the plot: 50423/85773 i.e. 58.7%, \\n% Qs where only question entities found in the plot: 54371/85773 i.e. 63.3% (Percentage length of LCS (Longest Common Subsequence of non-stop words) between query and plot w.r.t question length: 38% of the query)\"}",
"{\"title\": \"Need some more analysis / clarifications.\", \"rating\": \"7: Good paper, accept\", \"review\": \"Summary:\\nThe paper proposes a new dataset for reading comprehension, called DuoRC. The questions and answers in the DuoRC dataset are created from different versions of a movie plot narrating the same underlying story. The DuoRC dataset offers the following challenges compared to the existing reading comprehension (RC) datasets \\u2013 1) low lexical overlap between questions and their corresponding passages, 2) requires use of common-sense knowledge to answer the question, 3) requires reasoning across multiples sentences to answer the question, 4) consists of those questions as well that cannot be answered from the given passage. The paper experiments with two types of models \\u2013 1) a model which only predicts the span in a document and 2) a model which generates the answer after predicting the span. Both these models are built off of an existing model on SQuAD \\u2013 the Bidirectional Attention Flow (BiDAF) model. The experimental results show that the span based model performs better than the model which generates the answers. But the accuracy of both the models is significantly lower than that of their base model (BiDAF) on SQuAD, demonstrating the difficulty of the DuoRC dataset.\", \"strengths\": \"1.\\tThe data collection process is interesting. The challenges in the proposed dataset as outlined in the paper seem worth pushing for.\\n2.\\tThe paper is well written making it easy to follow.\\n3.\\tThe experiments and analysis presented in the paper are insightful.\", \"weaknesses\": \"1.\\tIt would be good if the paper can throw some more light on the comparison between the existing MovieQA dataset and the proposed DuoRC dataset, other than the size.\\n2.\\tThe dataset is motivated as consisting of four challenges (described in the summary above) that do not exist in the existing RC datasets. However, the paper lacks an analysis on what percentage of questions in the proposed dataset belong to each category of the four challenges. Such an analysis would helpful to accurately get an estimate of the proportion of these challenges in the dataset.\\n3.\\tIt is not clear from the paper how should the questions which are unanswerable be evaluated. As in, what should be the ground-truth answer against which the answers should such questions be evaluated. Clearly, string matching would not work because a model could say \\u201cdon\\u2019t know\\u201d whereas some other model could say \\u201cunanswerable\\u201d. So, does the training data have a particular string as the ground truth answer for such questions, so that a model can just be trained to spit out that particular string when it thinks it can\\u2019t answer the questions? \\n4.\\tOne of the observations made in the paper is that \\u201ctraining on one dataset and evaluating on the other results in a drop in the performance.\\u201d However, in table 4, evaluating on Paraphrase RC is better when trained on Self RC as opposed to when trained on Paraphrase RC. This seems to be in conflict with the observation drawn in the paper. Could authors please clarify this? 
Also, could authors please throw some light on why this might be happening?\\n5.\\tIn the third phase of data collection (Paraphrase RC), was waiting for 2-3 weeks the only step taken in order to ensure that the workers for this stage are different from those in stage 2, or was something more sophisticated implemented which did not allow a worker who has worked in stage 2 to be able to participate in stage 3?\\n6.\\tTypo: Dataset section, phrases --> phases\", \"overall\": \"The challenges proposed in the DuoRC dataset are interesting. The paper is well written and the experiments are interesting. However, there are some questions (as mentioned in the Weaknesses section) which need to be clarified before I can recommend acceptance for the paper.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Useful dataset for reading comprehension\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"1) This paper proposes a new dataset for Reading Comprehension (RC). Different from other existing RC datasets, the authors claim that this new dataset requires background and common-sense knowledge, and across sentences reasoning in order to answer the questions correctly.\\n\\nOverall, I think this dataset is very useful for RC. The collection process is also carefully designed to reduce the lexical overlap between question and answer pairs.\\n\\n2) I have the questions as follows:\\ni) in the abstract, authors mentioned the workers set one only takes care of creating questions from version one of the plots, and workers set two is in charge of generating answers from another version of plots. However, in bullet 2 of section 3, it seems that the workers set one is also required to answer the questions in selfRC. Is there any mistake in the description of the abstract?\\n\\nii) What is the standard for creating the questions? I noticed that the time and location information was used to generate questions sometime, but sometimes these kinds of questions are ignored.\\n\\niii) Why the SelfRC is about QA pairs but for ParaphraseRC, you need to include documents? \\n\\niv) What is the average length of the answers in both ParaphraseRC and SelfRC? I found that the answers are usually very short, which is more like factoid QA. It would be great if the authors could design some non-factoid QA pairs which require more reasoning and background knowledge. \\n\\nv) During NLP pre-processing (section 4), how do you prune the irrelevant documents?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
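Since the decision note above hinges on whether F1 / Exact Match suit abstractive answers, it may help to see what the extractive metric actually computes. A rough sketch of SQuAD-style token-level F1 follows; it is an illustrative reconstruction (real evaluation scripts also lowercase and strip punctuation and articles first), not the authors' evaluation code.

from collections import Counter

def token_f1(prediction, gold):
    # Bag-of-tokens overlap between a predicted and a gold answer string.
    pred_toks, gold_toks = prediction.split(), gold.split()
    common = Counter(pred_toks) & Counter(gold_toks)  # per-token min counts
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(token_f1("in the movie theatre", "at the movie theatre"))  # 0.75, partial credit

Exact Match, by contrast, is simply prediction == gold after the same normalization, which is why both metrics penalize a generated answer that merely paraphrases the gold span.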
H1uR4GZRZ | Stochastic Activation Pruning for Robust Adversarial Defense | [
"Guneet S. Dhillon",
"Kamyar Azizzadenesheli",
"Zachary C. Lipton",
"Jeremy D. Bernstein",
"Jean Kossaifi",
"Aran Khanna",
"Animashree Anandkumar"
] | Neural networks are known to be vulnerable to adversarial examples. Carefully chosen perturbations to real images, while imperceptible to humans, induce misclassification and threaten the reliability of deep learning systems in the wild. To guard against adversarial examples, we take inspiration from game theory and cast the problem as a minimax zero-sum game between the adversary and the model. In general, for such games, the optimal strategy for both players requires a stochastic policy, also known as a mixed strategy. In this light, we propose Stochastic Activation Pruning (SAP), a mixed strategy for adversarial defense. SAP prunes a random subset of activations (preferentially pruning those with smaller magnitude) and scales up the survivors to compensate. We can apply SAP to pretrained networks, including adversarially trained models, without fine-tuning, providing robustness against adversarial examples. Experiments demonstrate that SAP confers robustness against attacks, increasing accuracy and preserving calibration. | [
"adversarial examples",
"mixed strategy",
"sap",
"stochastic activation",
"vulnerable",
"perturbations",
"real images",
"imperceptible"
] | Accept (Poster) | https://openreview.net/pdf?id=H1uR4GZRZ | https://openreview.net/forum?id=H1uR4GZRZ | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HJvA3yQQG",
"SJFnpOYxM",
"ryrXQ4wyz",
"ry5D1Z5xf",
"rkk5517Qf",
"Sk947JaBM",
"HJRkw1X7f",
"B1DQ5J77z"
],
"note_type": [
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"decision",
"official_comment",
"official_comment"
],
"note_created": [
1514499279064,
1511783856703,
1510585116729,
1511817058299,
1514498694977,
1517249330405,
1514497765980,
1514498590575
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper828/Authors"
],
[
"ICLR.cc/2018/Conference/Paper828/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper828/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper828/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper828/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper828/Authors"
],
[
"ICLR.cc/2018/Conference/Paper828/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response AnonReviewer2\", \"comment\": \"We thank Reviewer2 for thorough comments. We are glad that the reviewer appreciated the value of an adversarial defense technique that can be applied post-hoc and our exposition. We reply to specific points below:\\n\\n1. You are correct, the defender does not know the what policy any actual adversary will use, only what the optimal adversary might do, thus the objective of minimizing the worst-case performance. We are improving the draft to be clearer in this regard.\\n\\n2. Regarding: \\u201cIn section 4, I do not understand why the adversary uses only the sign and not also the value of the estimated gradient.\\u201d: The reason why we are considering only the sign is because we cap the infinity norm of the adversarial perturbation. This leads to taking a step of equal size in each input dimension and thus the gradient magnitude does not come into play. This approach is standard in the recent academic study of adversarial examples and follows work by Goodfellow et al. (2014), which showed that imperceptible adversarial examples could be produced efficiently in this manner.. \\n\\nOne motivation for considering the infinity norm (vs L2 or L1) for constraining the size of an adversarial perturbation is that it accords more closely with perceptual similarity. For example, it\\u2019s possible to devise a perturbation with small L2 norm that is perceptually obvious because it moves a small group of pixels a large amount. \\n\\nNaturally, a stronger adversary might pursue an iterative approach rather than making one large perturbation. To this end, we are currently running experiments with iterative attacks and the initial results are promising - SAP continues to significantly outperform the dense model. We will add these results to the paper when they are ready.\\n\\n3. We are grateful for the reviewer\\u2019s suggestions for improving the exposition and are currently working to revise the draft in accordance with these recommendations. To start, we have improved some of the (previously) confusing language that might have failed to distinguish between the optimal adversary and some arbitrary adversary which may not apply the optimal perturbation.\"}",
"{\"title\": \"Simple and yet effective method against adversarial attack post traning.\", \"rating\": \"7: Good paper, accept\", \"review\": \"This paper propose a simple method for guarding trained models against adversarial attacks. The method is to prune the network\\u2019s activations at each layer and renormalize the outputs. It\\u2019s a simple method that can be applied post-training and seems to be effective.\\n\\nThe paper is well written and easily to follow. Method description is clear. The analyses are interesting and done well. I am not familiar with the recent work in this area so can not judge if they compare against SOTA methods but they do compare against various other methods.\\n\\nCould you elaborate more on the findings from Fig 1.c Seems that the DENSE model perform best against randomly perturbed images. Would be good to know if the authors have any intuition why is that the case.\\n\\nThere are some interesting analysis in the appendix against some other methods, it would be good to briefly refer to them in the main text.\\n\\nI would be interested to know more about the intuition behind the proposed method. It will make the paper stronger if there were more content arguing analyzing the intuition and insight that lead to the proposed method.\\n\\nAlso would like to see some notes about computation complexity of sampling multiple times from a larger multinomial.\\n\\nAgain I am not familiar about different kind of existing adversarial attacks, the paper seem to be mainly focus on those from Goodfellow et al 2014. Would be good to see the performance against other forms of adversarial attacks as well if they exist.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"overall, this paper presents a practical method to prevent a classifier from adversarial examples, which can be applied in addition to adversarial training. The presentation could be improved.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper investigates a new approach to prevent a given classifier from adversarial examples. The most important contribution is that the proposed algorithm can be applied post-hoc to already trained networks. Hence, the proposed algorithm (Stochastic Activation Pruning) can be combined with algorithms which prevent from adversarial examples during the training.\\n\\nThe proposed algorithm is clearly described. However there are issues in the presentation.\\n\\nIn section 2-3, the problem setting is not suitably introduced.\", \"in_particular_one_sentence_that_can_be_misleading\": \"\\u201cGiven a classifier, one common way to generate an adversarial example is to perturb the input in direction of the gradient\\u2026\\u201d\\nYou should explain that given a classifier with stochastic output, the optimal way to generate an adversarial example is to perturb the input proportionally to the gradient. The practical way in which the adversarial examples are generated is not known to the player. An adversary could choose any policy. The only thing the player knows is the best adversarial policy.\\n\\nIn section 4, I do not understand why the adversary uses only the sign and not also the value of the estimated gradient. Does it come from a high variance? If it is the case, you should explain that the optimal policy of the adversary is approximated by \\u201cfast gradient sign method\\u201d. \\n\\nIn comparison to dropout algorithm, SAP shows improvements of accuracy against adversarial examples. SAP does not perform as well as adversarial training, but SAP could be used with a trained network. \\n\\nOverall, this paper presents a practical method to prevent a classifier from adversarial examples, which can be applied in addition to adversarial training. The presentation could be improved.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting heuristic but little theoretical justification\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The authors propose to improve the robustness of trained neural networks against adversarial examples by randomly zeroing out weights/activations. Empirically the authors demonstrate, on two different task domains, that one can trade off some accuracy for a little robustness -- qualitatively speaking.\\n\\nOn one hand, the approach is simple to implement and has minimal impact computationally on pre-trained networks. On the other hand, I find it lacking in terms of theoretical support, other than the fact that the added stochasticity induces a certain amount of robustness. For example, how does this compare to random perturbation (say, zero-mean) of the weights? This adds stochasticity as well so why and why not this work? The authors do not give any insight in this regard.\\n\\nOverall, I still recommend acceptance (weakly) since the empirical results may be valuable to a general practitioner. The paper could be strengthened by addressing the issues above as well as including more empirical results (if nothing else).\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thanks for your clear review of our paper. We are glad that you appreciated both the method and the clarity of exposition.\\n\\n1. Regarding Fig 1.c: While dense models are susceptible to adversarial attack, they are actually quite robust to random noise. The purpose of reporting the results of this experiment is to provide context for the other results. Because dense models are not especially vulnerable to random noise, we are not surprised that they perform well here. \\n\\n2. Thanks for the suggestion that the analysis in the appendix should be summarized within the body of the paper. Per your request, we have added an additional subsection (5.3) in the current draft that briefly describes the baselines and we have included a corresponding figure that shows the quantitative results for each.\\n\\n3. While we are reluctant to present an explanation for a phenomena that we do not fully understand, we are happy to share the intuitions that guided us in developing the algorithm: \\n\\nOriginally we were looking sparsifying the weights and/or activations of the network. We were encouraged by results, e.g. https://arxiv.org/abs/1510.00149, showing high accuracy with sparsified weights (as by pruning). We thought that by sparsifying a network, we might maintain high accuracy while lowering the Lipschitz constant and thus conferring some robustness against small perturbations. We later drew some inspiration from randomized algorithms that sparsify matrices by randomly dropping entries according to their weights and scaling up the survivors to produce a sparse matrix with similar spectral properties to the original.\\n\\n4. Sampling from the multinomial is fast. Without getting into detail about how many random bits are needed, given uniform samples, we can convert to a sample from a multinomial by performing a binary search. So it\\u2019s roughly k log(n) where k is the number of samples and n is the number of activations. As a practical concern, sampling from the multinomial in our algorithms does not comprise a significant computational obstacle.\\n\\n5. As you correctly point out, In our experiments, we adopt approach from Goodfellow et al. of evaluating with adversarial perturbations produced by taking a single step with capped infinity norm. However, we generate these attacks differently for each model. Against our stochastic models, the adversary produces the attack by estimating the gradient with MC samples. \\n\\n6. Per your suggestions we have compared against a stronger modes of attack, namely an iterative update where we take multiple small updates, each of capped infinity norm. In these experiments, SAP continues to outperform the dense model significantly. We are currently compiling these results and will add them to the draft when ready.\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This is a borderline paper. The reviewers are happy with the simplicity of the proposed method and the fact that it can be applied after training; but are concerned by the lack of theory explaining the results. I will recommend accepting, but I would ask the authors add the additional experiments they have promised, and would also suggest experiments on imagenet.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"General reply to all reviewers\", \"comment\": \"We would like to thank the reviewers for their thoughtful responses to our paper. We are glad to see that there is a consensus among the reviewers to accept and are grateful to each of the reviewers for critical suggestions that will help us to improve the work. Please find individual replies to each of the reviews in the respective threads.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks for the thoughtful review of our paper. We are glad that you recognize the empirical strength of the result and the simplicity of the method. We are also share your desire for greater theoretical understanding.\", \"regarding\": \"\\u201chow does this compare to random perturbation (say, zero-mean) of the weights?\\u201d.\\nWe ran this experiment, and found that it did not help. Additionally, for a more direct comparison, we compared against zero-mean Gaussian noise applied to the activations. We call this method Random Noisy Activations (RNA). It was previously described only in Appendix B, but we have now added a brief description to section 5 and reported the quantitative results in Figure 5.\\n\\nDespite extensive empirical study, precisely why our method works but random noise on the activations does not remains unclear. While we can imagine some ways of spinning a theoretical story post-hoc, the honest answer is that we do not yet possess a solid theoretical explanation. We share your desire for a greater understanding and plan to investigate this direction further in future work.\\n\\n***TL;DR: Per your suggestions, we have improved the draft by running additional experiments. Please find in Figure 5 results for 0-mean gaussian noise applied to weights with sigma values {.01, .02, \\u2026, .05}, as well as results for several other sensible baselines and greater detail in Appendix B.***\"}"
]
} |
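The pruning step debated in the thread above is compact enough to sketch. The following is a reconstruction from the abstract and the authors' comments — including the k·log(n) inverse-CDF sampling mentioned in the response to AnonReviewer1 and the rescaling of survivors — not the authors' released code; the survival-probability rescaling assumes sampling with replacement.

import numpy as np

def sap_layer(h, k, rng):
    # Stochastic Activation Pruning (sketch): draw k samples (with
    # replacement) from a multinomial with probabilities proportional to
    # |h_i|, zero out units never drawn, and rescale survivors so the
    # layer output is unchanged in expectation.
    p = np.abs(h) / np.abs(h).sum()
    cdf = np.cumsum(p)
    # Inverse-CDF sampling: k binary searches, roughly k * log(n) work.
    idx = np.searchsorted(cdf, rng.random(k))
    idx = np.minimum(idx, h.size - 1)  # guard against round-off at the top of the CDF
    drawn = np.unique(idx)
    out = np.zeros_like(h)
    # 1 - (1 - p_i)^k is the probability that unit i is drawn at least once,
    # so dividing by it makes E[out] equal to h.
    out[drawn] = h[drawn] / (1.0 - (1.0 - p[drawn]) ** k)
    return out

h = np.array([0.1, -2.0, 0.5, 3.0])
print(sap_layer(h, k=3, rng=np.random.default_rng(0)))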
BkM27IxR- | Learning to Optimize Neural Nets | [
"Ke Li",
"Jitendra Malik"
] | Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning. In this paper, we explore learning an optimization algorithm for training shallow neural nets. Such high-dimensional stochastic optimization problems present interesting challenges for existing reinforcement learning algorithms. We develop an extension that is suited to learning optimization algorithms in this setting and demonstrate that the learned optimization algorithm consistently outperforms other known optimization algorithms even on unseen tasks and is robust to changes in stochasticity of gradients and the neural net architecture. More specifically, we show that an optimization algorithm trained with the proposed method on the problem of training a neural net on MNIST generalizes to the problems of training neural nets on the Toronto Faces Dataset, CIFAR-10 and CIFAR-100. | [
"Learning to learn",
"meta-learning",
"reinforcement learning",
"optimization"
] | Reject | https://openreview.net/pdf?id=BkM27IxR- | https://openreview.net/forum?id=BkM27IxR- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"HJt6cPaQz",
"B1ieSJTrz",
"Skd5kh5ef",
"BydHF89lz",
"HyZeqPTmM",
"Bynx0wHgM",
"HJ4qKvpmM"
],
"note_type": [
"official_comment",
"decision",
"official_review",
"official_review",
"official_comment",
"official_review",
"official_comment"
],
"note_created": [
1515186881453,
1517249778699,
1511862160390,
1511840064081,
1515186665171,
1511517684099,
1515186571776
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper272/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper272/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper272/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper272/Authors"
],
[
"ICLR.cc/2018/Conference/Paper272/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper272/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Response to your review\", \"comment\": \"The following are new compared to (Li & Malik, 2016):\\n\\n- A partially observable formulation, which allows the use of observation features that are noisier but can be computed more efficiently than state features. Because only the observation features are used at test time, this improves the time and space efficiency of the learned algorithm. \\n- Learns an optimization algorithm that works in a stochastic setting (when we have noisy gradients). \\n- Introduced features so that the search is only over algorithms that are invariant to scaling of the objective functions and/or the parameters. \\n- The update formula is now parameterized as a recurrent net rather than a feedforward net. \\n- The block-diagonal structure on the matrices, which allows the method to scale to high-dimensional problems. \\n\\nAs discussed in Sect. 3.5, the block-diagonal structure is what enables us to learn an optimization algorithm for high-dimensional problems. Because the time complexity of LQG is cubic in the state dimensionality, (Li & Malik, 2016) cannot be tractably applied to the high-dimensional problems considered in our paper. \\n\\nThe objective values shown in the plots are computed on the training set. However, curves on the test set are similar. \\n\\nNote that the optimization algorithm is only (meta-)trained *once* on the problem of training on MNIST and is *not* retrained on the problems of (base-)training on TFD, CIFAR-10 and CIFAR-100. The time used for meta-training is therefore a one-time upfront cost; it is analogous to the time taken by researchers to devise a new optimization algorithm. For this reason, it does not make sense to include the time used for meta-training when comparing meta-test time performance. \\n\\nWe'll clarify the details on hyperparameters in the camera-ready. \\n\\nRegarding terminology, \\\"learning what to learn\\\" is a broader area that subsumes multi-task learning and also includes transfer learning and few-shot learning, for example. \\\"Learning which model to learn\\\" is different from the usual base-level learning because the aim is to search over hypothesis classes (model classes) rather than individual hypotheses (model parameters). Note that the use of these terms to refer to multi-task learning and hyperparameter optimization is not some sort of re-branding exercise; it is simply a reflection of how the terms \\\"learning to learn\\\" and \\\"meta-learning\\\" were used historically. For example, Thrun & Pratt's book on \\\"Learning of Learn\\\" (2012) focuses on \\\"learning what to learn\\\", and Brazdil et al.\\u2019s book on \\\"Metalearning\\\" (2008) focuses on \\\"learning which model to learn\\\". Because there has never been consensus on the precise definition of \\\"learning to learn\\\", the \\\"what\\\", \\\"which\\\" and \\\"how\\\" subsections in Sect. 2 are simply a convenient taxonomy of the diverse range of methods that all fall under the umbrella of \\\"learning to learn\\\".\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The presented work is a good attempt to expand the work of Li and Malik to the high-dimensional, stochastic setting. Given the reviewer comments, I think the paper would benefit from highlighting the comparatively novel aspects, and in particular doing so earlier in the paper.\\n\\nIt is very important, given the nature of this work, to articulate how the hyperparameters of the learned optimizers, and of the hand-engineered optimizers are chosen. It is also important to ensure that the amount of time spent on each is roughly equal in order to facilitate an apples-to-apples comparison.\\n\\nThe chosen architectures are still quite small compared to today's standards. It would be informative to see how the learned optimizers compare on realistic architectures, at least to see the performance gap.\\n\\nPlease clarify the objective being optimized, and it would be useful to report test error.\\n\\nThe approach is interesting, but does not yet meet the threshold required for acceptance.\"}",
"{\"title\": \"Learning to Optimize Neural Nets\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Summary of the paper\\n---------------------------\\nThe paper derives a scheme for learning optimization algorithm for high-dimensional stochastic problems as the one involved in shallow neural nets training. The main motivation is to learn to optimize with the goal to design a meta-learner able to generalize across optimization problems (related to machine learning applications as learning a neural network) sharing the same properties. For this sake, the paper casts the problem into reinforcement learning framework and relies on guided policy search (GPS) to explore the space of states and actions. The states are represented by the iterates, the gradients, the objective function values, derived statistics and features, the actions are the update directions of parameters to be learned. To make the formulated problem tractable, some simplifications are introduced (the policies are restricted to gaussian distributions family, block diagonal structure is imposed on the involved parameters). The mean of the stationary non-linear policy of GPS is modeled as a recurrent network with parameters to be learned. A hatch of how to learn the overall process is presented. Finally experimental evaluations on synthetic or real datasets are conducted to show the effectiveness of the approach.\\n\\nComments\\n-------------\\n- The overall idea of the paper, learning how to optimize, is very seducing and the experimental evaluations (comparison to normal optimizers and other meta-learners) tend to conclude the proposed method is able to learn the behavior of an optimizer and to generalize to unseen problems.\\n- Materials of the paper sometimes appear tedious to follow, mainly in sub-sections 3.4 and 3.5. It would be desirable to sum up the overall procedure in an algorithm. Page 5, the term $\\\\omega$ intervening in the definition of the policy $\\\\pi$ is not defined.\\n- The definitions of the statistics and features (state and observation features) look highly elaborated. Can authors provide more intuition on these precise definitions? How do they impact for instance changing the time range in the definition of $\\\\Phi$) in the performance of the meta-learner?\\n- Figures 3 and 4 illustrate some oscillations of the proposed approach. Which guarantees do we have that the algorithm will not diverge as L2LBGDBGD does? How long should be the training to ensure a good and stable convergence of the method?\\n- An interesting experience to be conducted and shown is to train the meta-learner on another dataset (CIFAR for example) and to evaluate its generalization ability on the other sets to emphasize the effectiveness of the method.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This paper proposed a reinforcement learning (RL) based method to learn an optimal optimization algorithm for training shallow neural networks. This work is an extended version of [Li &Malik 2016] aiming to address the high-dimensional problem.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposed a reinforcement learning (RL) based method to learn an optimal optimization algorithm for training shallow neural networks. This work is an extended version of [1], aiming to address the high-dimensional problem.\", \"strengths\": \"The proposed method has achieved a better convergence rate in different tasks than all other hand-engineered algorithms.\\nThe proposed method has better robustess in different tasks and different batch size setting.\\nThe invariant of coordinate permutation and the use of block-diagonal structure improve the efficiency of LQG.\", \"weaknesses\": \"1. Since the batch size is small in each experiment, it is hard to compare convergence rate within one epoch. More iterations should be taken and the log-scale style figure is suggested. \\n\\n2. In Figure 1b, L2LBGDBGD converges to a lower objective value, while the other figures are difficult to compare, the convergence value should be reported in all experiments.\\n\\n3. \\u201cThe average recent iterate\\u201c described in section 3.6 uses recent 3 iterations to compute the average, the reason to choose \\u201c3\\u201d, and the effectiveness of different choices should be discussed, as well as the \\u201c24\\u201d used in state features.\\n\\n4. Since the block-diagonal structure imposed on A_t, B_t, and F_t, how to choose a proper block size? Or how to figure out a coordinate group?\\n\\n5. The caption in Figure 1,3, \\u201cwith 48 input and hidden units\\u201d should clarify clearly.\\nThe curves of different methods are suggested to use different lines (e.g., dashed lines) to denote different algorithms rather than colors only.\\n\\n6. typo: sec 1 parg 5, \\u201ccurrent iterate\\u201d -> \\u201ccurrent iteration\\u201d.\", \"conclusion\": \"Since RL based framework has been proposed in [1] by Li & Malik, this paper tends to solve the high-dimensional problem. With the new observation of invariant in coordinates permutation in neural networks, this paper imposes the block-diagonal structure in the model to reduce the complexity of LQG algorithm. Sufficient experiment results show that the proposed method has better convergence rate than [1]. But comparing to [1], this paper has limited contribution.\\n\\n[1]: Ke Li and Jitendra Malik. Learning to optimize. CoRR, abs/1606.01885, 2016.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to your review\", \"comment\": \"The coordinate group depends on the structure of the underlying optimization problem and should correspond to the set of parameters for which the particular ordering among them has little or no significance. For example, for neural nets, the parameters corresponding to the weights in the same layer should be in the same coordinate group, because their ordering can be permuted (by permuting the units above and below) without changing the function the neural net computes.\\n\\nThe inability to scale to high-dimensional problems was actually the main limitation of the previous work (Li & Malik, 2016) [1] \\u2013 it was unclear at the time if this could be overcome (see for example the reviews of [1] at ICLR 2017). Overcoming the scalability issue therefore represents a significant contribution.\"}",
"{\"title\": \"See below for details\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"[Main comments]\\n\\n* I would advice the authors to explain in more details in the intro\\nwhat's new compared to Li & Malik (2016) and Andrychowicz et al. (2016).\\nIt took me until section 3.5 to figure it out.\\n\\n* If I understand correctly, the only new part compared to Li & Malik (2016) is\\nsection 3.5, where block-diagonal structure is imposed on the learned matrices.\\nIs that correct?\\n\\n* In the experiments, why not comparing with Li & Malik (2016)? (i.e., without\\n block-diagonal structure)\\n\\n* Please clarify whether the objective value shown in the plots is wrt the training\\n set or the test set. Reporting the training objective value makes little\\nsense to me, unless the time taken to train on MNIST is taken into account in\\nthe comparison. \\n\\n* Please clarify what are the hyper-parameters of your meta-training algorithm\\n and how you chose them.\\n\\nI will adjust my score based on the answer to these questions.\\n\\n[Other comments]\\n\\n* \\\"Given this state of affairs, perhaps it is time for us to start practicing\\n what we preach and learn how to learn\\\"\\n\\nThis is in my opinion too casual for a scientific publication...\\n\\n* \\\"aim to learn what parameter values of the base-level learner are useful\\n across a family of related tasks\\\"\\n\\nIf this is essentially multi-task learning, why not calling it so? \\\"Learning\\nwhat to learn\\\" does not mean anything. I understand that the authors wanted to\\nhave \\\"what\\\", \\\"which\\\" and \\\"how\\\" sections but this is not clear at all.\\n\\nWhat is a \\\"base-level learner\\\"? I think it would be useful to define it more\\nprecisely early on.\\n\\n* I don't see the difference between what is described in Section 2.2\\n (\\\"learning which model to learn\\\") and usual machine learning (searching for\\nthe best hypothesis in a hypothesis class).\\n\\n* Typo: p captures the how -> p captures how\\n\\n* The L-BFGS results reported in all Figures looked suspicious to me. How do you\\n explain that it converges to a an objective value that is so much worse?\\nMoreover, the fact that there are huge oscillations makes me think that the\\nauthors are measuring the function value during the line search rather than\\nthat at the end of each iteration.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to your review\", \"comment\": \"Below is an intuitive explanation of the state and observation features:\\n\\nAverage recent iterate, gradient and objective value are the means over the three most recent iterates, gradients and objective values respectively, unless there are fewer than three iterations in total, in which case the mean is taken over the iterations that have taken place so far.\", \"the_state_features_consist_of_the_following\": \"- The relative change in the average recent objective value compared to five iterations ago, as of every fifth iteration in the 120 most recent iterations; intuitively, this can capture if and by how much the objective value is getting better or worse. \\n- The average recent gradient normalized by the element-wise magnitude of the average recent gradient five iterations ago, as of every fifth iteration in the 125 most recent iterations. \\n- The normalized absolute change in the average iterate from five iterations ago, as of every fifth iteration in the 125 most recent iterations; intuitively, this can capture the per-coordinate step sizes we used previously. \\n\\nSimilarly, the observation features consist of the following:\\n- The relative change in the objective value compared to the previous iteration\\n- The gradient normalized by the element-wise magnitude of the gradient from the previous iteration\\n- The normalized absolute change in the iterate from the previous iteration\\n\\nThe normalization is designed so that the features are invariant to scaling of the objective function and to reparameterizations that involve scaling of the individual parameters. \\n\\nThe reason that the algorithm learned using the proposed approach does not diverge as L2LBGDBGD does is because the training is done under a more challenging and realistic setting, namely when the local geometries of the objective function are not known a priori. This is the setting under which the learned algorithm must operate at test time, since the geometry of an unseen objective function is unknown. This is the key difference between the proposed method and L2LBGDBGD, and more broadly, between reinforcement learning and supervised learning. L2LBGDBGD assumes the local geometry of the objective function to be known and so requires the local geometries of the objective function seen at test time to match the local geometries of one of the objective functions seen during training. Whenever this does not hold, it diverges. As a result, there is very little generalization to different objective functions. On the other hand, the proposed approach does not assume known geometry and therefore the algorithm it learns is more robust to differences in geometry at test time. \\n\\nIn reinforcement learning (RL) terminology, L2LBGDBGD assumes that the model/dynamics is known, whereas the proposed method assumes the model/dynamics is unknown. In the context of learning optimization algorithms, the dynamics captures what the next gradient is likely to be given the current gradient and step vector, or in other words, the local geometry of the objective function. \\n\\nThe reason why the algorithm learned using the proposed approach oscillates in Figs. 3 and 4 is because the batch size is reduced to 10 from 64 (which was the batch size used during meta-training), and so the gradients are noisier. 
Importantly, the algorithm is able to recover from the oscillations and converge to a good optimum in the end, demonstrating the robustness of the algorithm learned using the proposed approach. \\n\\nIn practice, about 10-20 iterations of the GPS algorithm are needed to obtain a good optimization algorithm.\"}"
]
} |
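The scale-invariant observation features the authors describe in their response above (relative change in the objective, gradient normalized by the element-wise magnitude of the previous gradient, normalized absolute change in the iterate) reduce to a few lines. This is a sketch of that verbal description only; the epsilon guard and the choice of normalizer for the iterate change are assumptions, not the authors' implementation.

import numpy as np

def observation_features(f_prev, f_cur, g_prev, g_cur, x_prev, x_cur, eps=1e-8):
    # Each feature is normalized so it is invariant to rescaling the
    # objective function or individual parameters, per the description above.
    rel_obj_change = (f_cur - f_prev) / (abs(f_prev) + eps)
    norm_grad = g_cur / (np.abs(g_prev) + eps)
    norm_step = np.abs(x_cur - x_prev) / (np.abs(x_prev) + eps)
    return rel_obj_change, norm_grad, norm_step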
BJhxcGZCW | Generative Discovery of Relational Medical Entity Pairs | [
"Chenwei Zhang",
"Yaliang Li",
"Nan Du",
"Wei Fan",
"Philip S. Yu"
] | Online healthcare services can provide the general public with ubiquitous access to medical knowledge and reduce the information access cost for both individuals and societies. To promote these benefits, it is desired to effectively expand the scale of high-quality yet novel relational medical entity pairs that embody rich medical knowledge in a structured form. To fulfill this goal, we introduce a generative model called Conditional Relationship Variational Autoencoder (CRVAE), which can discover meaningful and novel relational medical entity pairs without the requirement of additional external knowledge. Rather than discriminatively identifying the relationship between two given medical entities in a free-text corpus, we directly model and understand medical relationships from diversely expressed medical entity pairs. The proposed model introduces the generative modeling capacity of the variational autoencoder to entity pairs, and has the ability to discover new relational medical entity pairs solely based on the existing entity pairs. Besides entity pairs, relationship-enhanced entity representations are obtained as another appealing benefit of the proposed method. Both quantitative and qualitative evaluations on real-world medical datasets demonstrate the effectiveness of the proposed method in generating relational medical entity pairs that are meaningful and novel. | [
"Knowledge Discovery",
"Generative Modeling",
"Medical",
"Entity Pair"
] | Reject | https://openreview.net/pdf?id=BJhxcGZCW | https://openreview.net/forum?id=BJhxcGZCW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"Bym8Y7aXf",
"HJ-9qfp7M",
"H1_4279lf",
"r1hITk7ez",
"SJyOXNclf",
"Sk4evyprz",
"S1Gt1m6mM",
"HyQ_CgtPG"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"decision",
"official_comment",
"official_comment"
],
"note_created": [
1515170122685,
1515166344753,
1511828527546,
1511353684096,
1511830375280,
1517250284300,
1515167609806,
1519091307374
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper896/Authors"
],
[
"ICLR.cc/2018/Conference/Paper896/Authors"
],
[
"ICLR.cc/2018/Conference/Paper896/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper896/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper896/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper896/Authors"
],
[
"ICLR.cc/2018/Conference/Paper896/Authors"
]
],
"structured_content_str": [
"{\"title\": \"Comments on Review\", \"comment\": \"Thanks for your review.\\n\\n1.\\tThe medical entity pairs generated by proposed model can be used to expand an existing knowledge graph with new entities as vertexes and relations as edges in a generative fashion. However, the KB completion task and the proposed entity pair discovery task share different objectives, and adopt totally different approaches:\\n a)\\tIn the medical domain, it is difficult to obtain a full spectrum of free-text where all kinds of relational medical entity pairs are co-occurred. It is efficient to learn the intrinsic medical relations from existing entity pairs directly and generate unseen entity pairs in a generative fashion. Although both tasks provide additional entity pairs as the output, we approach this problem from a novel, generative perspective that significantly lowers the data requirements during training. Table 3 shows that our model works well even when all the training entity pairs have the same relationship. This can not be achieved by discriminatively trained KB completion methods. KB completion methods like Trans-E relies on entity pairs having different relations and learns to distinguish one from another; otherwise negative entity pairs with no semantics meanings are used.\\n b)\\tThe generative discovery model is supposed to only generate rational entity pairs. Moreover, it is shown to have the ability to generate entity pairs having a pre-assigned relationship type, aka conditional inference, without the requirement of further domain knowledge. In the KB completion task, the rational entity pairs cannot be even obtained when there is no high-quality test set that contains entity pairs having that relationship. Otherwise, additional expert knowledge may be involved (e.g. to make sure that there exists a sentence that mentions two new entities having a certain relationship). Even then, the KB completion model needs to successfully classify the relationship for each test sample. The proposed model makes the conditional inference possible and efficient.\\n c)\\tLast but not least, it is unfair to simply evaluate the rational entity pairs generated by the proposed model against a discriminatively trained KB completion model that learns to tell the rational relation from other relations (or simply from a negative relation) when candidate entity pairs are already given for evaluation. We genuinely believe that it is way more challenging to understand what an apple is in order to create a new apple with a different look, than simply trained to distinguish an apple from a banana. \\n\\n2.\\tIn relation extraction methods where the objective is to detect whether or not a certain relation exists in a sentence, some words in the sentence serve as indicators. For example, for the \\\"born in\\\" relationship between a person and a place, words like \\\"born\\\", \\\"from\\\" are crucial. In the medical domain, free-text that contain a full spectrum of sentences that cover all medical entity pairs are hard to obtain, let alone domain-specific indicator words that are available to use. Without such text data as additional contexts, the proposed model is still able to generate novel entity pairs, which we consider as a major contribution.\\n\\n3.\\tFor our generative approach, \\\"nearest neighbor search\\\" is only performed as the last step of the decoder during evaluation to get natural language entities from the generated embeddings. 
Such operation is only performed on the generated rational entity pairs: it is not required at all during the training process. In many classic \\\"discriminatively-trained\\\" KB completion models, such search is usually used to trim candidate entity pairs that are not worth evaluating.\\n\\n4.\\tThe medical dataset has unique properties that other datasets do not have, which make it suitable for our generative entity pair discovery task. \\n\\ta)\\tFirst, the medical entity pairs contain clear and unambiguous relational semantics. This allows the model to directly encode two entities into the latent space without incorporating free-text contexts in which two medical entities are co-occurred. For example, the entity pairs <urethritis, urethra itching> and <radial nerve palsy, upper extremity weakness> can be used to learn the medical relationship from a disease to a symptom which it may cause. On the contrary, the entity pair <Obama, USA> in datasets, such as FB15K-237, possesses multiple relationships such as \\\"born in\\\", \\\"president of\\\", and \\\"live in\\\".\\n\\tb)\\tSecond, different medical relationships used in this work are closely correlated with each other. For example, disease->disease, disease->symptom and symptom->symptom relationships share common entities, which is not frequently observed in other datasets. The proposed method is able to benefit from such property when solely learning from entity pairs. As shown in Table 3, quality and novelty are consistently improved when multiple correlated medical relationships are trained together, other than trained separately.\"}",
"{\"title\": \"Comments on Review\", \"comment\": \"Thanks a lot for your review.\\n\\n1.\\tThe testing is not conducted on the validation set. The validation set is only used for hyperparameter tuning. We split the labeled entity pairs into training (70%) and validation (30%) set (described in the first paragraph of Section 3.1). As described in Appendix B, a hyperparameter analysis is conducted to show the validation losses when the model is trained with a wide range of hyperparameter settings, where the hyperparameter setting with the lowest validation loss is adopted.\\nDuring testing, the proposed CRVAE model is able to generate unseen, meaningful entity pairs for a given medical relationship. The generator of the proposed model samples from the latent space according to the relationship of new entity pairs we want to obtain and then decodes the sampled vector, along with the relationship indicator, into entity pairs that are evaluated separately without the use of the validation set. The quantitative evaluation results are shown in Table 2 and Table 3, where three measurements are used: quality, support, and novelty. For qualitative evaluation, additional case studies and visualizations are provided in Section 3.4.2-3.4.4.\\n\\n2.\\tWe want to have a more controllable generation process in terms of which relationship of entity pairs we want to generate. The representation of r in the generation part enables the conditional inference: it guides the model to generate entity pairs having a certain relationship (instead of using a random noise to generate entity pairs having arbitrary relationships), which is one of our key contributions. As shown in Figure 2 in Section 2.4, the representation of r is fed to the generator in two stages: 1) when generating the latent vector $\\\\hat z$ from the latent space 2) when decoding the sampled vector $\\\\hat z$.\\n\\n Another reason for introducing the representation of r into the generation process is that the latent space itself does not capture clear enough information without the use of the representation of r. We\\u2019ve introduced a baseline model RVAE (without incorporating r) and illustrated our observations in Figure 4. We color the labeled validation samples in the latent space, from which we can find that the baseline model RVAE (without incorporating r) is able to map entity pairs with different relationships vaguely into different regions in the latent space. However, since the label r is not used in RVAE, it is still hard to draw a clear enough boundary for each relationship, so as to sample accordingly and generate entity pairs having that relationship. This motivates us to incorporate the representation of r into the generation process. As shown in the right part of Figure 4, when r is given to the generator, the categorical information it provides naturally allows the generator to sample differently when the relationship varies. For example, if we want to generate entity pairs with symptom->disease relation, we will feed both the one-hot vector r indicating the symptom->disease relationship, as well as a latent value $\\\\hat z$ sampled from the latent space that is conditioned on the symptom->disease relation, to the generator in order to get entity pairs having the symptom->disease relationship.\"}",
"{\"title\": \"An interesting application\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The authors suggest using a variational autoencoder to infer binary relationships between medical entities. The model is quite simple and intuitive and the authors demonstrate that it can generate meaningful relationships between pairs of entities that were not observed before.\\nWhile the paper is very well-written I have certain concerns regarding the motivation, model, and evaluation methodology followed:\\n\\n1) A stronger motivation for this model is required. Having a generative model for causal relationships between symptoms and diseases is \\\"intriguing\\\" yet I am really struggling with the motivation of getting such a model from word co-occurences in a medical corpus. I can totally buy the use of the proposed model as means to generate additional training data for a discriminative model used for information extraction but the authors need to do a better job at explaining the downstream applications of their model. \\n\\n2) The word embeddings used seem to be sufficient to capture the \\\"knowledge\\\" included in the corpus. An ablation study of the impact of word embeddings on this model is required. \\n\\n3) The authors do not describe how the data from xywy.com were annotated. Were they annotated by experts in the medical domain or random users?\\n\\n4) The metric of quality is particularly ad-hoc. Meaningful relationships in a medical domain and evaluation using random amazon mechanical turk workers do not seem to go well together. \\n\\n5) How does the proposed methods compare against a simple trained extractor? For instance one can automatically extract several linguistic features of the sentences two known related entities appeared with and learn how to extract data. The authors need to compare against such baselines or justify why they cannot be used.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"doubts on experimental setting\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"SUMMARY.\\n\\nThe paper presents a variational autoencoder for generating entity pairs given a relation in a medical setting.\\nThe model strictly follows the standard VAE architecture with an encoder that takes as input an entity pair and a relation between the entities.\\nThe encoder maps the input to a probabilistic latent space.\\nThe latent variables plus a one-hot-encoding representation of the relation is used to reconstruct the input entities.\\nFinally, a generator is used to generate entity pairs give a relation.\\n\\n----------\\n\\nOVERALL JUDGMENT\\nThe paper presents a clever use of VAEs for generating entity pairs conditioning on relations.\\nMy main concern about the paper is that it seems that the authors have tuned the hyperparameters and tested on the same validation set.\\nIf this is the case, all the analysis and results obtained are almost meaningless.\\nI suggest the authors make clear if they used the split training, validation, test.\\nUntil then it is not possible to draw any conclusion from this work.\\n\\nAssuming the experimental setting is correct, it is not clear to me the reason of having the representation of r (one-hot-vector of the relation) also in the decoding/generation part.\\nThe hidden representation obtained by the encoder should already capture information about the relation.\\nIs there a specific reason for doing so?\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Lack of advantages over (or evaluations against) pre-existing work\", \"rating\": \"2: Strong rejection\", \"review\": \"In the medical context, this paper describes the classic problem of \\\"knowledge base completion\\\" from structured data only (no text). The authors argue for the advantages of a generative VAE approach (but without being convincing). They do not cite the extensive literature on KB completion. They present experimental results on their own data set, evaluating only against simpler baselines of their own VAE approach, not the pre-existing KB methods.\\n\\nThe authors seem unaware of a large literature on \\\"knowledge base completion.\\\" E.g. [Bordes, Weston, Collobert, Bengio, AAAI, 2011], [Socher et al 2013 NIPS], [Wang, Wang, Guo 2015 IJCAI], [Gardner, Mitchell 2015 EMNLP], [Lin, Liu, Sun, Liu, Zhu AAAI 2015], [Neelakantan, Roth, McCallum 2015], \\n\\nThe paper claims that operating on pre-structured data only (without using text) is an advantage. I don't find the argument convincing. There are many methods that can operate on pre-structured data only, but also have the ability to incorporate text data when available, e.g. \\\"universal schema\\\" [Riedel et al, 2014].\\n\\nThe paper claims that \\\"discriminative approaches\\\" need to iterate over all possible entity pairs to make predictions. In their generative approach they say they find outputs by \\\"nearest neighbor search.\\\" But the same efficient search is possible in many of the classic \\\"discriminatively-trained\\\" KB completion models also.\\n\\nIt is admirable that the authors use an interesting (and to my knowledge novel) data set. But the method should also be evaluated on multiple now-standard data sets, such as FB15K-237 or NELL-995. The method is evaluated only against their own VAE-based alternatives. It should be evaluated against multiple other standard KB completion methods from the literature, such as Jason Weston's Trans-E, Richard Socher's Tensor Neural Nets, and Neelakantan's RNNs.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The authors seem to miss important related literature for their comparison.\\nThey also tuned hyperparameters and tested on the same validation set.\\nThey should split between train/validation/test.\\n\\nReviews are just too low across the board to accept.\"}",
"{\"title\": \"Comments on Review\", \"comment\": \"Thanks a lot for your review.\\n\\n1.\\tIn the medical domain, it is difficult to obtain a full spectrum of free-text in which all the relational medical entity pairs are co-occurred so that they can be further extracted in a discriminative fashion. The proposed generative method significantly lowers the data requirement for rational, novel medical entity pair discovery. It learns the intrinsic medical relations directly from the existing entity pairs without incorporating additional medical corpus in which two entities are co-occurred. As indicated in the review, the newly discovered entity pairs are definitely helpful in many ways: an intuitive downstream application is to provide more training samples for supervised learning models. Clustering could also benefit from the newly discovered entity pairs as a form of oversampling technique. \\n\\n2.\\tWe agree that the word embedding captures medical knowledge and embodies rich semantic information from the diversely expressed entity pairs. However, the word embedding cannot be removed for ablation study. It does not only build the backbone for entity pair representations and accelerate the model convergence, more importantly, the pre-trained word embeddings are necessary when decoding the generated word embeddings of the entity pairs into natural language entities. Without the word embedding, evaluation cannot be performed as we only obtain the generated embeddings, not entity pairs that are interpretable in the natural language for human annotation. Furthermore, the vocabulary of pre-trained word embedding is way larger than the number of unique entities in the labeled entity pairs. Using the word embedding may allow the model to decode unseen entities that exist in the vocabulary, but not in the training data.\\n\\n3.\\tThe relational medical entity pairs obtained from xywy.com are annotated manually by domain-experts. \\n\\n4.\\tThe generated relational medical entity pairs are evaluated both qualitatively and quantitatively. As far as we know, there is no existing quantitative metric for quality evaluation of the generated medical entity pairs. Therefore, human quality evaluation is conducted by Amazon Mechanical Turk workers. Instructions and requirements for workers are shown in Appendix C.\\n\\n5.\\tThe discriminative relation extraction from free-text and the generative entity pair discovery are two different tasks. The extractor is not explicitly evaluated in this work for the following reasons:\\n a.\\tDifferent training schema: the traditional extractor is trained discriminatively. It relies on the difference between entity pairs of different relationships and learns a decision boundary to distinguish one relation from another. The extractor fails to work in the case where all the training entity pairs belong to the same medical relation. Our generative setting solely learns from the existing entity pairs, no matter they belong to the same relationship or not. As shown in Table 3, our generative model works well when trained with entity pairs that all belong to the same relation, and works even better when entity pairs with different relations are trained together.\\n b.\\tDifferent testing schema: a large number of candidate entity pairs need to be provided and evaluated by the extractor in order to get the final, rational entity pairs. 
The choice of candidates sometimes involves additional expert knowledge; otherwise, any pairwise entities need to be fed to and tested by the extractor model. Our generative model learns to only generate rational medical entity pairs just given the type of relationship. When testing, we genuinely believe that it is way more challenging to understand what an apple is in order to create a new apple with a different look, than simply trained to discriminate an apple from a banana. Thus it is unfair to simply compare their results. \\n c.\\tUse of data: Our model does not need external documents in both training/testing phase. It only requires labeled data and pre-trained word embeddings. The extractor suffers from the data sparsity problem during training: it is hard to obtain a full spectrum of documents where two medical entity pairs are not only mentioned simultaneously in a single sentence but also pertain a specific medical relationship in that sentence. Also, the extractor relies on keywords or indicators in a single sentence to determine the existence of a certain relation, which is not required by our model.\"}",
"{\"title\": \"CRVAE for Efficient Relational Modeling\", \"comment\": \"1) Comments on related works\\nCurrent discriminatively trained models share different objectives, and adopt entirely different approaches to discover novel entity pairs. They rely on context as external resources, or well-prepared candidate entity pairs for the models to examine. The proposed model significantly lowers the data requirement for efficient relational medical entity pair discovery:\", \"relation_extraction_methods_usually_require_a_substantial_collection_of_contexts_over_a_full_spectrum_of_relationships_that_one_wants_to_work_on\": \"e.g. contexts obtained from free-text corpora where two entities co-occur in the same sentence with a relationship between them. As medical relationships in the real-world are becoming more and more complex and diversely expressed, such context is hard to obtain.\\n\\nKnowledge graph completion methods usually do not require contexts for training. However, they are vulnerable to the \\u201cgarbage-in, garbage-out\\u201d situation during testing: we can not even obtain the rational medical entity pairs for a specific relationship when no high-quality entity pairs are having that relationship among the candidate entity pairs. The choice of candidates may involve additional human annotation; otherwise, any dyadic combinations of medical entities need to be fed to and tested by the model, which is tedious and labor-intensive. While the generative nature of our model makes it only generate rational entity pairs by learning from the existing rational ones: no additional data needs to be prepared for efficient generative discovery.\\n\\n\\n2) Comments on the experiment setting\", \"the_proposed_model_discovers_entity_pairs_in_a_generative_fashion\": \"by directly sampling from the latent space, not by verifying pre-determined test cases. We don't need to prepare a test set for the model to examine. The validation set is not used for testing: it is used, and only used for hyperparameter study for the best model configuration. Quantitative and qualitative metrics such as Quality, Support, and Novelty are used to directly evaluate the meaningfulness and novelty of the generated entity pairs on real-world medical entity pairs.\"}"
]
} |
rkvDssyRb | Multi-Advisor Reinforcement Learning | [
"Romain Laroche",
"Mehdi Fatemi",
"Joshua Romoff",
"Harm van Seijen"
] | We consider tackling a single-agent RL problem by distributing it to $n$ learners. These learners, called advisors, endeavour to solve the problem from a different focus. Their advice, taking the form of action values, is then communicated to an aggregator, which is in control of the system. We show that the local planning method for the advisors is critical and that none of the ones found in the literature is flawless: the \textit{egocentric} planning overestimates values of states where the other advisors disagree, and the \textit{agnostic} planning is inefficient around danger zones. We introduce a novel approach called \textit{empathic} and discuss its theoretical aspects. We empirically examine and validate our theoretical findings on a fruit collection task. | [
"Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=rkvDssyRb | https://openreview.net/forum?id=rkvDssyRb | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"H18ZJWAgG",
"rJbjUB3JM",
"B1lRyEXEz",
"r1u6UJ6BG",
"B1m1clFlM"
],
"note_type": [
"official_review",
"official_review",
"official_comment",
"decision",
"official_review"
],
"note_created": [
1512079102390,
1510917785129,
1515565000394,
1517250240068,
1511750106586
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper161/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper161/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper161/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper161/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Well-written but lacks deep technical and empirical contributions\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"Summary\\n\\nThe paper is well-written but does not make deep technical contributions and does not present a comprehensive evaluation or highly insightful empirical results.\\n\\nAbstract / Intro\\n\\nI get the entire focus of the paper is some variant of Pac-Man which has received attention in the RL literature for Atari games, but for the most part the impressive advances of previous Atari/RL papers are in the setting that the raw video is provided as input, which is much different than solving the underlying clean mathematically abstracted problem (as a grid world with obstacles) as done here and evident in the videos. Further it is honestly hard for me to be strongly motivated about a paper that focuses on the need to decompose Pac-man into sub-agents/advisor value functions.\\n\\nSection 2\", \"another_historically_well_cited_paper_for_mdp_decomposition\": \"Flexible Decomposition Algorithms for Weakly Coupled Markov Decision Problems, Ronald Parr. UAI 98.\", \"https\": \"//www.aaai.org/ocs/index.php/AAAI/AAAI12/paper/download/5012/5336\", \"definition_1\": \"Sure, the problem will have local optima (attractors) when decomposed suboptimally -- I'm not sure what new insight we've gained from this analysis... it is a general problem with any function approximation scheme that does not guarantee that the rank ordering of actions for a state is preserved.\\n\\n* Agnostic\\n\\nOther than approximating some type of myopic rollout, I really don't see why this approach would be reasonable? I am surprised it works at all though my guess is that this could simply be an artifact of evaluating on a single domain with a specific structure.\\n\\n* Empathic\\n\\nThis appears to be the key contribution though related work certainly infringes on its novelty. Is this paper then an empirical evaluation of previous methods in a single Pac-man grid world variant?\\n\\nI wonder if the theory of DEC-MDPs would have any relevance for novel analysis here?\\n\\nSection 5\\n\\nI'm disappointed that the authors only evaluate on a single domain; presumably the empathic approach has applications beyond Pac-Man?\\n\\nThe fact that empathic generally performs better is not at all surprising. 
The fact that a modified discount factor for egocentric can also perform well is not surprising given that lower discount factors have often been shown to improve approximated MDP solutions, e.g.,\\n\\n Biasing Approximate Dynamic Programming with a Lower Discount Factor\\n\\n Marek Petrik, Bruno Scherrer (NIPS-08).\", \"http\": \"//marek.petrik.us/pub/Petrik2009a.pdf\\n\\n***\", \"side_note\": \"The following part is somewhat orthogonal to the review above in that I would not expect the authors to address this on revision, *but* at the same time I think it provides a connection to the special case of concurrent action decomposition into advisors, which could potentially provide a high impact direction of application for this work (i.e., concurrent problems are hard and show up in numerous operations research problems covering inventory control, logistics, epidemic response).\\n\\nFor the special case that each advisor is assigned to one action in a factored space of concurrent actions, the egocentric algorithm would be very close to the Hindsight approximation in Section 6 of this paper (including an additive decomposition of rewards):\\n\\n Planning in Factored Action Spaces with Symbolic Dynamic Programming\\n Aswin Nadamuni Raghavan, Alan Fern, Prasad Tadepalli, Roni Khardon, and Saket Joshi (AAAI-12).\", \"this_simple_algorithm_is_hard_to_beat_for_the_following_reason_that_connects_some_details_of_your_egocentric_and_empathic_settings\": \"rather than decomposing a concurrent MDP into independent problems per concurrent action, the optimization of each action (by each advisor) is done in sequence (advisors are ordered) and gets to condition on the previously selected advisor actions. So it provides an alternate paradigm where advisors actually get to see and condition their policy on what other advisors are doing. In my own work comparing optimal concurrent solutions to this approach, I have found this approach to be near-optimal and much more efficient to solve since it exploits decomposition.\\n\\nWhy is this relevant to this work? Because (a) it suggests another variant of the advisor decomposition that at least makes sense in the case of concurrent actions (and perhaps shared actions though this would require some extension) and (b) it suggests there are more options than just the full egocentric and empathic settings in this important class of concurrent action problems that are necessarily solved in practice for large action spaces by some form of decomposition. This could be an interesting direction for future exploration of the ideas in this work, where there might be additional technical novelty and more space for empirical contributions and observations.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
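As an aside on the aggregation structure this review and the paper's abstract refer to (each advisor communicates action values to an aggregator that controls the system), here is a minimal illustrative sketch. Everything in it is a hypothetical stand-in: the advisor value tables are random, and a plain weighted sum is only one possible aggregator.

```python
# Illustrative sketch (not the paper's code): n advisors each report local
# action values Q_j(s, a); the aggregator combines them with a weighted sum
# and acts greedily on the aggregate. The Q tables here are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
N_ADVISORS, N_STATES, N_ACTIONS = 4, 10, 5

# q[j, s, a]: advisor j's (learned, here random) action values.
q = rng.normal(size=(N_ADVISORS, N_STATES, N_ACTIONS))

def aggregate_action(state, weights=None):
    """Greedy action on the weighted sum of advisor action values."""
    w = np.ones(N_ADVISORS) if weights is None else np.asarray(weights)
    q_agg = np.tensordot(w, q[:, state, :], axes=1)  # sum_j w_j * Q_j(s, .)
    return int(np.argmax(q_agg))

print(aggregate_action(state=0))
```

With such a setup, the planning question the review discusses is how each local Q_j should be trained (egocentric, agnostic, or empathic), not how the sum itself is taken.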
"{\"title\": \"The paper presents a somewhat unifying wide-angle view of multi-learner RL, but the paper is unclear and unfocused\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The paper presents Multi-Advisor RL (MAd-RL), a formalized view of many forms of performing RL by training multiple learners, then aggregating their results into a single decision-making agent. Previous work and citations are plentiful and complete, and the field of study is a promising approach to RL. Through MAd-RL, the authors analyze the effects of egocentric, agnostic, and empathic planning at the sub-learner level on the resulting applied aggregated policy. After this theoretical discussion, the different types of sub-learners are used on a Pac-Man problem.\\n\\nI believe an interesting paper lies within this, and were this a journal, would recommend edits and resubmission. However, in its current state, the paper is too disorganized and unclear to merit publication. It took quite a bit of time for me to understand what the authors wanted me to focus on - the paper needs a clearer statement early summarizing its intended contributions. In addition, more care to language usage is needed - for example, \\\"an attractor\\\" refers to an MDP in Figure 3, a state in Theorem 2, and a set in the Theorem 2 discussion. Additionally, the theoretical portion focuses on the effects of the three different sub-learner types, but the experiments are \\\"intend[ed] to show that the value function is easier to learn with the MAd-RL architecture,\\\" which is an entirely different goal.\\n\\nI recommend the authors decide what to focus on, rethink how paper space is allocated, and take care to more clearly drive home their intended point.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response\", \"comment\": \"> We interpret the word \\\"deep\\\" as \\\"strong\\\"\\n\\nIndeed, this was my intention... I was not expecting deep learning contributions.\\n\\n> in the way Ensemble Learning makes a strong learner out of weak learners\\n\\nThis is definitely an interesting direction for RL and overall the paper certainly made me think. I would really like to see this work published eventually, but I think it is a bit premature for ICLR this year and needs to choose more compelling examples (my thinking may simply have been misled by the current example choices) and more variety in experimental evaluation to drive home the generality of the proposed framework.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The reviewers agree this is an interesting paper with interesting ideas, but is not ready for publication in its current shape. In particular, there is a need for strong empirical results.\"}",
"{\"title\": \"Very interesting theoretical analysis. Needs a sharper focus.\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"This paper presents MAd-RL, a method for decomposition of a single-agent RL problem into a simple sub-problems, and aggregating them back together. Specifically, the authors propose a novel local planner - emphatic, and analyze the newly proposed local planner along of two existing ones - egocentric and agnostic. The MAd-RL, and theoretical analysis, is evaluated on the Pac-Boy task, and compared to DQN and Q-learning with function approximation.\", \"pros\": \"1. The paper is well written, and well-motivated.\\n2. The authors did an extraordinary job in building the intuition for the theoretical work, and giving appropriate examples where needed.\\n3. The theoretical analysis of the paper is extremely interesting. The observation that a linearly weighted reward, implies linearly weighted Q function, analysis of different policies, and local minima that result is the strongest and the most interesting points of this paper.\", \"cons\": \"1. The paper is too long. 14 pages total - 4 extra pages (in appendix) over the 8 page limit, and 1 extra page of references. That is 50% overrun in the context, and 100% overrun in the references. The most interesting parts and the most of the contributions are in the Appendix, which makes it hard to assess the contributions of the paper. There are two options: \\n 1.1 If the paper is to be considered as a whole, the excessive overrun gives this paper unfair advantage over other ICLR papers. The flavor and scope and quality of the problems that can be tackled with 50% more space is substantially different from what can be addressed within the set limit. If the extra space is necessary, perhaps this paper is better suited for another publication? \\n 1.2 If the paper is assessed only based on the main part without Appendix, then the only novelty is emphatic planner, and the theoretical claims with no proofs. The results are interesting, but are lacking implementation details. Overall, a substandard paper.\\n2. Experiments are disjoint from the method\\u2019s section. For example:\\n 2.1 Section 5.1 is completely unrelated with the material presented in Section 4.\\n 2.2 The noise evaluation in Section 5.3 is nice, but not related with the Section 4. This is problematic because, it is not clear if the focus of the paper is on evaluating MAd-RL and performance on the Ms.PacMan task, or experimentally demonstrating claims in Section 4.\", \"recommendations\": \"1. Shorten the paper to be within (or close to the recommended length) including Appendix.\\n2. Focus paper on the analysis of the advisors, and Section 5. on demonstrating the claims.\\n3. Be more explicit about the contributions.\\n4. How does the negative reward influence the behavior the agent? The agent receives negative reward when near ghosts.\\n5. Move the short (or all) proofs from Appendix into the main text.\\n6. Move implementation details of the experiments (in particular the short ones) into the main text.\\n7. Use the standard terminology (greedy and random policies vs. egoistic and agnostic) where possible. The new terms for well-established make the paper needlessly more complex. \\n8. Focus the literature review on the most relevant work, and contrast the proposed work with existing peer reviewed methods.\\n9. Revise the literature to emphasize more recent peer reviewed references. 
Only three references are recent (less than 5 years), peer reviewed references, while there are 12 historic references. Try to reduce dependencies on non-peer reviewed references (~10 of them).\\n10. Make a pass through the paper, and decouple it from the van Seijen et al., 2017a\\n11. Minor: Some claims need references:\\n 11.1 Page 5: \\u201cegocentric sub-optimality does not come from the actions that are equally good, nor from the determinism of the policy, since adding randomness\\u2026\\u201d - Wouldn\\u2019t adding epsilon-greediness get the agent unstuck?\\n 11.2 Page 1. \\u201cIt is shown on the navigation task \\u2026.\\u201d - This seems to be shown later in the results, but in the intro it is not clear if some other work, or this one shows it. \\n12. Minor:\\n 12.1 Mix genders when talking about people. Don\\u2019t assume all people that make \\u201ccomplex and important problems\\u201d, or who are \\u201cconsulted for advice\\u201d, are male.\\n 12.2 Typo: Page 5: a_0 sine die\\n 12.3 Page 7 - omit results that are not shown\\n 12.4 Make Figures larger - it is difficult, if not impossible to see\\n 12.5 What is the difference between Pac-Boy and Ms. Pacman task? And why not use Ms. Packman?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
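The reviewer's third "pro" highlights the observation that a linearly weighted reward implies a linearly weighted Q-function. For a fixed policy this is a one-line consequence of linearity of expectation; the derivation below is our own restatement of that standard argument, not a quote from the paper.

```latex
% If the reward decomposes as r(s,a) = \sum_j w_j\, r_j(s,a), then for any
% fixed policy \pi,
\begin{aligned}
Q^{\pi}(s,a)
  &= \mathbb{E}_{\pi}\!\Big[\textstyle\sum_{t\ge 0} \gamma^{t}\, r(s_t,a_t) \,\Big|\, s_0 = s,\ a_0 = a\Big] \\
  &= \sum_j w_j\, \mathbb{E}_{\pi}\!\Big[\textstyle\sum_{t\ge 0} \gamma^{t}\, r_j(s_t,a_t) \,\Big|\, s_0 = s,\ a_0 = a\Big]
   \;=\; \sum_j w_j\, Q_j^{\pi}(s,a).
\end{aligned}
```

Note that this identity concerns the evaluation of one common policy; it does not hold for each advisor's locally optimal value function, since the max operator is not linear, which is precisely why the choice of local planner matters.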
]
} |
Hk0wHx-RW | Learning Sparse Latent Representations with the Deep Copula Information Bottleneck | [
"Aleksander Wieczorek*",
"Mario Wieser*",
"Damian Murezzan",
"Volker Roth"
] | Deep latent variable models are powerful tools for representation learning. In this paper, we adopt the deep information bottleneck model, identify its shortcomings and propose a model that circumvents them. To this end, we apply a copula transformation which, by restoring the invariance properties of the information bottleneck method, leads to disentanglement of the features in the latent space. Building on that, we show how this transformation translates to sparsity of the latent space in the new model. We evaluate our method on artificial and real data. | [
"Information Bottleneck",
"Deep Information Bottleneck",
"Deep Variational Information Bottleneck",
"Variational Autoencoder",
"Sparsity",
"Disentanglement",
"Interpretability",
"Copula",
"Mutual Information"
] | Accept (Poster) | https://openreview.net/pdf?id=Hk0wHx-RW | https://openreview.net/forum?id=Hk0wHx-RW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"S1ZdyY1SG",
"ByJ3JKkBG",
"ByQLos_xM",
"H13MWgq4M",
"rJYSUovgG",
"rymg416Sf",
"ByR8Gr5gf"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1516371817240,
1516371878589,
1511729995347,
1516007699903,
1511663169142,
1517249514846,
1511834198184
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper583/Authors"
],
[
"ICLR.cc/2018/Conference/Paper583/Authors"
],
[
"ICLR.cc/2018/Conference/Paper583/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper583/AnonReviewer5"
],
[
"ICLR.cc/2018/Conference/Paper583/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper583/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Additional review response part 1\", \"comment\": \"We would like to thank the reviewer for the additional review. We respond to the questions and issues raised in the review below.\\n\\n\\n\\n\\nWhile Section 3.3 clearly defines the explicit form of the algorithm (where data and labels are essentially pre-processed via a copula transform), details regarding the \\u201cimplicit form\\u201d are very scarce. From Section 3.4, it seems as though the authors are optimizing the form of the gaussian information bottleneck I(x,t), in the hopes of recovering an encoder $f_\\\\beta(x)$ which gaussianizes the input (thus emulating the explicit transform) ? Could the authors clarify whether this interpretation is correct, or alternatively provide additional clarifying details?\\n\\nThis seems to be a misunderstanding. The $f_\\\\beta$ transformation stands for an abstract, general transformation of the input data. In our model, it is implemented by the copula transformation (explicit or implicit) and the encoder network. $f_\\\\beta$ thus does not emulate the explicit transformation, and is not confined to representing the (implicit or explicit) copula transformation. The copula transformation, not necessarily implemented as a neural network, is a part of $f_\\\\beta$.\\nThe purpose of introducing $f_\\\\beta$ is to explain the difference of the model with and without the extra copula transformation and why applying the transformation translates to sparsity not observed in the \\u201cregular\\u201d sparse Gaussian information bottleneck.\\nWe elaborate on the difference between the implicit and explicit copula in the answer to the last question.\", \"there_are_also_many_missing_details_in_the_experimental_section\": \"how were the number of \\u201cactive\\u201d components selected ?\\n\\nThe only parameter of our model is $\\\\lambda$. As described in Section 3.4, by continuously increasing $\\\\lambda$, one decreases sparsity defined by the number of active neurons. Thus, one can adjust the number of active components by continuously varying $\\\\lambda$ (curves in Figures 2, 4, 6 with increasing numbers of active components correspond to increasing $\\\\lambda$).\\nThe number of active components is chosen differently in different experiments. In Experiments 1, 6, 7 $\\\\lambda$, and thus the number of active components, is varied over a large interval. In Experiment 3, $\\\\lambda$ is also varied, and subsequently chosen so that the dimensionality of latent spaces in the two compared models is the same.\\n\\n\\n\\n\\nWhich versions of the algorithm (explicit/implicit) were used for which experiments ? I believe explicit was used for Section 4.1, and implicit for 4.2 but again this needs to be spelled out more clearly\\n\\nAs we mentioned in the rebuttal, throughout the paper as well as for the experiments, the explicit copula transformation defined in Eq. (6) is used. The explicit transformation is also the default choice of the form of the copula transformation.\\n\\n\\n\\n\\nI would also like to see a discussion (and perhaps experimental comparison) to standard preprocessing techniques, such as PCA-whitening.\\n\\nPCA whitening, in contrast to the copula transformation, does not disentangle marginal distributions from the dependence structure captured by the copula. It also does not restore the invariance properties of the model we identified as motivation. 
It does not lead to a boost in information curves such as in Figure 2; we can add the appropriate experiment to our manuscript.\\n\\n\\n\\n\\nI do not think their [experiments\\u2019] scope (single synthetic, plus a single UCI dataset) is sufficient. While the gap in performance is significant on the synthetic task, this gap appears to shrink significantly when moving to the UCI dataset. How does this method perform for more realistic data, even e.g. MNIST ? I think it is crucial to highlight that the deficiencies of DIB matter in practice, and are not simply a theoretical consideration.\\n[\\u2026]\\nthe representation analyzed in Figure 7 is promising, but again the authors could have targeted other common datasets for disentangling, e.g. the simple sprites dataset used in the beta-VAE paper.\\n\\nWe would like to stress that imposing sparsity on the latent representation is an important aspect of our model. It is in general difficult to quantify latent representations. Our model yields significantly sparser representations even when the information curves are closer.\\nOur model shows its full strength when a multiview analysis is involved, especially with data where multiple variables have different and rescaled distributions. Datasets constructed such that marginals (or simply labels, such as in the MNIST dataset) are uniform distributed do not pose enough challenge, since the output space is too easy to reconstruct even without the copula transformation.\\nAs for dataset size, we would like to point out that finding meaningful sparse representations is more challenging for smaller datasets with higher dimensionality, therefore we think that the datasets we used do show the most relevant properties of the copula DIB.\"}",
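To make the explicit copula transformation referenced throughout this response concrete, here is a minimal sketch of the standard normal-scores construction it is based on: each dimension is pushed through its empirical CDF and then the standard Gaussian quantile function. The n/(n+1) rescaling is a common convention assumed here to keep the quantile finite; the paper's Eq. (6) may differ in such details.

```python
# Minimal sketch of an explicit copula (normal-scores) transform:
# x_tilde_j = Phi^{-1}( F_hat_j(x_j) ), computed per dimension.
import numpy as np
from scipy.stats import norm, rankdata

def copula_transform(X):
    """Map each column of X (n x d) to approximately standard-normal marginals."""
    n = X.shape[0]
    U = rankdata(X, axis=0) / (n + 1.0)  # empirical CDF values, strictly in (0, 1)
    return norm.ppf(U)                   # standard Gaussian quantile function

rng = np.random.default_rng(0)
X = np.column_stack([rng.exponential(size=500), rng.beta(0.5, 0.5, size=500)])
X_tilde = copula_transform(X)
print(X_tilde.mean(axis=0), X_tilde.std(axis=0))  # roughly 0 and 1 per column
```

Because ranks are unchanged by strictly increasing maps of each marginal, the output is invariant to exactly the monotone transformations at issue in this exchange, which is also what distinguishes the transform from PCA whitening.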
"{\"title\": \"Additional review response part 2\", \"comment\": \"I would have also liked to see a more direct and systemic validation of the claims made in the paper. For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x.\\n\\nWe verified this for beta transformation in Experiment 1. We observe that the impact of our method is most pronounced when different variables are transformed in possibly different ways (i.e. when they are subject to diverse transformations with various scales).\\n\\n\\n\\n\\nA direct comparison of the explicit and implicit forms of the algorithms would also also make for a stronger paper in my opinion.\\n\\nWe mention the implicit copula transformation learned by neural networks in Section 3.3 for completeness as an alternative to the default explicit approach, but we would like to point out that the explicit approach is a preferred choice in practice.\\nIn the same section (in the revised paper), we elaborate on the few situations where the implicit copula might be advantageous, such as when there is a necessity of implicit tie breaking between data points. We also explain why the explicit copula is usually more advantageous. One circumvents the problem of devising an architecture capable of learning the marginal cdf, thus simplifying the neural network. Perhaps more importantly, the implicit approach does not scale well with dimensionality of the data, since the networks used for approximating the marginal cdf have to be trained independently for every dimension.\"}",
"{\"title\": \"An extension to DVIB\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"[====================================REVISION ======================================================]\\nOk so the paper underwent major remodel, which significantly improved the clarity. I do agree now on Figure 5, which tips the scale for me to a weak accept. \\n[====================================END OF REVISION ================================================]\\n\\nThis paper explores the problems of existing Deep variational bottle neck approaches for compact representation learning. Namely, the authors adjust deep variational bottle neck to conform to invariance properties (by making latent variable space to depend on copula only) - they name this model a copula extension to dvib. They then go on to explore the sparsity of the latent space\", \"my_main_issues_with_this_paper_are_experiments\": \"The proposed approach is tested only on 2 datasets (one synthetic, one real but tiny - 2K instances) and some of the plots (like Figure 5) are not convincing to me. On top of that, it is not clear how two methods compare computationally and how introduction of the copula affects the convergence (if it does)\\n\\nMinor comments\", \"page_1\": \"forcing an compact -> forcing a compact\\n\\u201cand and\\u201d =>and\", \"section_2\": \"mention that I is mutual information, it is not obvious for everyone\", \"figure_3\": \"circles/triangles are too small, hard to see\", \"figure_5\": \"not really convincing. B does not appear much more structured than a, to me it looks like a simple transformation of a.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A promising improvement to DVIB, but paper suffers from lack of clarity and limited experimentation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"This paper identifies and proposes a fix for a shortcoming of the Deep Information Bottleneck approach, namely that the induced representation is not invariant to monotonic transform of the marginal distributions (as opposed to the mutual information on which it is based). The authors address this shortcoming by applying the DIB to a transformation of the data, obtained by a copula transform. This explicit approach is shown on synthetic experiments to preserve more information about the target, yield better reconstruction and converge faster than the baseline. The authors further develop a sparse extension to this Deep Copula Information Bottleneck (DCIB), which yields improved representations (in terms of disentangling and sparsity) on a UCI dataset.\\n\\n(significance) This is a promising idea. This paper builds on the information theoretic perspective of representation learning, and makes progress towards characterizing what makes for a good representation. Invariance to transforms of the marginal distributions is clearly a useful property, and the proposed method seems effective in this regard.\\nUnfortunately, I do not believe the paper is ready for publication as it stands, as it suffers from lack of clarity and the experimentation is limited in scope.\\n\\n(clarity) While Section 3.3 clearly defines the explicit form of the algorithm (where data and labels are essentially pre-processed via a copula transform), details regarding the \\u201cimplicit form\\u201d are very scarce. From Section 3.4, it seems as though the authors are optimizing the form of the gaussian information bottleneck I(x,t), in the hopes of recovering an encoder $f_\\\\beta(x)$ which gaussianizes the input (thus emulating the explicit transform) ? Could the authors clarify whether this interpretation is correct, or alternatively provide additional clarifying details ? There are also many missing details in the experimental section: how were the number of \\u201cactive\\u201d components selected ? Which versions of the algorithm (explicit/implicit) were used for which experiments ? I believe explicit was used for Section 4.1, and implicit for 4.2 but again this needs to be spelled out more clearly. I would also like to see a discussion (and perhaps experimental comparison) to standard preprocessing techniques, such as PCA-whitening.\\n\\n(quality) The experiments are interesting and seem well executed. Unfortunately, I do not think their scope (single synthetic, plus a single UCI dataset) is sufficient. While the gap in performance is significant on the synthetic task, this gap appears to shrink significantly when moving to the UCI dataset. How does this method perform for more realistic data, even e.g. MNIST ? I think it is crucial to highlight that the deficiencies of DIB matter in practice, and are not simply a theoretical consideration. Similarly, the representation analyzed in Figure 7 is promising, but again the authors could have targeted other common datasets for disentangling, e.g. the simple sprites dataset used in the beta-VAE paper. I would have also liked to see a more direct and systemic validation of the claims made in the paper. For example, the shortcomings of DIB identified in Section 3.1, 3.2 could have been verified more directly by plotting I(y,t) for various monotonic transformations of x. 
A direct comparison of the explicit and implicit forms of the algorithms would also also make for a stronger paper in my opinion.\", \"pros\": [\"Theoretically well motivated\", \"Promising results on synthetic task\", \"Potential for impact\"], \"cons\": [\"Paper suffers from lack of clarity (method and experimental section)\", \"Lack of ablative / introspective experiments\", \"Weak empirical results (small or toy datasets only).\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting work . Both the clarity and the experimental results have been improved in the revised version\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper presents a sparse latent representation learning algorithm based on an information theoretic objective formulated through meta-Gaussian information bottleneck and solved via variational auto-encoder stochastic optimization. The authors suggest Gaussianify the data using copula transformation and further adopt a diagonal determinant approximation with justification of minimizing an upper bound of mutual information. Experiments include both artificial data and real data.\\n\\nThe paper is unclear at some places and writing gets confusing. For example, it is unclear whether and when explicit or implicit transforms are used for x and y in the experiments, and the discussion at the end of Section 3.3 also sounds confusing. It would be more helpful if the author can make those points more clear and offer some guidance about the choices between explicit and implicit transform in practice. Moreover, what is the form of f_beta and how beta is optimized? In the first equation on page 5, is tilde y involved? How to choose lambda?\\n\\nIf MI is invariant to monotone transformations and information curves are determined by MIs, why \\u201ctransformations basically makes information curve arbitrary\\u201d? Can you elaborate? \\n\\nAlthough the experimental results demonstrate that the proposed approach with copula transformation yields higher information curves, more compact representation and better reconstruction quality, it would be more significant if the author can show whether these would necessarily lead to any improvements on other goals such as classification accuracy or robustness under adversarial attacks.\", \"minor_comments\": [\"What is the meaning of the dashed lines and the solid lines respectively in Figure 1?\", \"Section 3.3 at the bottom of page 4: what is tilde t_j? and x in the second term? Is there a typo?\", \"typo, find the \\u201cmost orthogonal\\u201d representation if the inputs -> of the inputs\", \"Overall, the main idea of this paper is interesting and well motivated and but the technical contribution seems incremental. The paper suffers from lack of clarity at several places and the experimental results are convincing but not strong enough.\", \"***************\"], \"updates\": \"***************\\nThe authors have clarified some questions that I had and further demonstrated the benefits of copula transform with new experiments in the revised paper. The new results are quite informative and addressed some of the concerns raised by me and other reviewers. I have updated my score to 6 accordingly.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"Observing that in contrast to classical information bottleneck, the deep variational information bottleneck (DVIB) model is not invariant to monotonic transformations of input and output marginals, the authors show how to incorporate this invariance along with sparsity in DVIB using the copula transform. The revised version of the paper addressed some of the reviewer concerns about clarity as well as the strength of the experimental section, but the authors are encouraged to improve these aspects of the paper further.\", \"decision\": \"Accept (Poster)\"}",
"{\"title\": \"This paper improved on an existing latent variable model by combining ideas from different but somewhat related papers. Experimental results indeed show some improvements.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper proposed a copula-based modification to an existing deep variational information bottleneck model, such that the marginals of the variables of interest (x, y) are decoupled from the DVIB latent variable model, allowing the latent space to be more compact when compared to the non-modified version. The experiments verified the relative compactness of the latent space, and also qualitatively shows that the learned latent features are more 'disentangled'. However, I wonder how sensitive are the learned latent features to the hyper-parameters and optimizations?\", \"quality\": \"Ok. The claims appear to be sufficiently verified in the experiments. However, it would have been great to have an experiment that actually makes use of the learned features to make predictions. I struggle a little to see the relevance of the proposed method without a good motivating example.\", \"clarity\": \"Below average. Section 3 is a little hard to understand. Is q(t|x) in Fig 1 a typo? How about t_j in equation (5)? There is a reference that appeared twice in the bibliography (1st and 2nd).\", \"originality_and_significance\": \"Average. The paper (if I understood it correctly) appears to be mainly about borrowing the key ideas from Rey et. al. 2014 and applying it to the existing DVIB model.\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}"
]
} |
S1tWRJ-R- | Joint autoencoders: a flexible meta-learning framework | [
"Baruch Epstein",
"Ron Meir",
"Tomer Michaeli"
] | The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples. Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest. Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a data-driven fashion. We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task. Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations. The method deals with meta-learning (such as domain adaptation, transfer and multi-task learning) in a unified fashion, and can easily deal with data arising from different types of sources. Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network. | [
"transfer learning",
"domain adaptation",
"unsupervised learning",
"autoencoders",
"multi-task learning"
] | Reject | https://openreview.net/pdf?id=S1tWRJ-R- | https://openreview.net/forum?id=S1tWRJ-R- | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"rkseIXolf",
"ByQN_opGz",
"HJzFmcNZM",
"By6WviTfz",
"ByAzrJ6Hz",
"H1cgp9qxf",
"ByOi8oaGf",
"Byv5BsTzM"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"official_comment",
"decision",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1511892467599,
1514154027038,
1512510330524,
1514153732885,
1517249814381,
1511857393990,
1514153632176,
1514153358967
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper534/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper534/Authors"
],
[
"ICLR.cc/2018/Conference/Paper534/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper534/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper534/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper534/Authors"
],
[
"ICLR.cc/2018/Conference/Paper534/Authors"
]
],
"structured_content_str": [
"{\"title\": \"The paper proposes a model for allowing various deep neural network architectures to share weights (parameters) across different datasets. The authors then apply the framework to transfer learning.\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper addresses the question of identifying 'shared features' in neural networks trained on different datasets. Concretely, suppose you have two datasets X1, X2 and you would like to train auto-encoders (with potential augmentation with labeled examples) for the two datasets. One could work on the two separately; here, the authors propose sharing some of the weights to try and exploit/identify common features between the two datasets. The authors formalize by essentially looking to optimize an auto-encoder that take inputs of the form (x1, x2) and employing architectures that allow few nodes to interact with both x1,x2. The authors then try to minimize an appropriate loss function by standard methods.\\n\\nThe authors then apply the above methodology to transfer learning between various datasets. The empirical results here are interesting but not particularly striking; the most salient feature perhaps is that the architectures and training algorithms are perhaps a bit simpler but the overall improvements over existing methods are not too exciting.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"* It reads as an extension of \\\"frustratingly easy domain adaptation\\\" to DNN (please cite this work).\\n\\nThere is a sizable list of references we intend to include in the next version, including the work you refer to, as well as (among others) [4], [5], [6], [7]. We attempted to comply by the \\\"strong recommendation\\\" to keep the references to a single page, and had to retain only those works we were explicitly influenced by, or recent state-of-the-art deep learning papers focusing on domain adaptation and explicit extraction of separate shared and task-related features.\\n\\n* The authors brought up two strategies on learning the shared and private weights at the end of section 3.2. However, no follow-up comparison between the two are provided. It seems like most of the results are coming from the end-to-end learning.\\n\\nThe first paragraph in Section 4.2 provides precisely the sought-for comparison. We find that the end-to-end learning approach is both simpler and better, and thus use it for the rest of the experiments. We will add a reference to that conclusion in Section 2.3.\", \"experiment_results\": \"* section 4.1: Figure 2 is flawed. The colors do not correspond to the sub-tasks. For example, there are digits 1, 4 in color magenta, which is supposed to be the shared branch of digits of 5~9. Vice versa.\\n\\nIn Figure 2a, all branches are applied to all digits, with the colors representing the data that the branch was exposed to. The idea is that a branch should be more \\u2018inclined\\u2019 to treat digits it never saw as noise. This phenomenon can be observed clearly in the red digits: 0-4 are more dispersed (consider the 0's, 1's and 3' for the most obvious examples) than the rather cluttered 5-9. The fact that the shared branches map 0-4 and 5-9 much more closely than the private ones is quantified in the paper. Note that this is distinct from the observation that the common branches containing\\nthe shared layer (green and magenta) are much more mixed between themselves than the private branches (red and black). See also reply to AnonReviewer3. However, we agree that the figure is confusing, and it will be reworked. In particular, we intend to split it into four separate ones, for each branch, as well as add more visual evidence for our beliefs.\\n\\n* From reducing the capacity of JAE to be the same as the baseline, most of the improvement is gone.\\n\\nThe reduced-capacity JAEs still retain over two thirds (22-24% vs 33-37%) of the observed advantage, therefore most of the advantage remains. \\n\\n* It is not clear how much of the improvement will remain if the baseline model gets to see all the samples instead of just those from each sub-task\\n\\nThe baseline models, as a pair, see all of the samples the JAE model sees. \\n\\n* section 4.2.1: The authors demonstrate the influence of shared layer depth in table 2. While it does seem to matter for tasks of dissimilar inputs, have the authors compare having a completely shared branch or sharing more than just a single layer?\\n\\nWe did perform various comparisons between different sharing strategies, but so far could not discern an obviously superior option. 
However, it remains an intriguing question that we will be paying attention to in future research.\\n\\n* The authors suggested in section 4.1 CIFAR experiment that the proposed method provides more performance boost when the two tasks are more similar, which seems to be contradicting to the results shown in Figure 3, where its performance is worse when transferring between USPS and MNIST, which are more similar tasks vs between SVHN and MNIST. Do the authors have any insight?\\n\\nRegarding the surprisingly good performance on the SVHN->MNIST task (vs. the CIFAR experiments), the explanation is the setting. Following established protocol (e.g., [3]), we perform the MNIST<->USPS tasks with small subsets of the datasets, whereas SVHN->MNIST is done using the entire dataset. \\n\\nSee also our reply concerning labeled set size flexibility and transfer learning with multiple tasks - challenges we are able to handle far more naturally than competing approaches.\\n\\n[4] Weston, Jason, et al. \\\"Deep learning via semi-supervised embedding.\\\" Neural Networks: Tricks of the Trade. Springer Berlin Heidelberg, 2012. 639-655.\\n[5] S. Parameswaran and K. Q. Weinberger, \\u201cLarge margin multi-task metric learning,\\u201d NIPS 23, pp. 1867\\u20131875, 2010.\\n[6] Dumoulin at al., Adversarially Learned Inference, https://arxiv.org/abs/1606.00704\\n[7] Devroye, L., Gyo\\u00f6rfi, L., and Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition. Springer.\"}",
"{\"title\": \"An appealing architecture for domain adaptation, multitask, and transfer learning but without strong enough results\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper focuses on learning common features from multiple domains data in a unsupervised and supervised learning scheme. Setting this as a general multi task learning, the idea consists in jointly learning autoecnoders, one for each domain, for the multiples domain data in such a way that parts of the parameters of the domain autoencoder are shared. Each domain/task autoencoder then consists in a shared part and a private part. The authors propose a variant of the model in the case of supervised learning and end up with a general architecture for multi-task, semi-supervised and transfer learning.\\n\\nThe presentation of the paper is good and the paper is easy to follow and explores the rather intuitive and simple idea of sharing parameters between related tasks.\\n\\nExperimental show some interesting results. First unsupervised experiments on Mnist data show improved MSe of joint autoecnoders but are these differences really significant (e.g. from 0.56 to 5.52) ? Moreover i am not sure to understand the meaning of separation criterion computed on t-sne of hidden representations. Results of Table 1 show improved reconstruction performance (MSE?) of joint auto encoders over independent ones for unrelated pairs such as airplane and horses. I a not sure ti understand why this improvement occurs even with very different classes. The investigation on the depth where sharing should occur is quite interesting and related to the usual idea of higher transferable property low level features. Results on transfer are the most interesting ones actually but do not seem to improve so much over baselines.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"* The empirical results here are interesting but not particularly striking; the most salient feature perhaps is that the architectures and training algorithms are perhaps a bit simpler but the overall improvements over existing methods are not too exciting.\\n\\nWe believe the architectures and training we use are a lot simpler than most comparable methods. For instance, our model for SVHN->MNIST is an order of magnitude smaller than [1], and we do not require a GAN. \\n\\nSee also our reply concerning labeled set size flexibility and transfer learning with multiple tasks - challenges we are able to handle far more naturally than competing approaches.\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"Thank you for submitting you paper to ICLR. ICLR. The consensus from the reviewers is that this is not quite ready for publication. In particular, the experimental results are promising, but further work is required to fully demonstrate the efficacy of the approach.\"}",
"{\"title\": \"Review\", \"rating\": \"4: Ok but not good enough - rejection\", \"review\": \"The work proposed a generic framework for end-to-end transfer learning / domain adaptation with deep neural networks. The idea is to learn a joint autoencoders, containing private branch with task/domain-specific weights, as well as common branch consisting of shared weights used across tasks/domains, as well as task/domain-specific weights. Supervised losses are added after the encoders to utilize labeled samples from different tasks. Experiments on the MNIST and CIFAR datasets showed improvements over baseline models. Its performance is comparable to / worse than several existing deep domain adaptation works on the MNIST, USPS and SVHN digit datasets.\\n\\nThe structure of the paper is good, and easy to read. The idea is fairly straight-forward. It reads as an extension of \\\"frustratingly easy domain adaptation\\\" to DNN (please cite this work). Different from most existing work on DNN for multi-task/transfer learning, which focuses on weight sharing in bottom layers, the work emphasizes the importance of weight sharing in deeper layers. The overall novelty of the work is limited though. \\n\\nThe authors brought up two strategies on learning the shared and private weights at the end of section 3.2. However, no follow-up comparison between the two are provided. It seems like most of the results are coming from the end-to-end learning.\", \"experimental_results\": \"section 4.1: Figure 2 is flawed. The colors do not correspond to the sub-tasks. For example, there are digits 1, 4 in color magenta, which is supposed to be the shared branch of digits of 5~9. Vice versa. \\nFrom reducing the capacity of JAE to be the same as the baseline, most of the improvement is gone. It is not clear how much of the improvement will remain if the baseline model gets to see all the samples instead of just those from each sub-task. \\n\\nsection 4.2.1: The authors demonstrate the influence of shared layer depth in table 2. While it does seem to matter for tasks of dissimilar inputs, have the authors compare having a completely shared branch or sharing more than just a single layer?\\n\\nThe authors suggested in section 4.1 CIFAR experiment that the proposed method provides more performance boost when the two tasks are more similar, which seems to be contradicting to the results shown in Figure 3, where its performance is worse when transferring between USPS and MNIST, which are more similar tasks vs between SVHN and MNIST. Do the authors have any insight?\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"* First unsupervised experiments on Mnist data show improved MSe of joint autoecnoders but are these differences really significant (e.g. from 0.56 to 5.52) ?\\n\\nWe agree that MNIST does not show a lot of improvement, due to its simplicity. Note that our experiments with CIFAR-10 display a significant advantage for the JAE scheme. \\n\\n* Moreover i am not sure to understand the meaning of separation criterion computed on t-sne of hidden representations.\\n\\nWe expect the shared branches to map the inputs to relatively similar hidden states, as they both capture the joint features from both datasets. Following the same logic, the task-specific branches should map inputs to relatively distinctly \\u2013 they learn different mappings and should not be similar. The statistical measure of this difference is given by the Fisher separation criterion, which is indeed small for the shared branches and large for the private ones. \\n\\n* Results of Table 1 show improved reconstruction performance (MSE?) of joint auto encoders over independent ones for unrelated pairs such as airplane and horses. I a not sure ti understand why this improvement occurs even with very different classes.\\n\\nOur explanation for the experienced improvement, even with very different classes, is that the various classes of natural images as captured by the CIFAR-10 dataset share \\\"deep\\\" features necessary for successful reconstruction. We certainly agree that more similar classes should share more of these features, and our results support this intuition. \\n\\n* Results on transfer are the most interesting ones actually but do not seem to improve so much over baselines.\\n\\nWe agree that some of the improvements over existing methods are modest, though by no means all (e.g., SVHN->MNIST, Fig. 3.c). However, we would like to point out that the methods we compare ourselves to either use large, complicated architectures, require computationally expensive training, or both. We believe that the fact that we out-perform such state-of-the-art approaches with a simple concept while also employing much smaller models is compelling evidence in favor of the shared-subspace hypothesis. Moreover, the ability to perform domain adaptation without training a GAN should be of interest, as most successful state-of-the-art methods require training at least one GAN, a notoriously challenging task.\\n\\nSee also our reply concerning labeled set size flexibility and transfer learning with multiple tasks - challenges we are able to handle far more naturally than competing approaches.\"}",
"{\"title\": \"Thank You For The Thoughtful Reviews\", \"comment\": \"We thank the reviewers for the various points raised. We will reply to each review separately; however, we would like first to point out a contribution of our work that we believe bears stressing. Among the works with similar approach and comparable performance to ours, most seem to be unable to handle more than two tasks (e.g., transfer learning from two sources to a target ) without either a significant increase in complexity or some novel ideas. [1] would require a number of loss functions growing quadratically in the task number, and an even more demanding architecture than they already use. [2] would require a quadratically growing amount of discriminators, or else a novel idea to perform efficient domain adaptation for multiple tasks. It is even less clear how to extend [3] to such scenarios.\\nIn contrast, the approach we propose handles this task in stride, simply adding a branch to the joint autoencoder. The experiments in Sec. 4.2.3 support this claim. We believe that this property of joint autoencoders is not matched by any comparable approach, and consider this to be a key advantage of the proposed method.\\n\\nIn addition, we are able to deal with a more flexible range of labeled sample sizes than the aforementioned papers, some of which are not capable of making immediate use of labeled data.\\n\\n[1] Bousmalis, K. et al. (2016). Domain separation networks. Advances in Neural Information Processing Systems 29 (NIPS 2016)\\n[2] Liu, M.-Y. and Tuzel, O. (2016). Coupled generative adversarial networks. In Advances in Neural Information Processing Systems, pages 469\\u2013477.\\n[3] Tzeng, E., Hoffman, J., Saenko, K., and Darrell, T. (2017). Adversarial discriminative domain adaptation. CoRR abs/1702.05464.\"}"
]
} |
r1pW0WZAW | Analyzing and Exploiting NARX Recurrent Neural Networks for Long-Term Dependencies | [
"Robert DiPietro",
"Christian Rupprecht",
"Nassir Navab",
"Gregory D. Hager"
] | Recurrent neural networks (RNNs) have achieved state-of-the-art performance on many diverse tasks, from machine translation to surgical activity recognition, yet training RNNs to capture long-term dependencies remains difficult. To date, the vast majority of successful RNN architectures alleviate this problem using nearly-additive connections between states, as introduced by long short-term memory (LSTM). We take an orthogonal approach and introduce MIST RNNs, a NARX RNN architecture that allows direct connections from the very distant past. We show that MIST RNNs 1) exhibit superior vanishing-gradient properties in comparison to LSTM and previously-proposed NARX RNNs; 2) are far more efficient than previously-proposed NARX RNN architectures, requiring even fewer computations than LSTM; and 3) improve performance substantially over LSTM and Clockwork RNNs on tasks requiring very long-term dependencies. | [
"recurrent neural networks",
"long-term dependencies",
"long short-term memory",
"LSTM"
] | Invite to Workshop Track | https://openreview.net/pdf?id=r1pW0WZAW | https://openreview.net/forum?id=r1pW0WZAW | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"BJMxrXomf",
"ryoD4ypSf",
"Bka_1mi7M",
"H1OSO2dlz",
"SJkNAGiQM",
"SyaF_Qomf",
"BJMTxiOlG",
"rycLSbcgf"
],
"note_type": [
"official_comment",
"decision",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1515037929561,
1517249635344,
1515036532691,
1511733311657,
1515036199432,
1515038853191,
1511727290042,
1511818578007
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper739/Authors"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
],
[
"ICLR.cc/2018/Conference/Paper739/Authors"
],
[
"ICLR.cc/2018/Conference/Paper739/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper739/Authors"
],
[
"ICLR.cc/2018/Conference/Paper739/Authors"
],
[
"ICLR.cc/2018/Conference/Paper739/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper739/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We are pleased that you enjoyed our work. Thank you very much for your detailed review and insightful comments. We have done our best to address every question raised, and we have updated the paper to reflect every response here:\\n\\n>>>>> for the copy task (section 5.2): Is it really true that Clockwork RNNs fail because they make it \\\"difficult to learn long-term behavior that must be detected at high frequency\\\" [section 2]?\\n\\nFor large delays (D >= 100), this is precisely the reason that Clockwork RNNs fail, but we see no way of providing further empirical evidence of this. We instead describe in detail why Clockwork RNNs must fail:\\n\\n- Symbol 0 can be 'copied ahead' by all partitions, and so perhaps it is possible to learn to replicate this symbol later in time.\\n\\n- Symbol 1 can only be seen by the highest-frequency partition (period of T = 1) because 1 % T = 0 for T = 1, but not T = 2, 4, 8, 16, etc. Also, this partition cannot send information to lower-frequency partitions. Hence Clockwork RNNs cannot learn to replicate symbol 1 for the exact same reason that a simple RNN cannot: the shortest past to the loss has at least D matrix multiplies and nonlinearities.\\n\\n- Symbol 2 can similarly only be seen by the two highest-frequency partitions (T = 1, T = 2), so we have a shortest path with D / 2 nonlinearities and matrix multiplies (a negligible difference for medium-to-large delays).\\n\\n- Symbol 3 can only be seen by the single highest-frequency partition because again 3 % T = 0 only for T = 1, so the situation is identical to symbol 1.\\n\\n- And so on. Hence Clockwork RNNs must fail to learn to copy most of these symbols for medium-to-large delays.\\n\\nFor small delays (D = 50), Clockwork RNNs should solve the copy task, because the highest-frequency partition resembles a simple RNN. However, this partition has only 256 / 8 = 32 hidden units. We thus ran additional Clockwork RNN experiments with 1024 hidden units (and 10x as many parameters), with 128 units allocated to the high-frequency partition. We then see that Clockwork RNNs do solve the copy problem with a delay of 50 and continue to fail to solve the problem for higher delays, as expected.\\n\\n>>>>> In the sequential pMNIST classification, what about increasing the LSTM number of hidden units? If this brings the error rate further down, one could ask why exactly the LSTM captures long-term structure so differently with different number of units?\\n\\nWe ran additional experiments with 512 units for both LSTM and MIST RNNs. LSTM obtains an improved error rate of 7.6%, and MIST RNNs obtain an improved error rate of 4.5%. However, we verified that capacity does not help with long-term dependencies; please see the next question.\\n\\n>>>>> How relevant are the results in figure 2 (yes, the gradient properties are very different, but is this an issue for accuracy)?\\n\\nWe included Figure 2 to show that empirical observations match our expectations for gradient decay. To provide further empirical validation, we ran additional pMNIST experiments for the 512-unit LSTM and MIST RNNs:\\n\\n- Based on Figure 2, we used only the last 200 pixels (rather than all 784).\\n\\n- LSTM performance remained the same (within 1 std. 
dev., 7.4% error), showing that LSTM gained nothing from including the distant past.\\n\\n- MIST RNN performance degraded by 15 standard deviations (6.0% error), showing that MIST RNNs do benefit from the distant past.\\n\\n- Finally we note that MIST RNNs still outperform LSTM. This is expected since LSTM has trouble learning even from steps <= 200 from the loss (as shown in Fig. 2).\\n\\n>>>>> on the right-hand side of the inline formula in section 3.1, the symbol v is missing\\n\\nThank you. This arose from merging two previous examples. Fixed.\\n\\n>>>>> in formula 16, the primes seem to be misplaced, and the symbols t', t''', etc. should be defined\\n\\nFixed\\n\\n>>>>> the \\\\theta_l in the beginning of section 3.3 (formula 13) is completely superfluous.\\n\\nWe agree but include this to make the connection to practice immediately evident. We added a sentence to clarify this.\\n\\n>>>>> The position of the tables and figures is rather weird...\\n\\nFixed.\\n\\n>>>>> Relation to prior work: the authors are aware of most relevant work... There is one that seems close to what the authors do: J. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992 ...\\n\\nLearning a generative model over inputs to identify surprising inputs for processing is an interesting approach; we added this to the Background section.\\n\\n>>>>> Perhaps this trick could further improve the system of the authors, as well as the Clockwork RNNs, at least for certain tasks?\\n\\nWe would not be surprised at all if this method can improve results for some tasks, especially those with highly-correlated, low-dimensional inputs such as MNIST (or even pMNIST). However, addressing this question fully would be far from trivial, so we leave it as future work.\"}",
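The partition-visibility argument above can be checked with a few lines of arithmetic. The periods and delay below are illustrative choices, not the exact experimental configuration.

```python
import math

periods = [1, 2, 4, 8, 16, 32, 64, 128]  # Clockwork partition periods
D = 100                                   # copy-task delay

for t in range(8):  # time step at which an input symbol arrives
    ticking = [T for T in periods if t % T == 0]   # partitions that process it
    shortest = math.ceil(D / max(ticking))         # ticks of the slowest such partition
    print(f"t={t}: seen by periods {ticking}; shortest path ~{shortest} steps")

# Symbols at odd time steps are seen only by the T=1 partition, leaving a
# shortest path of ~D matrix multiplies and nonlinearities to the loss --
# the same regime in which simple RNN gradients vanish.
```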
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"I think the model itself is not very novel, as pointed by the reviewers and the analysis is not very insightful either. However, the results themselves are interesting and quite good (on the copy task and pMnist, but not so much the other datasets presented (timit etc) where it not clear that long term dependencies would lead to better results). Since the method itself is not very novel, the onus is upon the authors to make a strong case for the merits of the paper -- It would be worth exploring these architectures further to see if there are useful elements for real world tasks -- more so than is demonstrated in the paper -- for example showing it on tasks such as machine translation or language modelling tasks requiring long term propagation of information or even real speech recognition, not just basic TIMIT phone frame classification rate.\\n\\nAs a result, while I think the paper could make for an interesting contribution, in its present form, I have settled on recommending the paper for the workshop track.\\n\\n\\nAs a side note, paper is related to paper 874 in that an attention model is used to look at the past. The difference is in how the past is connected to the current model.\", \"decision\": \"Invite to Workshop Track\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your review. We also found it interesting that MIST RNNs can capture such long-term dependencies.\\n\\n>>>>> A similar kind of architecture has been already proposed: [1] Soltani et al. \\u201cHigher Order Recurrent Neural Networks\\u201d, arXiv 1605.00064\\n\\nBased on this comment, we have added a short discussion of [1] to the Background section.\\n\\nHowever, we would like to kindly note that [1] defines a \\\"higher order recurrent neural network (HORNN)\\\" precisely as a simple NARX RNN, which was introduced 20 years earlier in [2], and which was already discussed extensively in our paper.\\n\\nImportantly, every HORNN variant in [1] suffers from the same issue that is mentioned in our paper for simple NARX RNNs: the vanishing gradient problem is only mitigated mildly as n_d, the number of delays, increases; and simultaneously parameter and computation counts grow by this same factor n_d. We would like to emphasize that MIST RNNs are the first NARX RNNs that resolve both of these issues, by providing exponentially short connections to the past while maintaining even fewer parameters and computations than LSTM.\\n\\n[1] Rohollah Soltani and Hui Jiang. Higher order recurrent neural networks. arXiv preprint arXiv:1605.00064, 2016.\\n\\n[2] Tsungnan Lin, Bill G Horne, Peter Tino, and C Lee Giles. Learning long-term dependencies in NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329\\u20131338, 1996.\"}",
"{\"title\": \"The paper introduces a variant of the well-known (but as of today not very frequently used) NARX architecture for Recurrent Neural Networks. It is demonstrated that with the proposed method (MIST RNNs), good performance is achieved on several common RNN problems.\", \"rating\": \"7: Good paper, accept\", \"review\": \"The presented MIST architecture certainly has got its merits, but in my opinion is not very novel, given the fact that NARX RNNs have been described 20 years ago, and Clockwork RNNs (which, as the authors point out in section 2, have a similar structure) have also been in use for several years. Still, the presented results are good, with standard LSTMs being substantially outperformed in three out of five standard RNN/LSTM benchmark tasks. The analysis in section 3 is decent (see however the minor comments below), but does not offer revolutionary new insights - it's perhaps more like a corollary of previous work (Pascanu et al., 2013).\\n\\nRegarding the concrete results, I would have wished for a more detailed analysis of the more surprising results, in particular, for the copy task (section 5.2): Is it really true that Clockwork RNNs fail because they make it \\\"difficult to learn long-term behavior that must be detected at high frequency\\\" [section 2]? How relevant are the results in figure 2 (yes, the gradient properties are very different, but is this an issue for accuracy)? In the sequential pMNIST classification, what about increasing the LSTM number of hidden units? If this brings the error rate further down, one could ask why exactly the LSTM captures long-term structure so differently with different number of units?\\n\\nIn summary, for me this paper is solid, and although the architecture is not that new, it is worth bringing it again into the focus of attention.\", \"minor_comments\": [\"In several places, the formulas are rather strange and/or occasionally incorrect. In particular,\", \"on the right-hand sind of the inline formula in section 3.1, the symbol v is missing completely, which cannot be right;\", \"in formula 16, the primes seem to be misplaced, and the symbols t', t''', etc. should be defined;\", \"the \\\\theta_l in the beginning of section 3.3 (formula 13) is completely superfluous.\", \"The position of the tables and figures is rather weird, making the paper less readable than necessary. The authors should consider moving floating parts around (one could also move figure three to the bottom of a suitable page, for example).\", \"It is a matter of taste, but since all experimental results except the ones on the copy task are tabulated, one could think of adding a table with the results now contained in figure 3.\"], \"relation_to_prior_work\": \"the authors are aware of most relevant work.\", \"on_p2_they_write\": \"\\\"Many other approaches have also been proposed to capture long-term dependencies.\\\" There is one that seems close to what the authors do:\\n\\nJ. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992\\n\\nIt is related to clockwork RNNs, about which the authors write:\\n\\n\\\"A recent architecture that is similar in spirit to our work is that of Clockwork RNNs (Koutnik et al., 2014), which split weights and hidden units into partitions, each with a distinct period. When it\\u2019s not a partition\\u2019s time to tick, its hidden units are passed through unchanged, thus in some ways mimicking the behavior of NARX RNNs. 
However Clockwork RNNs differ in two key ways. First, Clockwork RNNs sever high-frequency-to-low-frequency paths, thus making it difficult to learn long-term behavior that must be detected at high frequency (for example, learning to depend on quick motions from the past for activity recognition). Second, Clockwork RNNs require hidden units to be partitioned a priori, which in practice is difficult to do in any meaningful way. NARX RNNs suffer from neither of these drawbacks.\\\"\\n\\nThe neural history compressor, however, adapts to the frequency of unexpected events, by ticking only when there is an unpredictable event, thus overcoming some of the issues above. Perhaps this trick could further improve the system of the authors, as well as the Clockwork RNNs, at least for certain tasks?\", \"general_recommendation\": \"Accept, provided the comments are taken into account.\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Added revision incorporating reviewer feedback\", \"comment\": [\"Changes:\", \"The last 3 paragraphs of Section 2 (Background) were expanded and edited based on feedback from all 3 reviewers.\", \"Section 3 (The Vanishing Gradient Problem in the Context of NARX RNNs) was edited for clarity and to fix typos spotted by AnonReviewer2.\", \"Section 5.1 (Permuted MNIST results) was heavily modified based on AnonReviewer2's feedback. In particular, results were added with additional hidden-unit counts, and results were added to show that LSTM performance does not depend at all on information from the distant past (whereas MIST RNN performance does).\", \"A paragraph was added to the end of Section 5.2 (Copy Problem results) based on AnonReviewer2's feedback. In particular we discuss additional Clockwork RNN results; the reasons that Clockwork RNNs must fail for large delays; and show that Clockwork RNNs do indeed behave like simple RNNs if enough hidden units are provided.\", \"Figures and Tables were moved around for clarity, based on AnonReviewer2's feedback.\", \"Small miscellaneous edits were made throughout to open space for the previous changes.\"]}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for your review. We kindly note that some of the comments in this review are incorrect, and as such we sincerely hope that you are willing to reconsider your evaluation of our work.\\n\\n>>>>> The experimental results are not convincing. This includes 1. the choices of tasks are limited -- very small in size, 2. the performance in pMNIST is worse than [1], under the same settings.\", \"point_2\": \"Please note that this is incorrect. In [1], the best reported error rate for pMNIST is 6.0% error, whereas we obtain 5.5 +- 0.2% error. Also, their results (Table 2) correspond to a hyperparameter sweep, with s = 11 achieving 6.0% error. We require no such sweeps: our delays were kept fixed for all 5 tasks in the paper (still outperforming every model proposed in [1]).\", \"point_1\": \"Please note that we evaluated these methods across\\n\\n- 2 synthetic tasks that have been widely used for testing long-term dependencies, as was highlighted in Section 5 with references (Hochreiter et al., 1997; Martens et al., 2011; Le et al., 2015; Arjovsky et al., 2016; Henaff et al., 2016; Danihelka et al., 2016)\\n\\n- 3 real tasks that were chosen because they a) likely require long-term dependencies and b) are of moderate size so that statistically-significant results can be obtained.\\n\\nWe followed the experimental design of [2], which also includes 3 real tasks of moderate size, preferring random hyperparameter sweeps and statistically-significant results over manual sweeps and statistically-questionable results. Also, please note that this design seems to be reasonable to the community, as [2] has been cited 400+ times since 2014.\", \"regarding_the_dataset_sizes\": \"TIMIT is standard, with splits identical to [2]. MobiAct contains approximately 3200 sequences of mobile sensor data from 67 users, very similar in size to the datasets in [2]. MISTIC-SL is smaller in size, but we chose this task because long-term dependencies are required and because state of the art is held by LSTM (which we ended up matching with MIST RNNs).\\n\\n[1] Zhang et al. Architectural complexity measures of recurrent neural networks. Advances in neural information processing systems (NIPS), 2016.\\n\\n[2] Greff et al. LSTM: A search space odyssey. IEEE Trans. on Neural Networks and Learning Systems, 2016.\\n\\n>>>>> Similar work (recurrent skip coefficient and the corresponding architecture in [1]) has been done, but has not been mentioned. \\n\\nBased on this comment, we have added a discussion of [1] to the Background section. However kindly note that\\n\\n- with regard to the architecture, [1] proposes precisely a simple NARX RNN ([19], discussed extensively in our paper) with non-zero weights for only two delays. This bears little resemblance to our work. Most importantly, MIST RNNs provide exponentially-short paths to the past while maintaining fewer parameters and computations than LSTM. In contrast, [1] does not provide exponentially-short paths, and uses two delays to avoid high parameter/computation counts. In case there is any doubt about this, we quote [1]: \\\"By using this specific construction, the recurrent skip coefficient increases from 1 (i.e., baseline) to k and the new model with extra connection has 2 hidden matrices (one from t to t + 1 and the other from t to t + k).\\\"\\n\\n- with regard to skip coefficients, [1] defines a *measure* of shortest paths called Recurrent Skip Coefficients. 
However in [1] the motivation for this definition is \\\"it is known that adding skip connections across multiple time steps may help improve the performance on long-term dependency problems [19, 20].\\\" Again, [19] introduced simple NARX RNNs, as discussed extensively in our paper. Thus the extent to which [1]'s skip coefficients overlap with our work is that we both recognize that short paths are important. A difference between our work and [1] is that we provide a self-contained derivation of this.\\n\\n[1] Zhang et al. Architectural complexity measures of recurrent neural networks. Advances in neural information processing systems (NIPS), 2016.\\n\\n[19] Lin et al. Learning long-term dependencies in NARX recurrent neural networks. IEEE Transactions on Neural Networks, 7(6):1329\\u20131338, 1996.\\n\\n[20] Sutskever et al. Temporal-kernel recurrent neural networks. Neural Networks, 23(2):239\\u2013243, 2010.\\n\\n>>>>> Analysis does not provide any new insights.\\n\\nThe connection of gradient components to paths via the chain rule for ordered derivatives is new. However we agree that the analysis portion of the paper is not revolutionary - this was not the goal of the analysis. Our goals were to provide a self-contained justification of our approach and to extend the results from ([1], [2]) to general NARX RNNs.\\n\\n[1] Bengio et al. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.\\n\\n[2] Pascanu et al. On the difficulty of training recurrent neural networks. International Conference on Machine Learning (ICML), 28:1310-1318, 2013.\"}",
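For context, the simple NARX RNN update that both [1] and [19] build on can be sketched in a few lines. The dimensions and delay set below are illustrative; the point is that one weight matrix per delay makes parameters and computation grow linearly in the number of delays, which is the cost the response says MIST RNNs are designed to avoid.

```python
import numpy as np

d_in, d_h, delays = 8, 16, [1, 2, 4, 8]
rng = np.random.default_rng(0)
W_x = rng.normal(size=(d_h, d_in)) * 0.1
W_h = {d: rng.normal(size=(d_h, d_h)) * 0.1 for d in delays}  # one matrix per delay

def narx_step(x_t, history):
    """Simple NARX update: h_t = tanh(W_x x_t + sum_d W_d h_{t-d}).
    history[-d] is h_{t-d}; missing history is treated as zeros."""
    pre = W_x @ x_t
    for d in delays:
        if len(history) >= d:
            pre += W_h[d] @ history[-d]
    return np.tanh(pre)

history = []
for t in range(20):
    history.append(narx_step(rng.normal(size=d_in), history))
```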
"{\"title\": \"little novelty and unconvincing\", \"rating\": \"3: Clear rejection\", \"review\": \"The followings are my main critics of the paper:\\n1. Analysis does not provide any new insights. \\n2. Similar work (recurrent skip coefficient and the corresponding architecture in [1]) has been done, but has not been mentioned. \\n3. The experimental results are not convincing. This includes 1. the choices of tasks are limited -- very small in size, 2. the performance in pMNIST is worse than [1], under the same settings.\\n\\nHence I think the novelty of the paper is very little, and the experiments are not convincing.\\n\\n[1] Architectural Complexity Measures of Recurrent Neural Networks. Saizheng Zhang, Yuhuai Wu, Tong Che, Zhouhan Lin, Roland Memisevic, Ruslan Salakhutdinov, Yoshua Bengio. NIPS, 2016.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Summary: The authors introduce a variant of NARX RNNs, which has an additional attention mechanism and a reset mechanism. The attention is only applied on subsets of hidden states, referred as delays. The delays are aggregated into a vector using the attention coefficients as weights, and then this vector is multiplied by the reset gates.\\n\\nThe model sounds a bit incremental, however, the performance improvements over pMNIST, copy and MobiAct tasks are interesting.\", \"a_similar_kind_of_architecture_has_been_already_proposed\": \"[1] Soltani et al. \\u201cHigher Order Recurrent Neural Networks\\u201d, arXiv 1605.00064\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
Byk4My-RZ | Flexible Prior Distributions for Deep Generative Models | [
"Yannic Kilcher",
"Aurelien Lucchi",
"Thomas Hofmann"
] | We consider the problem of training generative models with deep neural networks as generators, i.e. to map latent codes to data points. Whereas the dominant paradigm combines simple priors over codes with complex deterministic models, we argue that it might be advantageous to use more flexible code distributions. We demonstrate how these distributions can be induced directly from the data. The benefits include: more powerful generative models, better modeling of latent structure and explicit control of the degree of generalization. | [
"Deep Generative Models",
"GANs"
] | Reject | https://openreview.net/pdf?id=Byk4My-RZ | https://openreview.net/forum?id=Byk4My-RZ | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"S1TJm_gZG",
"ByCPOSp7M",
"SyoujCYgG",
"SJ31YST7f",
"BJwavHp7z",
"H1k_ZpFlf",
"HkQ7BJTSf"
],
"note_type": [
"official_review",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"decision"
],
"note_created": [
1512239845153,
1515178085716,
1511807859281,
1515178211661,
1515177918733,
1511801191245,
1517249819033
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper478/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Paper478/Authors"
],
[
"ICLR.cc/2018/Conference/Paper478/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper478/Authors"
],
[
"ICLR.cc/2018/Conference/Paper478/Authors"
],
[
"ICLR.cc/2018/Conference/Paper478/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"An interesting idea with a somewhat questionable execution\", \"rating\": \"5: Marginally below acceptance threshold\", \"review\": \"The paper proposes, under the GAN setting, mapping real data points back to the latent space via the \\\"generator reversal\\\" procedure on a sample-by-sample basis (hence without the need of a shared recognition network) and then using this induced empirical distribution as the \\\"ideal\\\" prior targeting which yet another GAN network might be trained to produce a better prior for the original GAN.\\n\\nI find this idea potentially interesting but am more concerned with the poorly explained motivation as well as some technical issues in how this idea is implemented, as detailed below.\\n\\n1. Actually I find the entire notion of an \\\"ideal\\\" prior under the GAN setting a bit strange. To start with, GAN is already training the generator G to match the induced P_G(x) (from P(z)) with P_d(x), and hence by definition, under the generator G, there should be no better prior than P(z) itself (because any change of P(z) would then induce a different P_G(x) and hence only move away from the learning target).\\n\\nI get it that maybe under different P(z) the difficulty of learning a good generator G can be different, and therefore one may wish to iterate between updating G (under the current P(z)) and updating P(z) (under the current G), and hopefully this process might converge to a better solution. But I feel this sounds like a new angle and not the one that is adopted by the authors in this paper.\\n\\n2. I think the discussions around Eq. (1) are not well grounded. Just as you said right before presenting Eq. (1), typically the goal of learning a DGM is just to match Q_x with the true data distrubution P_x. It is **not** however to match Q(x,z) with P(x,z). And btw, don't you need to put E_z[ ... ] around the 2nd term on the r.h.s. ?\\n\\n3. I find the paper mingles notions from GAN and VAE sometimes and misrepresents some of the key differences between the two.\\n\\nE.g. in the beginning of the 2nd paragraph in Introduction, the authors write \\\"Generative models like GANs, VAEs and others typically define a generative model via a deterministic generative mechanism or generator ...\\\". While I think the use of a **deterministic** generator is probably one of the unique features of GAN, and that is certainly not the case with VAE, where typically people still need to specify an explicit probabilistic generative model.\\n\\nAnd for this same reason, I find the multiple references of \\\"a generative model P(x|z)\\\" in this paper inaccurate and a bit misleading.\\n\\n4. I'm not sure whether it makes good sense to apply an SVD decomposition to the \\\\hat{z} vectors. It seems to me the variances \\\\nu^2_i shall be directly estimated from \\\\hat{z} as is. Otherwise, the reference \\\"ideal\\\" distribution would be modeling a **rotated** version of the \\\\hat{z} samples, which imo only introduces unnecessary discrepancies.\\n\\n5. I don't quite agree with the asserted \\\"multi-modal structure\\\" in Figure 2. Let's assume a 2d latent space, where each quadrant represents one MNIST digit (e.g. 1,2,3,4). You may observe a similar structure in this latent space yet still learn a good generator under even a standard 2d Gaussian prior. I guess my point is, a seemingly well-partitioned latent space doesn't bear an obvious correlation with a multi-modal distribution in it.\\n\\n6. 
The generator reversal procedure needs to be carried out once for each data point separately, and also when the generator has been updated, which seems to be introducing a potentially significant bottleneck into the training process.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
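For concreteness, "generator reversal" as described here amounts to per-sample optimization of the latent code against a fixed generator. In this sketch the generator architecture, optimizer, zero initialization, and step count are all placeholder choices, not the paper's exact procedure.

```python
import torch

# Stand-in generator; in practice this would be the pretrained GAN generator.
G = torch.nn.Sequential(torch.nn.Linear(20, 128), torch.nn.ReLU(),
                        torch.nn.Linear(128, 784))

def reverse_generator(x, steps=200, lr=0.05):
    """Find one latent code per sample by descending the reconstruction error
    of the fixed generator G."""
    z = torch.zeros(x.shape[0], 20, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z) - x) ** 2).mean()
        loss.backward()
        opt.step()
    return z.detach()

x_batch = torch.rand(64, 784)
z_hat = reverse_generator(x_batch)  # can run in large batches, since no
                                    # learning of G takes place during reversal
```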
"{\"title\": \"Thank you\", \"comment\": \"Thank you for the comments. We invite you to have a look at our appendix, which now includes experiments you suggested.\\n\\n- It related to adversarial learned inference and BiGAN, in term of learning the mapping z ->x, x->z and seeking the agreement. \\n\\nWe agree that there is a relation, but also there are fundamental differences in our motivation and the approach itself. Most importantly, we do not learn the mapping x -> z, but we instead rely on a deterministic procedure for doing so.\\n\\n\\n- Have you looked at the decay of the singular values of the latent codes obtained from reversing the generator? Is this data low rank? how does this change depending on the dimensionality of the latent codes? Maybe adding plots to the paper can help.\\n\\nWe have updated the paper to include plots of the distribution of singular values in different dimensional latent spaces (see appendix, figure 8). It appears that the reconstructed latent codes are not low rank, agreeing with what one would expect from a well-trained generator.\\n\\n\\n- the prior agreement score is interesting but assuming gaussian prior also for the learned latent codes from real data is maybe not adequate. Maybe computing the entropy of the codes using a nearest neighbor estimate of the entropy can help understanding the entropy difference wrt to the isotropic gaussian prior?\\n\\nWe have experimented with nearest neighbor methods, but found them unreliable for high-dimensional spaces. Note that the reason we use a diagonal gaussian for the PAG scores is not that we propose this to be the best prior, but because it is a single step in complexity above the naive prior. If we find a discrepancy between these two, than we also know that the naive prior is inferior to any even more complex prior.\\n\\n\\n- Have you tried to multiply the isotropic normal noise with the learned singular values and generate images from this new prior and compute inceptions scores etc? Maybe also rotating the codes with the singular vector matrix V or \\\\Sigma^{0.5} V?\\n\\nAs mentioned, our intention is not to use the non-isotropic gaussian as a prior in practice, but we have indeed tried this and have not found a significant improvement in either inception scores or visual results.\\n\\n\\n- What architecture did you use for the prior generator GAN?\\n\\nWe briefly describe this in the appendix to be four fully connected layers. We\\u2019ve updated the section to clarify that the rest of the architecture (nonlinearities, batch norm, etc.) matches the original GAN.\\n\\n\\n- Have you thought of an end to end way to learn the prior generator GAN? \\nIt is certainly possible to learn the data induced prior continuously along with the training procedure and we have had good results when trying this ourselves. However, this requires running the reversal procedure in a continuous fashion, rather than just once, and introduces an impractical overhead. Further, we regard such a procedure as a separate contribution from this paper.\"}",
"{\"title\": \"Using flexible priors for generative models\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"The paper demonstrates the need and usage for flexible priors in the latent space alongside current priors used for the generator network. These priors are indirectly induced from the data - the example discussed is via an empirical diagonal covariance assumption for a multivariate Gaussian. The experimental results show the benefits of this approach.\\nThe paper provides for a good read.\", \"comments\": \"1. How do the PAG scores differ when using a full covariance structure? Diagonal covariances are still very restrictive. \\n2. The results are depicted with a latent space of 20 dimensions. It will be informative to see how the model holds in high-dimensional settings. And when data can be sparse. \\n3. You could consider giving the Discriminator, real data etc in Fig 1 for completeness as a graphical summary.\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thanks for the comments. Please see our responses below.\\n\\n1. How do the PAG scores differ when using a full covariance structure? Diagonal covariances are still very restrictive.\\n\\nWe have attempted to use full covariances, but more often than not, we ran into numerical issues that made the resulting scores unusable. Note that the use of diagonal covariances for calculating the scores is purposefully chosen to be just a single step in complexity above the naive prior.\\n\\n\\n2. The results are depicted with a latent space of 20 dimensions. It will be informative to see how the model holds in high-dimensional settings. And when data can be sparse. \\n\\nThe improvement gained from using PGAN slightly decreases in higher dimensions (we have tried up to 200) in terms of visual results, simply because the data induced prior becomes less complex in higher dimensions. However, a discrepancy between the naive prior and the data induced prior remains and is equally measurable.\\n\\n\\n3. You could consider giving the Discriminator, real data etc in Fig 1 for completeness as a graphical summary.\\n\\nWe originally designed the Figure as you suggested but found the graphic to be too cluttered. Since we assume basic familiarity with GANs throughout the text, we therefore decided to use the \\u201csimplified\\u201d version provided in our submission.\"}",
"{\"title\": \"Very helpful feedback\", \"comment\": \"Thank you for the detailed feedback. We have made changes to the writeup and would like to address your comments below:\\n\\n1. Notion of \\u201cIdeal\\u201d prior:\\n\\nWe do agree that using the terminology \\u201cideal prior\\u201d to refer to the data induced prior might cause confusions and we have now adjusted the writeup accordingly.\\nHowever, we disagree with the statement \\u201cthere should be no better prior than P(z) itself\\u201d where P(z) refers to what we call \\u201cnaive\\u201d prior. The reason is that the generator does not have infinite capacity to map any distribution to any other distribution, but is restricted by its architecture and by the training procedure. We highlighted the resulting discrepancy in our experiments by showing that there exist \\u201cempty\\u201d regions under the naive prior (figure 3).\\nFor a perfect generator, moving away from the naive prior would indeed move the generated data away from the learning target, but in practice, we have shown that replacing the naive prior with the data induced prior can actually improve the results significantly (figure 5).\\n\\n\\n1.5 \\u201cone may wish to iterate between updating G (under the current P(z)) and updating P(z) (under the current G), and hopefully this process might converge to a better solution.\\u201d\\n\\nThis is indeed a valid procedure and we have done this successfully, but we would like to keep the contribution of this paper focused to justifying a single step in this procedure and therefore did not include these results.\\n\\n\\n2. I think the discussions around Eq. (1) are not well grounded.\\n\\nWe implicitly argue that matching the joint distributions relates to matching the marginals.\\nIndeed, the KL divergence between the joint distributions is trivially a lower bound on the KL divergence between the marginals and since training the generator to convergence will minimize the conditional KL, further improvement can only be made by matching the priors.\\n\\n2.5 don't you need to put E_z[ ... ] around the 2nd term on the r.h.s. ?\\n\\nAbsolutely. We have updated the writeup.\\n\\n\\n3. the paper mingles notions from GAN and VAE sometimes\\n\\nWe have updated the writeup to focus our discussion on GANs (expect in the first paragraph). \\n\\n\\n4. I'm not sure whether it makes good sense to apply an SVD decomposition to the \\\\hat{z} vectors. It seems to me the variances \\\\nu^2_i shall be directly estimated from \\\\hat{z} as is. Otherwise, the reference \\\"ideal\\\" distribution would be modeling a **rotated** version of the \\\\hat{z} samples, which imo only introduces unnecessary discrepancies.\\n\\nThe SVD is only used to compute the prior agreement score and the use of it is resulting from the definition of the KL between multivariate normals. When we learn the data induced prior, our targets are the reconstructed latent codes as is.\\n\\n\\n5. I don't quite agree with the asserted \\\"multi-modal structure\\\" in Figure 2. Let's assume a 2d latent space, where each quadrant represents one MNIST digit (e.g. 1,2,3,4). You may observe a similar structure in this latent space yet still learn a good generator under even a standard 2d Gaussian prior. 
I guess my point is, a seemingly well-partitioned latent space doesn't bear an obvious correlation with a multi-modal distribution in it.\\n\\nWe agree with your statement, but Figure 2 shows a latent space that is not only well-partitioned, but also has empty regions that shouldn\\u2019t be empty under the original prior. If there are regions in the latent space that are never used when explicitly reconstructing the data manifold, but the generator samples from all regions equally when learning to match the same data manifold, there must be a multi-modal structure that disagrees with the given prior.\\n\\n\\n6. The generator reversal procedure needs to be carried out once for each data point separately, and also when the generator has been updated, which seems to be introducing a potentially significant bottleneck into the training process.\\n\\nThe reversal procedure is carried out once per data point indeed, but this only happens once, after the generator has finished training using the naive prior. In addition, this can be carried out using very large batches of data (since no learning takes place during reversal). Thus, the overhead essentially amounts to one large-batch pass over the data in the entire duration of learning.\"}",
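The bound invoked in point 2 above follows from the chain rule for KL divergence; in generic notation (not copied from the paper's Eq. (1)):

```latex
\begin{align}
\mathrm{KL}\!\left(Q(x,z)\,\|\,P(x,z)\right)
  &= \mathrm{KL}\!\left(Q(z)\,\|\,P(z)\right)
   + \mathbb{E}_{z \sim Q(z)}\!\left[\mathrm{KL}\!\left(Q(x \mid z)\,\|\,P(x \mid z)\right)\right] \\
  &\ge \mathrm{KL}\!\left(Q(x)\,\|\,P(x)\right).
\end{align}
```

So once generator training has driven the conditional term down, any further reduction of this upper bound on the marginal KL must come from the prior term KL(Q(z) || P(z)).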
"{\"title\": \"review for flexible priors for GAN\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"Summary:\\n\\nThe paper proposes to learn new priors for latent codes z for GAN training. for this the paper shows that there is a mismatch between the gaussian prior and an estimated of the latent codes of real data by reversal of the generator . To fix this the paper proposes to learn a second GAN to learn the prior distributions of \\\"real latent code\\\" of the first GAN. The first GAN then uses the second GAN as prior to generate the z codes. \\n \\nQuality/clarity:\\n\\nThe paper is well written and easy to follow.\", \"originality\": \"\", \"pros\": [\"-The paper while simple sheds some light on important problem with the prior distribution used in GAN.\", \"the second GAN solution trained on reverse codes from real data is interesting\", \"In general the topic is interesting, the solution presented is simple but needs more study\"], \"cons\": [\"It related to adversarial learned inference and BiGAN, in term of learning the mapping z ->x, x->z and seeking the agreement.\", \"The solution presented is not end to end (learning a prior generator on learned models have been done in many previous works on encoder/decoder)\"], \"general_review\": \"\", \"more_experimentation_with_the_latent_codes_will_be_interesting\": [\"Have you looked at the decay of the singular values of the latent codes obtained from reversing the generator? Is this data low rank? how does this change depending on the dimensionality of the latent codes? Maybe adding plots to the paper can help.\", \"the prior agreement score is interesting but assuming gaussian prior also for the learned latent codes from real data is maybe not adequate. Maybe computing the entropy of the codes using a nearest neighbor estimate of the entropy can help understanding the entropy difference wrt to the isotropic gaussian prior?\", \"Have you tried to multiply the isotropic normal noise with the learned singular values and generate images from this new prior and compute inceptions scores etc? Maybe also rotating the codes with the singular vector matrix V or \\\\Sigma^{0.5} V?\", \"What architecture did you use for the prior generator GAN?\", \"Have you thought of an end to end way to learn the prior generator GAN?\", \"****** I read the authors reply. Thank you for your answers and for the SVD plots this is helpful. *****\"], \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"decision\": \"Reject\", \"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"This paper presents a method for learning more flexible prior distributions for GANs by learning another distribution on top of the latent codes for training examples. It's reminiscent of layerwise training of deep generative models. This seems like a reasonable thing to do, but it's probably not a substantial enough contribution given that similar things have been done for various other generative models. Experiments show improvement in samples compared with a regular GAN, but don't compare against various other techniques that have been proposed for fixing mode dropping. For these reasons, as well as various issues pointed out by the reviewers, I don't recommend acceptance.\"}"
]
} |
HJcSzz-CZ | Meta-Learning for Semi-Supervised Few-Shot Classification | [
"Mengye Ren",
"Eleni Triantafillou",
"Sachin Ravi",
"Jake Snell",
"Kevin Swersky",
"Joshua B. Tenenbaum",
"Hugo Larochelle",
"Richard S. Zemel"
] | In few-shot classification, we are interested in learning algorithms that train a classifier from only a handful of labeled examples. Recent progress in few-shot classification has featured meta-learning, in which a parameterized model for a learning algorithm is defined and trained on episodes representing different classification problems, each with a small labeled training set and its corresponding test set. In this work, we advance this few-shot classification paradigm towards a scenario where unlabeled examples are also available within each episode. We consider two situations: one where all unlabeled examples are assumed to belong to the same set of classes as the labeled examples of the episode, as well as the more challenging situation where examples from other distractor classes are also provided. To address this paradigm, we propose novel extensions of Prototypical Networks (Snell et al., 2017) that are augmented with the ability to use unlabeled examples when producing prototypes. These models are trained in an end-to-end way on episodes, to learn to leverage the unlabeled examples successfully. We evaluate these methods on versions of the Omniglot and miniImageNet benchmarks, adapted to this new framework augmented with unlabeled examples. We also propose a new split of ImageNet, consisting of a large set of classes, with a hierarchical structure. Our experiments confirm that our Prototypical Networks can learn to improve their predictions due to unlabeled examples, much like a semi-supervised algorithm would. | [
"Few-shot learning",
"semi-supervised learning",
"meta-learning"
] | Accept (Poster) | https://openreview.net/pdf?id=HJcSzz-CZ | https://openreview.net/forum?id=HJcSzz-CZ | ICLR.cc/2018/Conference | 2018 | {
"note_id": [
"SJ7s7qkzG",
"Hy4CMckMG",
"rJzcaGvgf",
"Hyrfr91fG",
"BkmDZ-ZZG",
"SkW9BQ9lG",
"Hyx7bEPez",
"Skst716Bf"
],
"note_type": [
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_review",
"official_review",
"decision"
],
"note_created": [
1513231258876,
1513231051568,
1511628170266,
1513231628948,
1512276315364,
1511826825191,
1511633176069,
1517249410715
],
"note_signatures": [
[
"ICLR.cc/2018/Conference/Paper788/Authors"
],
[
"ICLR.cc/2018/Conference/Paper788/Authors"
],
[
"ICLR.cc/2018/Conference/Paper788/AnonReviewer3"
],
[
"ICLR.cc/2018/Conference/Paper788/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2018/Conference/Paper788/AnonReviewer2"
],
[
"ICLR.cc/2018/Conference/Paper788/AnonReviewer1"
],
[
"ICLR.cc/2018/Conference/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Response to reviewer 1\", \"comment\": \"Thank you for the comments. We\\u2019d like to clarify our setup here: The problem as we have defined it is to correctly perform the given N-way classification in each episode (similarly as in the previous work). Distractors are introduced to make the problem harder in a more realistic way, but the goal is not to be able to classify them. Specifically, our model needs to understand which points are irrelevant for the given classification task (\\u201cdistractors\\u201d) in order to not take them into account, but actually classifying these distractors into separate categories is not required in order to perform the given classification task, so our models make no effort to do this.\\n\\nFurther, we would like to emphasize that adding distractor examples in few-shot classification settings is a novel and more realistic learning environment compared to previous approaches in supervised few-shot learning and as well as concurrent approaches in semi-supervised few-shot learning [1,2]. It is non-trivial to show that various versions of semi-supervised clustering can be trained end-to-end from scratch as another layer on top of prototypical networks, with the presence of distractor clusters (note that each distractor class has the same number of images as a non-distractor class).\", \"references\": \"[1]: Few-Shot Learning with Graph Neural Networks. Anonymous. Submitted to ICLR, 2017.\\n[2]: Semi-Supervised Few-Shot Learning with Prototypical Networks. Rinu Boney and Alexander Ilin. CoRR, abs/1711.10856, 2017.\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"We appreciate the constructive comments from reviewer 2 and we are delighted to learn that the reviewer feels that our paper is well written and organized.\\n\\n\\u201cbuilds largely on previous work\\u2026 only some small technical novelty\\u2026\\u201d\\nWe would like to emphasize that we introduce a new task for few-shot classification, incorporating unlabeled items. This is impactful as follow-up work can use our dataset as a public benchmark. In fact, there are several concurrent ICLR submissions and arxiv pre-prints [1,2] that also introduce semi-supervised few-shot learning. However compared to these concurrent papers, our benchmark extends beyond this work into more realistic and generic settings, with hierarchical class splits and unlabeled distractor classes, which we believe will make positive contributions to the community.\\n\\nThe fact that our semi-supervised prototypical network can be trained end-to-end from scratch is non-trivial, especially under many distractor clusters (note that each distractor class has the same number of images as a non-distractor class). We argue that our extension is simple yet effective, serving as another layer on top of the regular prototypical network layer, and provides consistent improvement in the presence of unlabeled examples. Further, to our knowledge, our best-performing method, the masked soft k-means, is novel.\\n\\n\\u201cIt would be better to present accuracy\\u2026\\u201d\\nThank you for the suggestion. We will revise it in our next version.\\n\\n\\u201cno other approach is considered besides the prototypical network and its variants.\\u201d\\nProtoNets is one of the top performing methods for few-shot learning and our proposed extensions each naturally forms another layer on top of the Prototypical layer. To address the concern, we are currently running other variants of the models such as a nearest neighbor baseline, and will report results before the ICLR discussion period ends. In the Omniglot dataset literature, many simple baselines has been extensively explored, and Prototypical Networks are so far the state-of-the-art. Table 1 summarizes the performance for a 5-way 5-shot benchmark (results reported by [3])\\n\\nTable 1 - Omniglot dataset baselines\\nMethod Accuracy\\nKNN pixel 48%\\nKNN deep 69%\\nMann et al. [3] 88%\\nProtoNet 99.7%\", \"references\": \"[1]: Few-Shot Learning with Graph Neural Networks. Anonymous. Submitted to ICLR, 2017.\\n[2]: Semi-Supervised Few-Shot Learning with Prototypical Networks. Rinu Boney and Alexander Ilin. CoRR, abs/1711.10856, 2017.\\n[3]: One-shot learning with Memory-Augmented Neural Networks. ICML 2016.\"}",
"{\"title\": \"limited novelty\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper is an extension of the \\u201cprototypical network\\u201d which will be published in NIPS 2017. The classical few-shot learning has been limited to using the unlabeled data, while this paper considers employing the unlabeled examples available to help train each episode. The paper solves a new semi-supervised situation, which is more close to the setting of the real world, with an extension of the prototype network. Sufficient implementation detail and analysis on results.\\n\\nHowever, this is definitely not the first work on semi-supervised formed few-shot learning. There are plenty of works on this topic [R1, R2, R3]. The authors are advised to do a thorough survey of the relevant works in Multimedia and computer vision community. \\n \\nAnother concern is that the novelty. This work is highly incremental since it is an extension of existing prototypical networks by adding the way of leveraging the unlabeled data. \\n\\nThe experiments are also not enough. Not only some other works such as [R1, R2, R3]; but also the other na\\u00efve baselines should also be compared, such as directly nearest neighbor classifier, logistic regression, and neural network in traditional supervised learning. Additionally, in the 5-shot non-distractor setting on tiered ImageNet, only the soft kmeans method gets a little bit advantage against the semi-supervised baseline, does it mean that these methods are not always powerful under different dataset?\\n\\n[R1] \\u201cVideostory: A new multimedia embedding for few-example recognition and translation of events,\\u201d in ACM MM, 2014\\n\\n[R2] \\u201cTransductive Multi-View Zero-Shot Learning\\u201d, IEEE TPAMI 2015\\n\\n[R3] \\u201cVideo2vec embeddings recognize events when examples are scarce,\\u201d IEEE TPAMI 2014\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"\\u201cThere are plenty of works on this topic\\u2026\\u201d\\nWe also thank the reviewer for pointing out related zero-shot learning literature and we will study them and add those references to the next version of the paper. Based on our preliminary reading, [1] is a journal version that builds on top of [2], with both papers presenting very similar approaches for the application of event recognition in videos. Transductive Multi-View Zero-Shot Learning [3] uses a similar label propagation procedure as ours. However, while [3] uses standalone deep feature extractors, we show that our semi-supervised prototypical network can be trained completely end-to-end. One of the non-trivial results of our paper is that we show that end-to-end meta-learning significantly improves the performance (see Semi-supervised Inference vs. Soft K-means). We would like to emphasize that end-to-end semi-supervised learning in a meta-learning framework is, to the best of our knowledge, a novel contribution.\\n\\n\\u201c...other na\\u00efve baselines should also be compared...\\u201d\\nThe recent literature on few-shot learning has established that meta-learning-based approaches outperform kNN and standard neural network based approaches. For the Omniglot dataset, Mann et al. [4] has previously studied baselines such as KNN either in pixel space or deep features, and feedforward NNs. They found these baselines all lag behind their method by quite a lot, and meanwhile Prototypical Networks outperform Mann et al. by another significant margin. For example, Table 1 summarizes the performance for 5-shot, 5-way classification. Therefore, we will provide supervised nearest neighbor, logistic regression, and neural network baselines for completeness; however, we believe that our work is built on top of state-of-the-art methods, and should beat these simple baselines.\\n\\nTable 1 - Omniglot dataset baselines\\nMethod Accuracy\\nKNN pixel 48%\\nKNN deep 69%\\nMann et al. [4] 88%\\nProtoNet 99.7%\\n\\n\\u201c...not always powerful under different dataset?\\u201d\\nFor completeness we ran both 1-shot and 5-shot settings and found that our method consistently outperforms the baselines. While in 5-shot the improvement is less, this is reasonable since the number of labeled items is larger and the benefit brought by unlabeled items is considerably smaller than in 1-shot settings. We disagree with the comment that our model is not robust under different datasets, since the best settings we found is consistent across all three, quite diverse, datasets, including the novel and much larger tieredImageNet.\", \"references\": \"[1] \\u201cVideo2vec embeddings recognize events when examples are scarce,\\u201d IEEE TPAMI 2014\\n[2] \\u201cVideostory: A new multimedia embedding for few-example recognition and translation of events,\\u201d in ACM MM, 2014.\\n[3]: Transductive Multi-View Zero-Shot Learning, IEEE TPAMI 2015.\\n[4]: One-shot learning with Memory-Augmented Neural Networks. ICML 2016.\"}",
"{\"title\": \"release tiered-imagenet split?\", \"comment\": \"Great work! Could you release the split for tiered-Imagenet?\"}",
"{\"title\": \"extension of the Prototypical Network to semi-supervised setting\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"This paper proposes to extend the Prototypical Network (NIPS17) to the semi-supervised setting with three possible\\nstrategies. One consists in self-labeling the unlabeled data and then updating the prototypes on the basis of the \\nassigned pseudo-labels. Another is able to deal with the case of distractors i.e. unlabeled samples not beloning to\\nany of the known categories. In practice this second solution is analogous to the first, but a general 'distractor' class\\nis added. Finally the third technique learns to weight the samples according to their distance to the original prototypes.\", \"these_strategies_are_evaluated_in_a_particular_semi_supervised_transfer_learning_setting\": \"the models are first trained\\non some source categories with few labeled data and large unlabeled samples (this setting is derived by subselecting\\nmultiple times a large dataset), then they are used on a final target task with again few labeled data and large \\nunlabeled samples but beloning to a different set of categories.\\n\\n+ the paper is well written, well organized and overall easy to read\\n+/- this work builds largely on previous work. It introduces only some small technical novelty inspired by soft-k-means\\nclustering that anyway seems to be effective.\\n+ different aspect of the problem are analyzed by varying the number of disctractors and varying the level of\\nsemantic relatedness between the source and the target sets\\n\\nFew notes and questions\\n1) why for the omniglot experiment the table reports the error results? It would be better to present accuracy as for the other tables/experiments\\n2) I would suggest to use source and target instead of train and test -- these two last terms are confusing because\\nactually there is a training phase also at test time.\\n3) although the paper indicate that there are different other few-shot methods that could be applicable here, \\nno other approach is considered besides the prothotipical network and its variants. An further external reference \\ncould be used to give an idea of what would be the experimental result at least in the supervised case.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The studied problem is interesting, and the paper is well-written. While the proposed method is a natural extension of the existing works.\", \"rating\": \"6: Marginally above acceptance threshold\", \"review\": \"In this paper, the authors studied the problem of semi-supervised few-shot classification, by extending the prototypical networks into the setting of semi-supervised learning with examples from distractor classes. The studied problem is interesting, and the paper is well-written. Extensive experiments are performed to demonstrate the effectiveness of the proposed methods. While the proposed method is a natural extension of the existing works (i.e., soft k-means and meta-learning).On top of that, It seems the authors have over-claimed their model capability at the first place as the proposed model cannot properly classify the distractor examples but just only consider them as a single class of outliers. Overall, I would like to vote for a weakly acceptance regarding this paper.\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"ICLR 2018 Conference Acceptance Decision\", \"comment\": \"The paper extends the earlier work on Prototypical networks to semi-supervised setting. Reviewers largely agree that the paper is well-written. There are some concerns on the incremental nature of the paper wrt to the novelty aspect but in the light of reported empirical results which show clear improvement over earlier work and given the importance of the topic, I recommend acceptance.\", \"decision\": \"Accept (Poster)\"}"
]
} |
rJl0r3R9KX | Regularized Learning for Domain Adaptation under Label Shifts | [
"Kamyar Azizzadenesheli",
"Anqi Liu",
"Fanny Yang",
"Animashree Anandkumar"
] | We propose Regularized Learning under Label shifts (RLLS), a principled and practical domain-adaptation algorithm to correct for shifts in the label distribution between a source and a target domain. We first estimate importance weights using labeled source data and unlabeled target data, and then train a classifier on the weighted source samples. We derive a generalization bound for the classifier on the target domain which is independent of the (ambient) data dimensions, and instead only depends on the complexity of the function class. To the best of our knowledge, this is the first generalization bound for the label-shift problem where the labels in the target domain are not available. Based on this bound, we propose a regularized estimator for the small-sample regime which accounts for the uncertainty in the estimated weights. Experiments on the CIFAR-10 and MNIST datasets show that RLLS improves classification accuracy, especially in the low sample and large-shift regimes, compared to previous methods. | [
"Deep Learning",
"Domain Adaptation",
"Label Shift",
"Importance Weights",
"Generalization"
] | https://openreview.net/pdf?id=rJl0r3R9KX | https://openreview.net/forum?id=rJl0r3R9KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJlQWqQreN",
"S1xCYXQGyV",
"BJe5ulInCQ",
"rygX0-Ehp7",
"BkeG9kNh6X",
"Hkldv4Aj6X",
"BJg59Q0iT7",
"r1ghyrDc2m",
"SyxJIBjuhm",
"rkl-PP8d2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545054715049,
1543807878293,
1543426162395,
1542369739111,
1542369162421,
1542345824485,
1542345617852,
1541203171721,
1541088583253,
1541068633167
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1592/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1592/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1592/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1592/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1592/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1592/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1592/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1592/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1592/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1592/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper gives a novel algorithm for transfer learning with label distribution shift with provably guarantees. As the reviewers pointed out, the pros include: 1) a solid and motivated algorithm for a understudied problem 2) the algorithm is implemented empirically and gives good performance. The drawback includes incomplete/unclear comparison with previous work. The authors claimed that the code of the previous work cannot be completed within a reasonable amount of time. The AC decided that the paper could be accepted without such a comparison, but the authors are strongly urged to clarify this point or include the comparison for a smaller dataset in the final revision if possible.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta-review\"}",
"{\"title\": \"memory issue\", \"comment\": \"We would like to thank you for providing the link to the code. Inspired by your suggestion to analyze [1], we deployed the code provided by the authors and ran it for data with dimensionality equal to 700 (similar to MNIST).\\nIn our experiments, we found that the largest number of samples for which we could feasibly run their code without memory halt was 11k. A similar observation was also reported in Lipton et al. 2018 where the authors were not able to apply the kernel mean matching approach of Zhang et al., 2013 on datasets of size larger than 8k samples (we are not aware of the type of machine they used in order to report this number). This hinders the application of these methods to real-life machine learning problems where the data size will be often above 100K. The aim of our paper is explicitly to find a method that works for general large-scale (training) datasets, and thus due to the memory issue, we do not plan to provide a comparison to these baselines in all of our plots. However, we can conduct a comparative study on a smaller dataset in a separate experiment and include it in the paper. We again appreciate your helpful comment, and we are working on adding a detailed explanation of this study to our paper.\"}",
"{\"title\": \"About baselines\", \"comment\": \"Thank you for your answers.\\n\\nThe code for [1] can be actually found here http://web.eecs.umich.edu/~cscott/code.html. There are at least two methods that authors can use as baselines that can be deployed on real data.\"}",
"{\"title\": \"A contribution for a rather unstudied problem with new theoretical results used in the implementation, improved empirical results\", \"comment\": \"We thank the reviewer for the detailed review and pointers to related works. Please see our detailed answers to your comments below.\\n\\n1) Related literature:\\n\\nWe appreciate the reviewer's pointers to related works based on an anomaly detection framework. The two mentioned papers as well as Blanchard et al. 2010, are considered as pioneer works in this area and do make a strong theoretical contribution. We added them to our related works along with a discussion. These methods employ a function class to estimate the class proportions where they require the knowledge of the VC dimension of this function class for weight estimation task. It has elegant theoretical results under mutual irreducibility assumption and derive asymptotic guarantees on the weight estimation which depends on the VC dimension of the mentioned class. However, as we understand these approaches when a class of deep neural networks with unknown (or at least vacuous) VC dimension is deployed, the comparison is empirically prohibited. Moreover, these methods require solving additional computationally expensive optimization problems in their inner loop weight estimation which are intractable for large function classes such as deep networks. We again appreciate the reviewer's suggestion and added a discussion about these methods in our paper.\\n\\n2) How the estimator obtained with the inverse of the estimated confusion matrix can be arbitrarily bad and ours is not:\\n\\nTechnically, estimating \\\\hat{w} using the inverse of the estimated confusion matrix \\\\hat{C} can be arbitrarily bad since the \\\\hat{C} can be arbitrary close to a singular matrix. In the low sample setting when the smallest singular value \\\\sigma_\\\\min of the true confusion matrix C is small, this issue gets amplified (reflected in the bound). In fact, Lipton et al. 2018 even require that the number of samples used to estimate the confusion matrix (the number samples in the source domain) has to be larger than O(1/\\\\sigma_\\\\min^2). That means, they do NOT offer guarantees for the small sample regime.\\n\\nOther than being a very large lower bound, this criterion is unrealistic to check since \\\\sigma_\\\\min is not known a priori. They require this constraint to make sure that with high probability the estimated confusion matrix \\\\hat{C} is bounded away from being singular. In contrast, we deploy the principles of statistical linear inverse problems and propose a weight estimation approach which does not require inverting \\\\hat{C} and estimate the importance-weights \\\\hat{w} by solving a convex optimization problem. \\n\\nWe have clarified our improvements compared to BBSL including minimum sample complexity and k \\\\log k in the introduction and paragraphs after Lemma 1. We incorporated this comment in our main draft and stated that the \\\\hat{C} can be arbitrarily close to a singular matrix. \\n\\n3) Comparison with Lipton et al ( factor k improvement ). and improvement only achieved when h_0 is ideal:\\nWe added a more detailed and direct comparison between our weight estimator and Theorem 3 in Lipton et al. in the discussion after Lemma 1.\", \"first_regarding_the_factor_k_improvement\": \"With respect to the dependence on \\\\delta, a slight confusion might have resulted from the way Theorem 3 in Lipton et al. was stated compared to ours. 
The authors did not write the statement in the form \\u201cwith probability at least 1- \\\\delta\\u201d but rather with terms dependent on k, n_q, and n_p. Translating the theorem to have a 1-\\\\delta type guarantee results in a dependence of delta that is exactly like ours.\", \"regarding_ideal_h_0\": \"h_0 need not be an ideal estimator, neither in Lipton et al.\\u2019s nor in our paper. The dependence on h_0 is implicit via \\\\sigma_{min}. We have emphasized this fact a bit more in the paper in the second paragraph after Lemma 1.\\n\\n4) Lemma 2 and its connection to Theorem 1.4 in Tropp 2012, as well as \\\\delta<0.5 assumption in Lemma 3:\\nWe added the definition of the norm by which we meant the spectral norm, equivalent to the largest eigenvalue of the matrix. Moreover, we also added more detailed links to theorems and results in Tropp 2012. \\n\\nRegarding delta<1/2 in Lemma 3: This was originally just a technical assumption (and natural, since usually, you want the bound to hold with probability above 0.5) to simplify the bound and make the dependence on n more transparent. In order to avoid confusion, we now removed it from our lemma statement. The change ultimately transfers to some change of universal constants in the upper bounds which we neglect by using the O-notation.\\n\\n5) Thanks for the detailed comments. We have corrected the typos\", \"re\": \"Figure 1: The labeling is actually correct as is. The dependence on k of the bound in Theorem 1 is in the log factors which were neglected in the lower bound to simplify the presentation\", \"re_two_layer_fully_connected_neural\": \"We made it more clear in the revised version that the bounds are independent of h_0 and that the dependence of the bound on h_0 is only via \\\\sigma_min.\"}",
"{\"title\": \"Improved estimators for correcting label shifts, but experiments can be improved\", \"comment\": \"We want to thank you for your thoughtful response and detailed questions.\\n\\n1) model for black box predictor h_0:\\nThe minimum singular value of the true C depends heavily on how well h can predict Y (which depends on the model you choose for the respective dataset in question). For example, imagine in binary classification that h predicts uniformly at random. Then all entries of C will be 1/2, C will have rank 1 and thus minimum singular value 0. This not only makes C hard to estimate. The final upper bound on the excess risk (in Theorem 1 and variants) also explicitly depends inversely on the minimum eigenvalue of C. Therefore, the better the black box predictor h_0 is in prediction, the better and more stable the estimation of C and the smaller the upper bound. We hope that the reformulation of the discussion in the second paragraph following Lemma 1 clarifies this point. You may also have a look at Figure 4 which illustrates the influence of the black box prediction accuracy directly.\\n\\n2) alpha for Dirichlet shift, number of samples for alpha = 0.01:\\nIn the Dirichlet shift, the probability vector p is sampled (from the simplex) according to a Dirichlet distribution with constant concentration parameters \\\\alpha_1 \\u2026 \\\\alpha_k = \\\\alpha. The larger \\\\alpha, the more mass is concentrated in the middle of the simplex (more uniform) and the smaller \\\\alpha, the more mass on the edges (more skewed), smaller p for the smallest class. I.e. the smaller \\\\alpha, the bigger the shift. We chose to run experiments on a variety of shifts to see how our method behaves under different shifts, and Dirichlet in particular since it was used in Lipton et al. While the total number of data points is set to 10000, for rather large shifts, it is possible that the smallest class has zero samples and this indeed happens for \\\\alpha = 0.01\\n\\n3) Minority-Class shifts with extreme shifts p=0.001:\\nYes, it is an extreme shift case. In our revision, we now use p=0.005 (now Figure 2) to better show when our method would help and outperform both unweighted classifier and Lipton et al. We have two more figures on p=0.01 and p=0.001 to the appendix. We will put more sets of experiments using different sample sizes and values of p in the appendix.\\n\\n4) Figure 3 low accuracy for small shifts, micro- vs. macro F-1 score, low total F-1 score:\\nFigure 3 shows the case when the target data is Dirichlet shifted with small alpha indicating a larger shift. In this paper, in order to create shifted data with certain label proportions, we do not use full the training set or testing set. In fact, both source and target sets consist of 10000 examples for all \\\\alpha. This may compromise performance but makes the comparison (between shifts) fair since the amount of data is fixed. We had added more explanation in the revision.\\n\\nThanks for the clarifying question, we added a more detailed description of our computation of the F-1 score in the paper now (second paragraph of 3.2.). The F-1 score is macro averaged so that when the test data is dominated by only a few classes, F-1 could be very low even though the overall accuracy is high. 
Note that the standard python package that we use, handles ill-defined cases as follows: When a class is not present but predicted in the target set, the F-1 value of that class is 0 while if there are no predicted examples in an absent class, the F-1 is counted as 1. Our ResNet is trained with fewer samples and has ~75% accuracy, which helps to explain the very low F-score for the extreme shift \\\\alpha =0.01: in fact there is only one class present in the target set, and any sample that is predicted to belong to one of the non-existent class, results to a zero F-1 value, thus decreasing the macro-averaged F-1 score drastically.\\n\\n5) The method only helps with fairly extreme shifts:\\nThe reason why we compare performances for relatively large shift cases is to demonstrate the advantages of our proposed method over BBSE in Lipton et al., achieving smaller weight estimation errors due to the regularization procedure. As shown in Figure 2(a) and Figure 3(a), as the shifts get larger, the weight estimation error of RLLS increases much more compared to BBSL. The harder the problem (i.e. more extreme shift or/and smaller target sample sizes, source shift rather than target shift), the bigger the advantage of our method compared to the baseline. The goal of our paper is exactly to target the hard regime. However, correcting label shifts should yield better accuracy (compared to the unweighted classifier) in more general cases using either method (RLLS or BBSL). We elaborate on this more in our revision and will also add more experiments on different shift types and CIFAR10 in the final version to give a more complete picture of the regimes where it is helpful.\"}",
"{\"title\": \"Interesting Algorithm with Solid Theories\", \"comment\": \"Thank you for your positive comments about the paper. Please see detailed answers to your questions below\\n\\n1. How realistic it is to assume we have prior knowledge on theta and sigma_min? \\n\\n1) Prior knowledge on upper bound of theta and lower bound of sigma_min. Our bounds hold for all choices of \\\\lambda, \\\\theta and \\\\sigma_\\\\min. In general, \\\\theta is unknown. However, it is reasonable to assume that we only want to be robust against shifts up to a certain \\\\theta_\\\\max, so that we essentially consider only the set of \\\\theta with norm up to \\\\theta_\\\\max. \\nAs to \\\\sigma_\\\\min in practice, although we may not know the \\\\sigma_\\\\min of the true confusion matrix, we can estimate it using the empirical confusion matrix. We have clarified that in algorithm 1 and added a clarifying discussion about both these matters in the paragraphs following Theorem 1.\\n\\n2. If I understand correctly, the only experiment where lambda is varied is Sec 3.3? It would be interesting if authors also included BBSE in Sec 3.3 as a baseline. \\n\\n2) We appreciate the reviewer's comment on the BBSE experiment. Upon the reviewer\\u2019s suggestion, we also ran BBSE for the experiment in Sec 3.3 and added the corresponding curves to Figure 5.\\n\\n3. The authors mentioned in the discussion that the generalization guarantee is obtained with no prior knowledge q/p is needed. However, doesn't theta implicitly represent the knowledge in p/q? \\n\\n3) We apologize for the lack of clarity in the mentioned statement. As the reviewer correctly observed, our generalization bound depends on theta. We clarified this statement to emphasize that, in contrast to prior methods, e.g., Chan & Ng, 2005 and Storkey, 2009, our importance weighting algorithm does not require any prior knowledge of theta and results in a generalization bound which depends on the theta.\"}",
"{\"title\": \"General reply to the reviewers and the area chair\", \"comment\": \"We would like to thank the reviewers and area chair for their thoughtful responses to our paper. We are grateful to each of you for the suggestions that helped us to improve the clarity of the presentation. To improve the flow and clarity of the presentation, we have restructured paper in various places. We have moved some of the experiments to the appendix and added new ones (some of them to the main text) that help to clarify the reviewers' questions. We added and reformulated clarifying discussions about our results and how they compare with baselines.\\n\\nWe have run more experiments which aim to clarify the regime in which our procedure has advantages compared to other label shift correcting methods. We aim to add more figures in the appendix in a potential camera ready version for different shifts parameters on CIFAR10 to present an even more complete picture.\\n\\nPlease find individual replies to each of the reviews in the respective threads.\"}",
"{\"title\": \"Interesting Algorithm with Solid Theories\", \"review\": \"Authors proposes a new algorithm for improving the stability of class importance weighting estimation procedure (Lipton et al., 2018) with a two-step procedure. The reparamaterization of w using the weight shift theta and lambda allows authors develop a generalization upperbound with terms rely on theta, sigma and lambda.\\n\\nThe problem of label shift is a known important issue in transfer learning but has been understudied.\\n\\nThe paper is very well written and the algorithm is well-motivated (introducing regularization to avoid the singularity) and post processing step looks sound (using lambda to de-biase). I only have a few minor questions: \\n\\n1. How realistic it is to assume we have prior knowledge on theta and sigma_min? \\n\\n2. If I understand correctly, the only experiment where lambda is varied is Sec 3.3? It would be interesting if authors also included BBSE in Sec 3.3 as a baseline. \\n\\n3. The authors mentioned in the discussion that the generalization guarantee is obtained with no prior knowledge q/p is needed. However, doesn't theta implicitly represent the knowledge in p/q? \\n\\n------------------------------------------------\\n\\nI have read authors' comments.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Improved estimators for correcting label shifts, but experiments can be improved\", \"review\": [\"The authors consider the problem of learning under label shifts, where the label proportions p(y) and q(y) of the training and test distributions differ, while the conditionals p(x|y) and q(x|y) are equal. They build upon the work by Lipton et al. 18 on estimating label proportion weights q(y)/p(y) using the confusion matrix, by proposing an improved estimator with regularization. They show that their estimator provides better weight estimates compared to the unregularized version, and it also gives better prediction accuracies under large label shift scenarios.\", \"One question I have about this approach is the choice of h in the confusion matrix estimation. Since the theory holds for any fixed hypothesis h, is there any guidance on how we should pick h? The authors seem to use the same model class for the weight estimation and predictions in the experiments. How would using a simpler h for weight estimation (e.g., linear logistic regression) affect the results presented here?\", \"The Dirichlet shifts described with only the parameter alpha is not particularly intuitive in conveying the size of shifts. The CIFAR10 and MNIST datasets contain about 6000 examples per class. How would a large shift with alpha=0.01 change the distribution, especially for the smallest class how many samples are retained? This can help the readers judge when the correction of label shifts are helpful.\", \"To clarify, in the experiments for Figure 4 using Minority-Class shifts, with p=0.001, is it true that there are less than 100 training examples for each of the minority classes in the training set? This seems like a very extreme shift.\", \"I also have trouble understanding Figure 3. RESNET-18 should give >90% accuracy on the original CIFAR10, but in 3b we see accuracies around 75% for small shifts. Also how is the F1-score in 3c computed? Is it micro-averaged or macro-averaged F1? Either way an F1 score below 20% is very low for the unweighted classifier, since RESNET-18 should give fairly good classification accuracy on each class separately if it has >90% overall accuracy.\", \"The paper is quite solid in motivating the need for better weight estimators for reweighing label proportions and their derivations, and manage to show improvements over the unregularized estimator. Details on the experiments should be improved to give the readers better ideas on when correcting for label shifts help. Right now it looks like it only helps for cases with fairly extreme shifts.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A contribution for a rather unstudied problem with new theoretical results used in the implementation, improved empirical results - however state-of-the-art section is incomplete leading to a lack of baselines, lack of comparison with the close related work of Lipton et al.'18\", \"review\": \"This paper presents a new contribution for a largely understudied problem of label shift (also called target shift), a situation occurring when the class proportions vary between the training and test sets. The proposed contribution builds upon a recent work on the subject by Lipton et al., 2018 and addresses several of its weaknesses. The paper also gives several improved generalisation bounds w.r.t. that of Lipton et al. that are further used as guidelines to tune the regularisation parameter based on the size of source and target samples. Finally, the empirical results show that the proposed algorithm outperforms that of Lipton et al. especially in cases where the shift in proportions becomes quite important.\\n\\n*Pros: \\n - A work in an area with very view contributions and a certain lack of theoretical results\\n -Theoretical results that are actually used in the algorithmic implementation and that allow to define the regularisation parameter based on the size of the available samples\\n -Improved empirical results\\n\\n\\n*Cons: \\n -An incomplete state-of-the-art section that does not cite several important contributions on the subject;\\n -Lack of baselines due to the incomplete state-of-the-art section;\\n -Lack of clear comparison with Lipton et al. both in terms of the proposed method and the obtained theoretical guarantees. \\n\\n\\n*Detailed comments:\\nThis paper is rather interesting and well-written.\\n\\nI have several major concerns regarding this paper. They can be summarised as follows:\\n\\n There is an important part of literature review on target shift that is missing in this paper. Even though, the paper mentioned the work of Chang, 2005 and Zhang, 2013, it completely ignores several other highly relevant methods such as [1,2]. These works also propose algorithms that allow to estimate class proportions that vary between training and test data. This estimation can then be used for cost-sensitive learning to correct the target shift. The paper should mention this work and add the corresponding methods to the baselines for comparison. \\n\\n Several statements that justify the contribution of this paper are unsupported. For instance, the paper states that the estimator obtained with the inverse of the confusion matrix can be arbitrary bad when the sample size and/or the singular values are small. However, this exact dependence can be found in Lemma 1 for the proposed contribution also! This is repeated in the beginning of Section 2.2 to justify the regularised version of the estimator but once again no evidence was provided to support the claim. The obtained bound for the regularised algorithm also has these two terms and thus it is not clear why the regularised algorithm is supposed to work better. \\n\\n The paper may want to clearly state the differences between the proposed algorithm and that of Lipton et al. and also between the obtained error bounds. The paper states that it achieves a k*log(k) improvement over Lipton et al. bounds but as fair as I can see this improvement is achieved only when h_0 is an ideal estimator. 
Furthermore, Lipton et al.\\u2019s bounds are linear in k while the proposed bounds replace this term with log(k/delta) so that when \\\\delta is small, ie the bound holds with high probability, the bound becomes much worse. I would suggest to add a brief discussion on the relationship between the two to better highlight the original contribution of the paper. \\n\\n The proofs are quite badly written with many lacking results used to move from one inequality to another. For instance, Lemma 2 is proved using the theorem 1.4[Matrix Bernstein] and dilation technique from Tropp but it is not clear which results the authors are using in particular; Theorem 1.4 is related to the largest eigenvalue of the sum of matrices while the authors obtain an inequality for the norm of the sum without any further comment on how this transition was made. Also, I do not see why delta is smaller than 1/2 in Lemma 2. \\n\\n\\n*Minor comments:\\n\\n - p.1: expected have -> expected to have\\n - p.4: we are instead only gave access -> given access to \\n - I do not understand Figure 1. Should it be n_q*n_p on the y axis ?\\n - The inequality for n_q next to Figure 1 is derived from the bound (6). Why it is independent of k?\\n - Why the authors choose to the black box predictor h0 to be a two-layer fully connected neural? Is there any particular reason to use this classification model?\\n\\n[1] Class Proportion Estimation with Application to Multiclass Anomaly Rejection, AISTATS14\\n[2] Mixture Proportion Estimation via Kernel Embeddings of Distributions, ICML16\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SylCrnCcFX | Towards Robust, Locally Linear Deep Networks | [
"Guang-He Lee",
"David Alvarez-Melis",
"Tommi S. Jaakkola"
] | Deep networks realize complex mappings that are often understood by their locally linear behavior at or around points of interest. For example, we use the derivative of the mapping with respect to its inputs for sensitivity analysis, or to explain (obtain coordinate relevance for) a prediction. One key challenge is that such derivatives are themselves inherently unstable. In this paper, we propose a new learning problem to encourage deep networks to have stable derivatives over larger regions. While the problem is challenging in general, we focus on networks with piecewise linear activation functions. Our algorithm consists of an inference step that identifies a region around a point where linear approximation is provably stable, and an optimization step to expand such regions. We propose a novel relaxation to scale the algorithm to realistic models. We illustrate our method with residual and recurrent networks on image and sequence datasets. | [
"robust derivatives",
"transparency",
"interpretability"
] | https://openreview.net/pdf?id=SylCrnCcFX | https://openreview.net/forum?id=SylCrnCcFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJxlHFwdlV",
"SJljqe6RyV",
"SkguX-b90m",
"H1ebgb-9CQ",
"S1g2tfHURX",
"SJenv7uOpm",
"HyeqaWu_67",
"HkeVYZOO67",
"SygRbZudpQ",
"ByeYOl_OT7",
"HJlUOxtgTX",
"rkgbycE5hQ",
"H1eDBfaunm",
"HylATdMS27"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545267512308,
1544634515280,
1543274784035,
1543274728977,
1543029379921,
1542124388222,
1542123969803,
1542123899778,
1542123782165,
1542123632561,
1541603438262,
1541192153177,
1541095999208,
1540856005640
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1591/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1591/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1591/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1591/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1591/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1591/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1591/AnonReviewer1"
]
],
"structured_content_str": [
"{\"comment\": \"Thanks for the enlightening paper. I believe there is one missed relevant work: \\\"Deep Defense: Training DNNs with Improved Adversarial Robustness (arXiv:1803.00404, NeurIPS 2018)\\\" which also aims at enlarging the l_p margin.\", \"title\": \"Missing relevant work\"}",
"{\"metareview\": \"The paper aims to encourage deep networks to have stable derivatives over larger regions under networks with piecewise linear activation functions.\\n\\nAll reviewers and AC note the significance of the paper. AC also thinks this is also a very timely work and potentially of broader interest of ICLR audience.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Novel work, and potentially of broader interest\"}",
"{\"title\": \"Official response\", \"comment\": \"The manageable size of MNIST was beneficial for parameter analysis and to illustrate properties of the method, while the more challenging Caltech-256 (299x299x3 dimensions) with ResNet was used to demonstrate scalability. Our method is not limited nor specifically tailored to image classification (cf. our sequence dataset example). While additional experiments (e.g., using the referred datasets) would always help, we did not see it as necessary.\"}",
"{\"title\": \"Official response\", \"comment\": \"Thank you for asking. As elaborated in the introduction (3rd paragraph), our goal of establishing gradient stability is different from adversarial learning (e.g., output stability as in the referred papers).\\n\\nTechnically, our approach is indeed relevant to the solution of (Croce et al.). We will be happy to discuss and cite the paper in the camera ready version. However, please note that (Croce et al.) was posted after the ICLR submission deadline, so it cannot be viewed as prior work (it\\u2019s the other way around).\"}",
"{\"comment\": \"Why didn't you conduct any experiments on CIFAR10 or CIFAR100 or ImageNet, and test the acc and #CLR as well as the margin ? The experiment on the toy dataset MNIST is not convincing.\", \"title\": \"Lack of experiments on CIFAR10 or CIFAR100 or ImageNet\"}",
"{\"title\": \"Thank you for the comments. Please note that general responses are provided above.\", \"comment\": \"Major 1: I would prefer some elaborations on why the relaxation proposed in Eqn (8) serves to encourage the margin of L2 ball? What's the working mechanism or heuristic behind this relaxation? This is supposedly one of the key techniques used in optimization, yet remains obscure.\\n\\nWe will add more explanation to make it clearer. The working mechanism is based on the theoretical bounds and Lagrangian relaxations. Briefly, the derivation proceeds in two parts. In the first part, Lemma 8 (Eq. 5) simply rewrites Eq. (4) in a constraint form but needs to assume that a non-zero margin exists. To get Eq. (6), we use the fact that now |z^i_j|>=1 and thus the numerator in the margin in Eq. (5) can be lower bounded by 1, implying an upper bound on the overall learning objective. In the second part, we note that Eq. (6) is now akin to a hard-margin SVM/TSVM (see [1]). The constraint can be relaxed to a Lagrangian form resulting in Eq. (7) and the TSVM can be analogously relaxed to Eq. (8). To see the correspondences, note that in a single neuron case the gradient \\\\nabla_x z^i_j and z^i_j in Eq. (7) simply correspond to w and w^T x + b in Eq. (8).\\n\\n[1] Boser, Bernhard E., Isabelle M. Guyon, and Vladimir N. Vapnik. \\\"A training algorithm for optimal margin classifiers.\\\" Proceedings of the fifth annual workshop on Computational learning theory. ACM, 1992.\", \"major_2___1\": \"On empirical gains, the author(s) claimed that \\\"about 30 times larger margins can be achieved by trading off 1% accuracy.\\\" It seems that consistently yields inferior prediction accuracy.\\n\\nThe performance might indeed degrade if one really pursues an extremely large linear region (e.g., 30 times larger than a vanilla model). However, in Table 1 and 2, we also show that for reasonable parameter choices, our loss can achieve the same accuracy with more robust derivatives. For example, in MNIST, our approach exhibits 10 times larger locally linear regions given the same accuracy. For the ResNet experiment in Table 3, our approach even improves the accuracy. \\n\\nThe more important message we want to convey is that in some cases when robustness of derivatives is a requirement (e.g., a robust explanation for a sensitive decision), we provide a way to set the trade-off.\", \"major_2___3\": \"A better job should be done to validate the claim `` The problem we tackle has implications to interpretability and transparency of complex models. ''\\n\\nWhile this has been partially answered in the general comments, it's an important point, so we provide a more detailed answer here. Our claim of implication to transparency is supported by our results on stability of gradient-based explanations (Section 5.3).\\n\\nThe gradient saliency map is a well-known interpretability method for deep models. Our inference solution verifies the $l_p$ margins where such interpretation is guaranteed to be stable, and our learning algorithm stabilizes the explanations as validated through the $l_p$ margins and gradient distortions.\", \"minor_1\": \"Just to clarify, does the | - | used in Eqn (9) for |I(x,\\\\gamma)| denote counting measure?\\n\\nYes, this is correct. We will add a description below Eq. (9) to clarify it. Thank you for the comment.\", \"minor_2\": \"I do not see the necessity of introducing Lemma 7 in the text. 
Please explain.\\n\\nThank you for the question, we have updated the paper to address it.\\n\\nLemma 7 is used in Table 1 to compute the number of complete linear regions (#CLR). As mentioned in the last paragraph of Section 5.1, \\u201cThe lower #CLR in our approach than the baseline model reflects the existence of certain larger linear regions that span across different testing points\\u201d, so it serves as an indirect measurement for the size of linear regions.\", \"minor_3\": \"Lemma 8, ``... then any optimal solutions for the problem is also optimal for Eqn (4). '' Do you mean ``the following problem'' (Eqn (5))?\\n\\nYes, it is correct. We have updated the paper to address it.\"}",
"{\"title\": \"Thank you for the comments. Please note that general responses are provided above.\", \"comment\": \"Q: missing relevant recent work\\n\\nThanks for pointing out this work. We will read it and then add it to our related work.\"}",
"{\"title\": \"Thank you for the comments. Please note that general responses are provided above.\", \"comment\": \"Q5. The visualizations show the stability properties nicely, but a bit more explanations of those figures would help the readers quite a bit.\\n\\nThanks for the comment. We can certainly add more explanations about the figures. Is there a specific figure you were referring to? \\n\\nQ6. While I understand some of the feasibility issues associated with other existing methods, it would be interesting to try to compare performance (if not exact performance, the at least loss/gradient surfaces etc.) with some of them.\\n\\nWe are not aware of any directly comparable existing method for establishing robust derivatives, so we focus on an ablation setting comparing a vanilla loss with the proposed robust loss in various circumstances (FC networks, RNN, and ResNet). Existing methods using activation patterns (e.g., adversarial defense in (Wong & Kolter, 2018)) are not directly comparable to our work. We expand regions where gradients are invariant. However, gradients can be large or small even if invariant over a larger region. In contrast, any large gradient will likely lead to an adversarial example.\"}",
"{\"title\": \"General Response - 3\", \"comment\": \"3. Basic complexity analysis or running time comparison for gradient computation. (R1) / The gradient-based penalties suffer from heavy computational overhead. How much drain does this extra gradient penalty impose on the training efficiency. (R2)\\n\\nCondensed versions of the complexity analysis and empirical running time have been added to the paper. \\n\\nI. complexity\\n\\nWe assume that (1) parallel computation does not incur any overhead, and (2) batch matrix multiplication takes a unit operation. The notation is consistent with the paper in that M refers to the number of hidden layers and N_i refers to the number of neurons in the i^th layer. \\n\\nIn short, for a batch of samples,\\n0. It takes M operations for a forward pass up to the last hidden layer.\\n1. our perturbation algorithm take 2M operations to compute the gradients of all the neurons.\\n2. Straightforward back-propagation takes \\\\sum_{i=1}^M [ 2i x N_i ] operations to compute the gradients of all the neurons. Note that vanilla back-propagation requires (# of neurons) sequential calls, since it cannot be parallelized across neurons for this type of gradients. The details are in the updated paper draft. \\n\\nThe proposed approach is then, to our best knowledge, the first algorithm that is architecture-agnostic and has a tractable complexity to compute the gradient for all the neurons.\\n\\nII. running time\\n\\nWe measure the running time for the 4-layer FC networks on MNIST. To accurately analyze the difference, we report the running time for performing a complete mini-batch gradient descent step (from the initial forward pass to the final gradient update) in each iteration:\\n\\nIt takes\\n1. Vanilla loss: \\t\\t\\t\\t\\t\\t 0.00129 sec\\n2. Full ROLL loss computed by back-propagation: \\t\\t0.31185 sec\\n3. Full ROLL loss computed by perturbation: \\t\\t\\t0.02667 sec\\n4. Approximate ROLL loss computed by perturbation: \\t0.00298 sec\\non average for a complete mini-batch gradient descent update. \\n\\nThe full ROLL loss refers to Eq. (9) (gamma=100), and the approximate ROLL loss refers to Eq. (10), where we use 3 samples (1 / 256 input dimensions) to approximate the ROLL loss. The accuracy / median l_2 margin of the approximate ROLL loss (0.9782 / 0.0094) is comparable to the full ROLL loss (0.9761 / 0.0092). In total, it takes the vanilla loss 42.49 seconds and the approximate ROLL loss 69.94 seconds to complete training for 20 epochs. Overall, our approach is 2.3 times slower in gradient update, and only 1.6 times slower in total than the vanilla loss. The approximate loss is about 9 times faster than the full loss. Compared to back-propagation, our perturbation algorithm achieves about 12 times empirical speed-up. In summary, the computational overhead of our method is minimal compared to vanilla training, which is achieved by the perturbation algorithm and the approximate loss.\\n\\nNote that the full and approximate ROLL losses actually have the same number of operations (in parallel) but their running times are different because parallel batch matrix multiplication does not take constant time for different batch sizes in practice. 
If we simply use the approximate version to avoid the overhead for large batches, the empirical running time indeed matches our complexity analysis (roughly 2M for the ROLL loss versus M for the vanilla loss).\\n\\nFor training ResNet on Caltech-256, when the sub-samples can fit the GPU memory, it takes less than 1 day to complete training for the approximate ROLL loss. Note that all of our experiments are done on single TITAN X GPU with 12G memory.\"}",
"{\"title\": \"General Response - 1 & 2\", \"comment\": \"We thank all the reviewers for the insightful comments, suggestions and questions. The general responses are provided here, while specific questions are responded individually. Here we focus on the utility of our approach, implications to interpretability, as well as complexity of the perturbation algorithm.\\n\\n1. The utility of doing so is not clearly demonstrated. (R1) / In my opinion, the author(s) failed to showcase the practical utility of their solution. (R2)\", \"we_believe_establishing_robust_derivatives_is_important_in_its_own_right\": \"stable derivatives serve many roles, including interpretability, but require extra effort to achieve in deep models. Note that robustness of explanations is an open problem in the community that has received significant interest over the past year (Ghorbani et al., 2017; Alvarez-Melis & Jaakkola, 2018a). Given this premise, rather than showcasing the utility of gradient stability, we focus on showing that our method yields more robust derivatives across many architectures by measuring margins/gradient distortions. We also included an application of inducing robust explanations in gradient saliency maps.\\n\\n2. If improving the quality or validity of local linearization for explaining predictions is one of the main motivations for this work, showing that the proposed method does so would strengthen the overall message. / The point of gradient visualization in Figure 4 is not clear. (R1) / A better job should be done to validate the claim `` The problem we tackle has implications to interpretability and transparency of complex models. '' (R2) / The adversarial scenarios need to be explained better. (R3)\\n\\nThe goal of the adversarial scenario is to analyze the robustness of gradient explanations either coming from a typical deep model or after our method. We do so by visualizing how the gradient can change (i.e., be distorted) in the worst case in a small neighborhood. In Figure 4, we show that our approach yields more robust gradient saliency maps than a vanilla deep model. Note that the margin analyses in the experiments section yield the same conclusion, since gradients within any established margin remain the same. The proposed inference (certifying the margin) and learning (expanding the margin) algorithms thus directly contribute to interpretability. Indeed, an unstable explanation of a critical decision (e.g., medical/financial/security decisions) would likely be unacceptable. We certify and enlarge the \\u201cmargin of validity\\u201d of such explanations.\"}",
"{\"comment\": \"This seems like an interesting result but I am curious how it compares with other scalable verification and training techniques such as those proposed in Mirman et al's \\\"Differentiable Abstract Interpretation for Provably Robust Neural Networks.\\\", ICML'18 and Croce et al's \\\"\\nProvable Robustness of ReLU networks via Maximization of Linear Regions\\\" and Gowal et al's \\\"On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models\\\"?\", \"title\": \"Comparison to prior scalable provability training\"}",
"{\"title\": \"Very nice work with clear intuition and impressive results\", \"review\": \"1. This is a very relevant and timely work related to robustness of deep learning models under adversarial attacks.\\n\\n2. In recent literature of verifiable/certifiable networks, (linear) ReLU network has emerged as a tractable model architecture where analytically sound algorithms/understanding can be achieved. This paper adopts the same setting, but very clearly articulates the differences between this work and the other recent works (Weng et al 2018, Wong et al. 2018). \\n\\n3. The primary innovation here is that the authors not only identify the locally linear regions in the loss surface but expand that region by learning essentially leading to gradient stability. \\n\\n4. A very interesting observation is that the robustifying process does not really reduce the overall accuracy which is the case of many other methods. \\n\\n5. The visualizations show the stability properties nicely, but a bit more explanations of those figures would help the readers quite a bit.\\n\\n6. While I understand some of the feasibility issues associated with other existing methods, it would be interesting to try to compare performance (if not exact performance, the at least loss/gradient surfaces etc.) with some of them.\\n\\n7. The adversarial scenarios need to be explained better.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"It seems to be an interesting problem but is the solution proposed practical?\", \"review\": \"########## Updated Review ##########\\n\\nThe author(s) have presented a very good rebuttal, and I am impressed. My concerns have been addressed and my confusions have been clarified. To reflect this, I am raising my points to 8. It is a good paper, job well done. I enthusiastically recommend acceptance. \\n\\n################################\\n\\nA key challenge that presents the deep learning community is that state-of-the-art solutions are oftentimes associated with unstable derivatives, compromising the robustness of the network. In this paper, the author(s) explore the problem of how to train a neural network with stable derivatives by expanding the linear region associated with training samples. \\n\\nThe author(s) studied deep networks with piecewise linear activations, which allow them to derive lower bounds on the $l_p$ margin with provably stable derivatives. In the special case of $l_2$ metric, this bound is analytic, albeit rigid and non-smooth. To avoid associated computational issues, the author(s) borrowed an idea from transductive/semi-supervised SVM (TSVM) to derive a relaxed formulation. \\n\\nIn general, I find this paper rather interesting and well written. However, I do have a few concerns and confusions as listed below:\", \"major_ones\": [\"I would prefer some elaborations on why the relaxation proposed in Eqn (8) serves to encourage the margin of L2 ball? What's the working mechanism or heuristic behind this relaxation? This is supposedly one of the key techniques used in optimization, yet remains obscure.\", \"On empirical gains, the author(s) claimed that \\\"about 30 times larger margins can be achieved by trading off 1% accuracy.\\\" It seems that consistently yields inferior prediction accuracy. In my opinion, the author(s) failed to showcase the practical utility of their solution. A better job should be done to validate the claim `` The problem we tackle has implications to interpretability and transparency of complex models. ''\", \"As always, gradient-based penalties suffer from heavy computational overhead. The final objectives derived in this paper (Eqn (7) & Eqn (9)) seem no exception to this, and perhaps even worse since the gradient is taken wrt each neuron. Could the author(s) provide statistics on empirical wallclock performance? How much drain does this extra gradient penalty impose on the training efficiency?\"], \"minor_ones\": [\"Just to clarify, does the | - | used in Eqn (9) for |I(x,\\\\gamma)| denote counting measure?\", \"I do not see the necessity of introducing Lemma 7 in the text. Please explain.\", \"Lemma 8, ``... then any optimal solutions for the problem is also optimal for Eqn (4). '' Do you mean ``the following problem'' (Eqn (5))?\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This compelling theoretical framework could benefit from more applications.\", \"review\": \"The paper considers deep nets with piecewise linear activation functions, which are known to give rise to piecewise linear input-output mappings, and proposes loss functions which discourage datapoints in the input space from lying near the boundary between linear regions. These loss functions are well-motivated theoretically, and have the intended effect of increasing the distance to the nearest boundary and reducing the number of distinct linear regions.\\n\\nMy only concern is that while their method appears to effectively increase the l_1 and l_2 margin (as they have defined it), the utility of doing so is not clearly demonstrated. If improving the quality or validity of local linearization for explaining predictions is one of the main motivations for this work, showing that the proposed method does so would strengthen the overall message. However, I do feel that \\u201cestablishing robust derivatives over larger regions\\u201d is an important problem in its own right. \\n\\nWith the exception of some minor typos, the exposition is clear and the theoretical claims all appear correct. The authors may have missed some relevant recent work [1], but their contributions are complementary. It is not immediately clear that the parallel computation of gradients proposed in section 4.1 is any faster than standard backpropagation, as this has to be carried out separately for each linear region. A basic complexity analysis or running time comparison would help clarify this. I think I am missing the point of the gradient visualizations in figure 4, panels b-e and g-j. \\n\\n\\n[1] Elsayed, Gamaleldin F., et al. \\\"Large Margin Deep Networks for Classification.\\\" arXiv preprint arXiv:1803.05598 (To appear in NIPS 2018).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
H1xAH2RqK7 | Generative Adversarial Models for Learning Private and Fair Representations | [
"Chong Huang",
"Xiao Chen",
"Peter Kairouz",
"Lalitha Sankar",
"Ram Rajagopal"
] | We present Generative Adversarial Privacy and Fairness (GAPF), a data-driven framework for learning private and fair representations of the data. GAPF leverages recent advances in adversarial learning to allow a data holder to learn "universal" representations that decouple a set of sensitive attributes from the rest of the dataset. Under GAPF, finding the optimal decorrelation scheme is formulated as a constrained minimax game between a generative decorrelator and an adversary. We show that for appropriately chosen adversarial loss functions, GAPF provides privacy guarantees against strong information-theoretic adversaries and enforces demographic parity. We also evaluate the performance of GAPF on multi-dimensional Gaussian mixture models and real datasets, and show how a designer can certify that representations learned under an adversary with a fixed architecture perform well against more complex adversaries. | [
"Data Privacy",
"Fairness",
"Adversarial Learning",
"Generative Adversarial Networks",
"Minimax Games",
"Information Theory"
] | https://openreview.net/pdf?id=H1xAH2RqK7 | https://openreview.net/forum?id=H1xAH2RqK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJxQk8Y4gN",
"ryxFGgOcAX",
"rylDOiBc0X",
"rygU8-Cp6X",
"SJlvx8RP67",
"rJlZpem8p7",
"H1xyb2JXp7",
"HyluRikQpm",
"HklGflyXTm",
"B1lkEkyQ6X",
"S1xBl0nJp7",
"SJggqV6v37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1545012698790,
1543303185309,
1543293807125,
1542476110034,
1542084078684,
1541972153065,
1541762038641,
1541761999682,
1541758985971,
1541758759473,
1541553644915,
1541031048135
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1590/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1590/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1590/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1590/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1590/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1590/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"While there was some support for the ideas presented, the majority of the reviewers did not think the submission was ready for presentation at ICLR. Concerns raised included that the experiments needed more work, and the paper needs to do a better job of distinguishing the contributions beyond those of past work.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not ready for publication at ICLR\"}",
"{\"title\": \"Summary of changes in the revised paper\", \"comment\": \"We thank all the reviewers and readers for their feedback. We have updated our paper on OpenReview. We list below the major changes we have made to the paper.\\n\\n1. We have rewritten \\u201cour contributions\\u201d subsection to highlight our main contributions.\\n\\n2. We have moved the \\u201crelated work\\u201d section to the introduction and highlighted the major differences between our work and other related work. We have also included the references on using GAN to generate synthetic fair dataset. A detailed literature review is provided in Appendix A. \\n\\n3. We have rewritten Theorem 1 and added Corollary 1 to incorporate a general alpha-loss function, which provides a continuous interpolation between a hard decision adversary (using 0-1 loss) and a soft decision adversary (using log-loss). We have also rewritten proposition 1 and its proof to provide a clearer explanation of how GAPF enforces demographic parity. \\n\\n4. We have included more detailed explanation of the constrained minimax game and how to enforce the distortion constraint in section 2. \\n\\n5. We have rewritten section 3 to make our results of the Gaussian mixture model more accessible. We have also added numerical results of the 32-dimensional Gaussian mixture model. \\n\\n6. We have moved the technical details of the decorrelator and adversary network architecture in section 4 to Appendix E. We have also included more simulation results of the GENKI dataset. \\n 6.1 We have added the false error rate of the facial expression classifier learned from the private/fair representation to show that GAPF enforces fairness (Table 1 and 2).\\n 6.2 We have included the mutual information estimation of the Transposed Convolution Neural Network Decorrelator.\\n 6.3 We have plotted the gender classification accuracy as a function of the expression classification accuracy to illustrate the tradeoff between enforcing privacy/fairness and preserving the utility of the representation. \\n\\n7. We have fixed some grammatical errors and typos. We have also corrected some technical terms to make the paper more accessible.\"}",
"{\"title\": \"Authors' Response\", \"comment\": \"Dear AnonReviewer4,\\n\\nThank you for the detailed comments and feedback. We would like to address your concerns below.\\n\\n**Related work in the first section**\\n\\nThank you for the comment. We have moved the related work section to the introduction to make it more accessible. We have also provided a more detailed literature review section in Appendix A. \\n\\n**GMM study case**\\n\\nFor the multi-dimensional Gaussian mixture data model, we derive game-theoretically optimal decorrelation schemes and compare them with those that are directly learned in a data-driven fashion to show that the gap between theory and practice is negligible. The goal here is not to show how well we can learn a machine learning model from Gaussian mixture data. Our goal is to provide formal verification of how fair/private schemes that are learned by competing against computational adversaries with a fixed architecture generalize against adversaries with more complex architecture. If we have a smaller number of samples, we expect to see the performance of the learned decorrelation scheme degrades. We have included the numerical results of the 32-dimensional Gaussian mixture data in Section 3 of the revision. We observe that the learned decorrelation scheme performs as well as the game-theoretically optimal one for the 32-dimensional Gaussian mixture data. \\n\\n**Important parts in the appendix**\\n\\nWe have included more descriptions of the constrained minimax game and how to enforce the distortion constraint in the alternate minimax algorithm in Section 2. We have also included the theoretical results and a detailed description of the GMM model in Section 3. \\n\\n**Comparison with other papers**\\n\\nExisting relevant works (such as Edwards & Storkey 2016) have two limitations: (1) they require the designer to have a specific classification task in mind at training time, and (2) they only give empirical guarantees against computationally finite adversaries. Indeed, it is natural and important to ask: (1) how can we obtain representations that work well without having a specific learning task in mind at training time, and (2) how well do the learned representations perform against more powerful adversaries? Our work addresses both limitations. To address the first, we introduce a constrained minimax formulation that ensures that the utility is preserved by enforcing a distortion constraint. To address the second, we show (in Theorem 1, Corollary 1, and Proposition 1) how our framework recovers an array of (operationally motivated) information-theoretic notions of information leakage. We critically leverage this connection to show that representations learned in a data-driven fashion against a finite adversary are as good as representations that are game-theoretically optimal. We further introduce a systematic approach using mutual-information estimation to certify that the learned representations will perform well against unseen, more complex adversaries. We encourage the reviewer to take a look at our response to AnonReviewer3, where we listed our contributions in great detail. \\n\\n**On a lighter note**\\n\\nThank you for the feedback. 
We have included these comments in the revision.\", \"a\": \"We have removed \\u201cstate-of-the-art\\u201d in our paper.\", \"b\": \"To the best of our knowledge, our approach is the first to apply \\u201cdistortion constraints\\u201d to learn a fair/private representation of datasets that can be used for a variety of learning tasks. We have changed \\u201cin the machine learning community\\u201d to \\u201cin previous works\\u201d to avoid the confusion.\"}",
"{\"title\": \"Unconvincing newness, but a good GAN model to understand Private Presententation Learning,\", \"review\": \"The paper authors provide a good overview of the related work to Private/Fair Representation Learning (PRL). Well written, The theoretical approach is extensively explained and the first sections of the paper are easy to follow. The authors demonstrate the model performance on or the GMM, the comparison between theoretical and data driven performance is a good case study to understand the PRL.\\n\\nWe usually expect to see related work in the first sections, in this case it's has been put just before the conclusion. It can be still justified by the need o introduce the PRL concepts before comparing with other works.\\nThe GMM study case is interesting, but incorporates strong assumptions. Moreover, for a 4 or 8 dimensional GM, 20K data points are more than enough to infer the correct parameter. It would have been more useful if it was used to comapre between the mentioned methods in \\\"Related Work\\\".\", \"there_seems_to_be_important_parts_of_the_paper_that_has_been_put_in_the_appendices\": \"how to solve the constrained problem, Algorithm.... Similarly, some technical details were expanded in the paper body (Network structure).\\n\\nThe authors mentioned the similarities with other works and their model choices that set theirs apart from other. Yet, the paper doesn't provide performance ( accuracy, MI) comparison to other works. There seems to be a strong similarity with Censoring representations with an adversary, Harrison Edwards and Amos Storke (link: https://arxiv.org/abs/1511.05897). Difference : distortion instead of H divergence, non-generative autoencoders.\\n\\nConsequently, I question the novelty of the paper's contribution. Without extensive comparison with other methods and especially to similar ones mentioned in the related work, there is little to say about the \\\"state-of-the-artness\\\". Yet, it is important to acknowledge the visible effort behind the paper and how the author(s) managed to leverage the simplicity and power of GANs.\", \"on_a_lighter_note\": \"A)- the paper mention \\\"state-of-the-art CNNs, state-of-the-art entropy estimators, MI, generative models\\\", for the Machine Learning community, many of these elements have been around for a while now.\\nB)- \\\"Observe that the hard constraint in equation 2 makes our minimax problem different from what is extensively studied in the machine learning community\\\": I would argue it's not an objective statement.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response to Relevant References\", \"comment\": \"Dear Anonymous,\\n\\nThank you very much for these references. We will add them to our related work in the revised version. \\n\\nThe two papers you mentioned focus on generating synthetic non-sensitive attributes and labels which ensure fairness while preserving the utility of the data (predicting the label). The synthetic data is generated by a conditional generative adversarial network (GAN) which generates the non-sensitive attributes-label pair given the noise variable and the sensitive attribute. The utility is preserved by generating data that is very similar to the original data. To ensure fairness, the generator generates data samples such that an auxiliary classifier (discriminator) trained to predict the sensitive attribute from the synthetic data performs as poorly as possible.\", \"fairgan_uses_two_discriminators\": \"one discriminates fake / real non-sensitive attributes-label pair and another discriminates generated data from different sensitive groups. This model ensures demographic parity while preserving data utility (predicting the label). The problem is formulated as an unconstrained minimax game in which the empirical loss function is formulated by a weighted sum of the loss functions of the two discriminators. Fairness GAN is similar to FairGAN. The goal here is to develop a conditional GAN-based model to ensure demographic parity or equality of opportunity in the system by learning to generate a fairer dataset. The authors consider both demographic parity and equality of opportunity as fairness metric. They also formulate the problem as an unconstrained minimax game between the discriminator and the generator. To ensure utility, the Fairness GAN uses three pairs of losses which make sure that the generated non-sensitive attributes-label pair, the non-sensitive attributes alone as well as the non-sensitive attributes conditional on the sensitive attributes to be very similar to the original data. To enforce fairness, they include a pair of losses to encourage either demographic parity or equality of opportunity.\\n\\nThe methods presented in these papers are very different from our method. First, we are focusing on creating representations of the data for a variety of learning tasks. Second, we use a generative model to cleverly injecting noise where it matters to ensure privacy/fairness rather than generate a fairer synthetic dataset. Third, we consider a constrained minimax game in which we use a distortion constraint to preserve the utility of the learned representation for a variety of learning tasks rather than focusing on a particular label. Fourth, we make precise connections between the data-driven adversarial learning framework and the game- and information-theoretic setting (with knowledge of dataset statistics) and show how the change of the loss function in our framework leads to a variety of information-theoretic adversaries with different powers. Furthermore, we use simulations on Gaussian mixture models to show that the learned representations from a finite number of samples and a computationally bounded adversary (neural networks) performs as good as a representation created by the game-theoretic optimal mechanism which assumes knowledge of dataset statistics and infinite adversarial computational power. 
Finally, we propose using mutual information estimators to verify that no adversary (regardless of their computational power) can reliably learn the sensitive attribute from the learned representation. We encourage the reader to read the detailed list of contributions below where we attempt to make this clear to ICLR reviewers. There are different ways for enforcing fairness, and our work presents a framework that aids in achieving this goal. More work is needed to be done in this area.\\n\\nWe thank you again for your comment and bringing these references to our attention.\"}",
"{\"comment\": \"Hi, just pointing out some related papers.\\n\\n1. Xu, Depeng, Shuhan Yuan, Lu Zhang, and Xintao Wu. \\\"FairGAN: Fairness-aware Generative Adversarial Networks.\\\" arXiv preprint arXiv:1805.11202 (2018).\\n2. Sattigeri, Prasanna, Samuel C. Hoffman, Vijil Chenthamarakshan, and Kush R. Varshney. \\\"Fairness GAN.\\\" arXiv preprint arXiv:1805.09910 (2018).\", \"title\": \"Relevant References\"}",
"{\"title\": \"Authors' Response\", \"comment\": \"Dear AnonReviewer1,\\n \\nThank you for the detailed comments and observations. We address your concerns below.\\n\\n**Clarity of section 3** \\n\\nWe are currently working on rewriting section 3 to make it more accessible. This section shows that decorrelation schemes learned in a data-driven fashion against a computationally bounded adversary perform well when evaluated against a maximum a posteriori probability (MAP) adversary that has access to distributional information and knows the applied decorrelation schemes. We evaluate the learned decorrelation scheme in the following three steps:\\n \\n1. We learn the decorrelation scheme in a data driven fashion using synthetic Gaussian mixture data. \\n2. We evaluate the performance of the learned decorrelation scheme under a strong adversary who has access to dataset statistics, knows the learned decorrelation scheme, and can compute the MAP decision rule.\\n3. We compare the performance of the learned scheme with the game-theoretic optimal one. \\n\\nThe first step can be done for any dataset but the last two steps can only be done for data that we have access to its distribution. Since the distribution of a real dataset is very difficult to obtain. We assume that the public variable follows a Gaussian mixture model conditioned on the value of the sensitive variable. In this case, we can compute the game-theoretic optimal decorrelation scheme and the optimal decision rule of the strong adversary. We agree that the conclusion drawn from the Gaussian mixture model is limited and may not generalize to more complex model. But this serves as a good sanity check, especially given that Gaussian mixture models have been used in many areas [1]. \\n\\n**Novelty of this paper**\\n\\nWe would like to list the novelty of this paper and highlight the important contributions as follows. \\n\\n1. All the relevant papers have exclusively focused on showing (via experiments) that this approach works well in practice when you design things with a particular classification task in mind (i.e., in a supervised fashion). This requires having access to additional training labels which may be unavailable during the training phase. Our paper shows (via experiments on medium-sized datasets) that this approach works even when the designer does not want to restrict their attention to one classification task (i.e., in an unsupervised fashion). Indeed, our experiments and simulations show that the learned representations work well on classification tasks that haven't been accounted for. \\n\\n2. We make very precise connections between the data-driven adversarial learning framework and the game- and information-theoretic setting (which assumes that the designer has access to the joint distributions between data and sensitive attributes, and the minimax optimization is performed over all theoretically possible randomized decorrelation and adversarial learning rules). We also show how the change of the loss function in our framework leads to a variety of information theoretic adversaries with different powers. This is an important novelty because it allows us to generalize conclusions that can be made upon learning representations from a finite number of samples against a computationally bounded adversary to the more important setting of infinite samples (i.e., access to distributional information) and infinite adversarial computational power. Indeed, this is explicitly shown in Section 3 for Gaussian mixture models. 
Notice that this section shows that decorrelation schemes that are learned in a data-driven fashion against a computationally bounded adversary perform well when evaluated against a maximum a posteriori probability (MAP) adversary that has access to distributional information and knows the applied decorrelation schemes. Further, this shows that there is no gap between the game-theoretically optimal decorrelation schemes and the ones that are learned via a generative neural network for binary variable S. \\n\\n3. Different from previous works where the objective is modeled as a weighted combination of loss functions and distortion penalty, our formulation is a minimax game subject to a \\\"hard distortion constraint\\\". This allows us to directly limit the amount of distortion added to learn the representation, which is crucial for preserving the utility of the learned representation. Moreover, notice that enforcing the hard distortion constraint calls for a new training process that relies on the Penalty method or Augmented Lagrangian method presented in Appendix C.\"}",
"{\"title\": \"Authors' Response Continued\", \"comment\": \"4. Even though we learn our randomized decorrelation neural networks by training against a specific adversarial neural network, the learned decorrelation scheme performs well when evaluated against unseen (more complex) adversarial architectures. To prove this point, we show that the mutual information (MI) between the learned representations and the sensitive attribute is sufficiently small. A sufficiently small MI implies that no attacker (regardless of their computational power) can reliably learn the sensitive attribute from the learned representation (from Fano\\u2019s inequality [2]). This is again a novelty that didn't appear in prior works.\\n\\n5. While prior works have used a classical, non-generative auto-encoder type architecture for the creation of the fair/censored representations, we harness the power of generative models which have the capability to not only compress the data in certain ways but to also cleverly inject noise where it matters (see Figure 8 and 9 in Appendix E).\\n\\n6. Our set of experiments reveal that the learned representations are provably private/fair. For instance, on the GENKI dataset, we show how the gender has been stripped off by hiding mustaches, facial hair, lip color etc. (see Figure 4). At the same time, we show that the representations are still useful for other classification tasks. \\n\\n**Shortcomings of the distortion function**\\n\\nRegarding the concern about the use of distortion function, we want to point out that we are focusing on publishing datasets or meaningful representations that can be \\u201cuniversally\\u201d used for a variety of learning tasks which may not be known at the stage of publishing. The goal of our distortion constraint is to limit the perturbation of the data when trying to decorrelate the sensitive variable from the public variable. Thus, this distortion constraint preserves the utility of the learned representation of the data for other unknown machine learning tasks. For certain machine learning tasks, it is possible that the features related to the labels are orthogonal to the features related to the sensitive attributes. In this case, there exists a representation which completely removes the sensitive attribute with a very high distortion while the predictive qualities is still equivalent to the original representation. However, for publishing the learned representation of the data, we have to ensure this representation can also be used for a variety of learning tasks. Therefore, we impose a distortion constraint on the data to ensure that the learned representation does not deviate too much from the original data. We will fix the write up and add more discussions to this topic. \\n\\n**Limited experiments and some further discussions**\\n\\nYou are correct that our simulations are limited (greyscale images and motion sensor data). We are currently working on presenting more simulation results and will post a revised version with new experimental results and detailed discussions of our contributions and shortcomings of this approach soon.\", \"references\": \"[1] Kazuho Watanabe and Sumio Watanabe, Stochastic complexities of Gaussian mixtures in variational Bayesian approximation, Journal of Machine Learning Research, 7:625\\u2013644, 2006.\\n\\n[2] Thomas M Cover and Joy A Thomas, Elements of information theory. John Wiley & Sons, 2012.\"}",
"{\"title\": \"Authors' Response\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you for the detailed comments and observations. We are happy you found our paper and analysis interesting. We understand there is room for improvement in the write up. We are currently working on refining it; even as we do so, we respond here to your comments to address as precisely as we can.\\n\\n**Confusion about the term \\u201crepresentation learning\\u201d**\\n\\nWe agree that the term \\u201clearning private and fair representations\\u201d might be confused with the widely studied \\u201crepresentation learning\\u201d problem -- which we are not tackling in this work. While our framework can be generalized to a setting in which we can learn an arbitrary representation using an encode-decode structure, we are primarily interested in learning representations of the data (of the same dimension/shape/structure) that are fair and private. Thank you for pointing this out. We will fix our writeup to clarify things.\\n\\n**Difference between our work and Edwards & Storkey 2015 [1]**\\n\\nOur work departs (quite significantly) from other related works. Here is a list of the important differences. \\n\\n1. Our framework is not a special case of Edwards & Storkey 2015. For starters, our formulation is a minimax one subject to a \\\"hard distortion constraint\\\". Their formulation is a weighted combination of three loss functions, and zeroing out one of them (the one that measures how well you do in a given classification task of interest) does not recover our formulation because of the non-convexity/concavity of the minimax problem with respect to the decorrelator/adversary neural network parameters. The hard distortion constraint allows us to directly limit the amount of distortion added to learn the private/fair representation, which is crucial for preserving the utility of the learned representation. Moreover, notice that enforcing the hard distortion constraint calls for a new training process that relies on the Penalty method or Augmented Lagrangian method presented in Appendix C.\\n\\n2. All the relevant papers have exclusively focused on showing (via experiments) that this approach works well in practice when you design things with a particular classification task in mind (i.e., in a supervised fashion). This requires having access to additional training labels which may be unavailable during the training phase. Our paper shows (via experiments on medium-sized datasets) that this approach works even when the designer does not want to restrict their attention to one classification task (i.e., in an unsupervised fashion). Indeed, our experiments and simulations show that the learned representations work well on classification tasks that haven't (at all) been accounted for in the training process.\\n\\n3. We make very precise connections between the data-driven adversarial learning framework and the game- and information-theoretic setting (which assumes that the designer has access to the joint distribution between data and sensitive attributes, and the minimax optimization is performed over all theoretically possible randomized decorrelation and adversarial learning strategies). We also show how the change of the loss function in our framework leads to a variety of information-theoretic adversaries with different powers. 
This is an important novelty because it allows us to generalize conclusions that can be made upon learning representations from a finite number of samples and a computationally bounded adversary to the more important setting of infinite samples (i.e., access to distributional information) and infinite adversarial computational power. Indeed, this is explicitly shown in Section 3 for Gaussian mixture models. Notice that this section shows that decorrelation schemes that are learned in a data-driven fashion against a computationally bounded adversary perform well when evaluated against a maximum a posteriori probability (MAP) adversary that has access to distributional information and knows the applied decorrelation schemes. Further, this shows that there is no gap between the game-theoretically optimal decorrelation schemes and the ones that are learned via a generative neural network for binary variable S. This critical piece where one investigates what guarantees we can get against more potent adversaries is missing from prior works. \\n\\n4. Even though we learn our randomized decorrelation neural networks by training against a specific adversarial neural network, the learned decorrelation scheme performs well when evaluated against unseen (more complex) adversarial architectures. To prove this point, we show that the mutual information (MI) between the learned representations and the sensitive attribute is sufficiently small. A sufficiently small MI implies that no attacker (regardless of their computational power) can reliably learn the sensitive attribute from the learned representation (from Fano\\u2019s inequality [2]). This is again a novelty that didn't appear in prior works.\"}",
"{\"title\": \"Authors' Response Continued\", \"comment\": \"5. While prior works have used a classical, non-generative auto-encoder type architecture for the creation of the fair/censored representations, we harness the power of generative models which have the capability to not only compress the data in certain ways but to also cleverly inject noise where it matters. (see Figure 8 and 9 in Appendix E)\\n\\n6. Our experiments reveal that the learned representations are private/fair even to humans. For instance, on the GENKI dataset, we show how the gender has been stripped off by hiding mustaches, facial hair, lip color etc. (see Figure 4). At the same time, we show that the representations are still useful for other classification tasks. \\n\\n**Confusion in demographic parity subject to the distortion constraint**\\n \\t\\nWe would like to clarify what we meant by \\\"demographic parity subject to a distortion constraint.\\\" It is well known in the fairness community that enforcing demographic parity (or other notions of fairness) conflicts with the learning of well-calibrated classifiers ([3,4]). To circumvent this issue, we chose to \\\"partially\\\" decorrelate the data up to an allowed distortion. This helps in ensuring that the learned representations are useful in practice for learning good classifiers, while limiting the underlying correlations with the sensitive attributes. Our formulation implies demographic parity if the distortion budget is set to infinity (see the analysis in Appendix B, proof of proposition 1). \\n\\n **Introduce S is binary**\\n\\nWe would like to emphasize that the proposed framework is general and can be used for non-binary sensitive variable. However, in the theoretical analysis, we only consider binary sensitive variable. The analysis can be generalized to the non-binary case. Furthermore, we also consider non-binary sensitive variable in our simulation (the HAR dataset). We will fix our writeup to clarify this.\\n\\nWith all of the above in mind, we hope we have made a case for the innovation in our work and convinced you to reevaluate your assessment of our work. We are happy to further discuss and clarify any concerns you may still have.\", \"references\": \"[1] Harrison Edwards and Amos Storkey, Censoring representations with an adversary, In Proceedings of the International Conference on Learning Representations, San Juan, Puerto Rico, May 2016.\\n\\n[2] Thomas M Cover and Joy A Thomas, Elements of information theory, John Wiley & Sons, 2012.\\n\\n[3] Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Richard Zemel, Fairness through awareness. In Proceedings of the 3rd innovations in theoretical computer science conference, pp. 214\\u2013226. ACM, 2012.\\n\\n[4] Moritz Hardt, Eric Price, Nathan Srebro, Equality of opportunity in supervised learning. In Advances in neural information processing systems, pp. 3315\\u20133323, 2016.\"}",
"{\"title\": \"Interesting direction and formulation but no enough novelty\", \"review\": \"This paper present an adversarial-based approach for private and fair representations. This is done by learned distortion of data that minimises the dependency on sensitive variable while the degree of distortion is constrained. This problem is important, and the analysis from game-theory and information theory perspectives is interesting. However, the approach itself is similar to Edwards & Storkey 2015, and I find the presentation of this paper confusing at a few points.\\n\\nFirst, while both the title and abstract suggest it is about learning representation, the approach might be better considered as data-augmentation. As described a bit later: \\\"...modifying the training data is the most appropriate and the focus of this work\\\". This contradiction with more commonly accepted meaning of representation learning (learning abstract/high level representation of data) is confusing.\\n\\nAlthough the authours argued this work is different from Edwards & Storkey 2015, I think they are quite similar. The presented method is almost a special case of this previous work: it seems that one can obtain this model by modifying Edwards & Storkey's model as follows (referring to the equations in Edwards & Storkey's paper): (1) removing the task (Y) dependent loss in eq. 9. (2) assume the encoder transforms X to the same data space so the decoder can be removed, so eq. 7 become equivalent to the distortion measure in this paper. There are other small differences, such as adding noise and the exact way to impose constraint, but I doubt whether the novelty is significant in this case.\", \"other_places_that_are_unclear_include\": \"proposition 1 -- what does \\\"demographic parity subject to the distortion constraint\\\" mean? demographic parity was defined earlier as complete independence on sensitive variable, so how can \\\"complete independence\\\" subject to a constraint? In addition, it would be helpful introduce S is binary. This information was delayed to section 3 after the cross-entropy loss that assumes binary S was presented.\\n\\nOverall, I think this paper is interesting, and the analysis offers insights into related areas. However, the novelty is not enough for acceptance at ICLR, and the presentation can be improved.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Formalization of data driven GAN driven fairness methods.\", \"review\": \"The authors describe a framework of how to learn a \\\"fair\\\" (demographic parity) representation that can be used to train certain classifiers, in their case facial expression and activity recognition. The method describes an adversarial framework with a constraint that bounds the distortion of the learned representation compared to the original input.\", \"clarity\": \"The paper is well written and easy to follow. The appendix is rather extensive though and contains some important parts of the paper, though the paper can be understood w/o it.\\n\\nI didn't quite follow Sec 3. It is a bit sparse on the details and the final conclusion isn't entirely clear. It also isn't clear to me how general the conclusions drawn from the Gaussian mixture model are for more complex cases.\", \"novelty\": \"Adversarial fairness methods are not new, but in my opinion the authors do a good job of summarizing the literature and formalizing the problem. I am not fully familiar with the space to judge if this is enough novelty.\\n\\nUsing the distortion constraint is interesting and seems to work according to the experiments. Generally though, I think that distortion can be a very restrictive constraint. One could imagine representations with a very high distortion (e.g. by completely removing the sensitive attribute) and predictive qualities equivalent to the original representation. Some further discussion of this would be good.\", \"experiments\": \"The experiments are somewhat limited, but show the expected correlations (e.g. distortion vs predictiveness). \\n\\nOverall, I do believe that this work is in the right direction in this more and more popular area of great importance. I also think that contributions compared to other works could be made more clear, as well as additional experiments and discussions of the shortcomings of this approach may be added.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HJeABnCqKQ | Generative Adversarial Self-Imitation Learning | [
"Junhyuk Oh",
"Yijie Guo",
"Satinder Singh",
"Honglak Lee"
] | This paper explores a simple regularizer for reinforcement learning by proposing Generative Adversarial Self-Imitation Learning (GASIL), which encourages the agent to imitate past good trajectories via generative adversarial imitation learning framework. Instead of directly maximizing rewards, GASIL focuses on reproducing past good trajectories, which can potentially make long-term credit assignment easier when rewards are sparse and delayed. GASIL can be easily combined with any policy gradient objective by using GASIL as a learned reward shaping function. Our experimental results show that GASIL improves the performance of proximal policy optimization on 2D Point Mass and MuJoCo environments with delayed reward and stochastic dynamics. | [
"gasil",
"generative adversarial",
"learning",
"past good trajectories",
"rewards",
"simple regularizer",
"reinforcement learning",
"agent",
"generative adversarial imitation",
"framework"
] | https://openreview.net/pdf?id=HJeABnCqKQ | https://openreview.net/forum?id=HJeABnCqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJg4IgUVlV",
"BkgjnbxFyV",
"rJxYb-Ad1E",
"r1lx8KKlkE",
"Sygjk6uFCm",
"rJl86nOFCX",
"Hyl3K2dYA7",
"ryeEwh_t0Q",
"SJgYyP5hhm",
"H1elPtU53m",
"Syelg-g92X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544998988239,
1544253875232,
1544245505294,
1543702855595,
1543240931211,
1543240893884,
1543240835994,
1543240795636,
1541347041365,
1541200215826,
1541173479791
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1589/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1589/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1589/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1589/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1589/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1589/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1589/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1589/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1589/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1589/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1589/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes an extension to reinforcement learning with self-imitation (SIL)[Oh et al. 2018]. It is based on the idea of leveraging previously encountered high-reward trajectories for reward shaping. This shaping is learned automatically using an adversarial setup, similar to GAIL [Ho & Ermon, 2016]. The paper clearly presents the proposed approach and relation to previous work. Empirical evaluation shows strong performance on a 2D point mass problem designed to examine the algorithms behavior. Of particular note are the insightful visualizations in Figure 2 and 3 which shed light on the algorithm's learning behavior. Empirical results on the Mujoco domain show that the proposed approach is particularly strong under delayed-reward (20 steps) and noisy-observation settings.\", \"the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"The paper presents an empirical validation showing improvements over PPO, in particular in Mujoco tasks with delayed rewards and with noisy observations. However, given the close relation to SIL, a direct comparison with the latter algorithm seems more appropriate. Reviewers 2 and 3 pointed out that the empirical validation of SIL was more extensive, including results on a wide range of Atari games. The authors provided results on several hard-exploration Atari games in the rebuttal period, but the results of the comparison to SIL were inconclusive. Given that the main contribution of the paper is empirical, the reviewers and the AC consider the contribution incremental.\\n\\nThe reviewers noted that the proposed method was presented with little theoretical justification, which limits the contribution of the paper. During the rebuttal phase, the authors sketched a theoretical argument in their rebuttal, but noted that they are not able to provide a guarantee that trajectories in the replay buffer constitute an unbiased sample from the optimal policy, and that policy gradient methods in general are not guaranteed to converge to a globally optimal policy. The AC notes that conceptual insights can also be provided by motivating algorithmic or modeling choices, or through detailed analysis of the obtained results with the goal to further understanding of the observed behavior. Any such form of developing further insights would strengthen the contribution of the submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"combination of self-imitation and GAIL - needs more thorough development of conceptual insights\"}",
"{\"title\": \"Response to author's rebuttal\", \"comment\": \"The results provided on the ATARI games are not apples-to-apples with SIL[Oh et.al.], the baseline uses A2C and this paper uses PPO. Moreover, even in these comparisons, SIL[Oh et.al.] performs better on 4/6 games.\\n\\nUpon reviewing the author's responses and the update paper, I decided to keep my score the same. The paper may have good potential, but sufficient empirical evidence is needed to justify the proposed technique.\"}",
"{\"title\": \"thanks for the response\", \"comment\": \"I mostly agree with the rebuttal. I have updated my score, but still think the contribution is not good enough to pass the high bar of ICLR. The paper would be stronger if more theoretical aspects are provided.\"}",
"{\"title\": \"Thanks for the clarification!\", \"comment\": \"Thanks! I buy the explanation.\"}",
"{\"title\": \"Response to R3\", \"comment\": \"We thank Reviewer 3 for the detailed reviews and constructive feedback. We answer some questions specifically raised by the reviewer below. Please check the common response as well as our revised paper.\\n\\n- Regarding good trajectory buffer\\nThank you for pointing out unclear statements in the paper. Ideally, the good trajectories in the buffer are trajectories whose the discounted sum of rewards are greater than (or equal to) that of trajectories generated by the current policy in expectation. In practice, we proposed to store K-best trajectories (e.g., highest episode returns) in the buffer that the agent has collected during training. We observed that K-best trajectories satisfy the constraint described above with a proper K in all of our experiments. We have revised the paper to make this clear.\"}",
"{\"title\": \"Response to R2\", \"comment\": \"We thank Reviewer 2 for the detailed reviews and constructive feedback. We answer some questions specifically raised by the reviewer below. Please check the common response as well as our revised paper.\\n\\n- Regarding buffer size\\nThere is a trade-off between the number of samples in the buffer and the quality (i.e., average return) of the trajectories in the buffer, because the data in the buffer is collected by the agent as opposed to experts in standard imitation learning setup. More specifically, as the size of buffer increases, the average return of trajectories in the buffer generally decreases, while the samples become more diverse. So, the agent would not perform well if the buffer size is too small (due to lack of samples) or too large (due to low-quality data). To just clarify, the performance does not always decrease as the buffer size decreases (1000 is better than 500 in Figure 7a) in our experiment.\"}",
"{\"title\": \"Response to R1\", \"comment\": \"We thank Reviewer 1 for the detailed reviews and constructive feedback. We answer some questions specifically raised by the reviewer below. Please check the common response as well as our revised paper.\\n\\n\\n- Regarding exploration bonus and sparse reward\\nThank you for the great point. We removed the term \\u201csparse\\u201d to prevent confusion. Our method does not improve the exploratory behavior of the agent. Thus, if the agent fails to reach the sparse goal even once, GASIL would not improve the performance because there is no good trajectory to imitate. For this type of extremely sparse-reward task, advanced exploration methods (e.g., count-based exploration, stein-variational-policy-gradient) would be necessary to discover reward signals, which is not addressed in our paper. However, once the agent reaches the goal, GASIL would exploit such rare experiences and encourage the agent to reproduce the same behavior much more easily compared to baseline algorithms (e.g., PPO). A similar discussion was made by the related work on self-imitation learning [Oh et al.].\"}",
"{\"title\": \"Common response to all reviewers\", \"comment\": \"- Regarding the lack of result on Atari games\\nWe have updated the paper (Appendix A) with results on 6 hard exploration Atari games as discussed in [Oh et al.]. We have observed that PPO+GASIL significantly improves PPO on 3 out of 6 hard Atari games. This shows that GASIL is a useful RL regularizer that can be applied to a variety of domains with rich observations. On the other hand, we observed that the overall result is not clearly better than A2C+SIL [Oh et al.], though this is not a fair comparison as the actor-critic algorithms are different (PPO, A2c). In fact, GAIL [Ho et al.] has not been shown to be efficient on this type of domain. Thus, we conjecture that GASIL is more beneficial than SIL particularly for continuous control as shown in our MuJoCo experiments.\\n\\n- Regarding the lack of theoretical result\\nOur preliminary investigation shows that GASIL can guarantee policy improvement if 1) the average return of the trajectories in the buffer is higher than that of the agent\\u2019s policy, and 2) there exists a valid policy (i.e., occupancy measure) such that the trajectories in the buffer are unbiased samples from it. However, we did not manage to show that there exists a valid policy that can generate any trajectories in the buffer. Thus, we leave this for future work and present GASIL as an empirical study. \\n\\n- Regarding overfitting to local minima and exploration\\nNot only GASIL but also any policy gradient algorithms generally do not guarantee convergence to the global optimal policy and rely on the stochasticity of the policy for exploration (e.g., e-greedy, dithering, entropy regularization). In practice, however, we observed that the buffer in GASIL tends to be constantly updated with better trajectories as the policy improves and uses a form of exploration (e.g., stochasticity encouraged by entropy regularization). Introducing a better form of exploration (e.g., curiosity-based exploration bonus, SVPG [Liu et al.]) would improve the performance of any policy gradient algorithms including ours. Thus, we believe that this is an orthogonal research direction.\"}",
"{\"title\": \"No comparison/evaluation on discrete action tasks (i.e. ATARI games)\", \"review\": \"[Paper Summary]:\\nThis paper proposes a regularization technique for existing RL algorithms by encouraging them to learn to reproduce the best past trajectories which obtained higher reward than that of current policy. The proposed method in the paper has the same high-level idea as \\\"Self-imitation learning\\\" [Oh et.al. ICML 2018] with a different objective. Instead of performing imitation learning to distill the knowledge from past best trajectories, this proposes to use inverse reinforcement learning via GAIL objective [Ho and Ermano, 2016]. The best k trajectories from past experience are stored to train a discriminator which is then used to augment the external reward function with a discriminator reward.\\n\\n[Paper Strengths]:\\nThe paper combines ideas from GAIL and self-imitation learning to propose a method that leverages past best trajectories via inverse-RL. This combination allows one to interpret self-imitation of best trajectories as a mechanism for \\\"reward shaping\\\" where learned discriminator shapes the environmental reward using past experiences. This is an exciting perspective and needs further discussion.\\n\\n[Paper Weaknesses and Clarifications]:\\n=> This paper is very closely related to self-imitation learning [Oh et.al.], however, there is no theoretical justification provided (unlike [Oh et. al.]) whether the policy learned by optimizing Equation-11 is in anyway related to the optimal policy -- which was the case as shown in [Oh et.al.]. That being said, this is not a requirement for a paper to show theoretical justification as long as the paper justifies given approach with ample empirical evidence.\\n=> The main comparison point for the proposed approach, \\\"GASIL\\\", is \\\"SIL\\\" [Oh et.al.]. This paper provides a good comparison on continuous control tasks on Mujoco where \\\"GASIL\\\" performs slightly better than \\\"SIL\\\" in 3 out of 6 environments. However, \\\"SIL\\\" [Oh et. al.] showed extensive experiments on all Atari Games + Mujoco tasks. Since the proposed approach is mainly empirically motivated, the experiments should at least show a comparison on all the environments of the closely-related prior work. It would be much more convincing to see a bar chart across all 48 Atari Games showing relative improvement of \\\"GASIL\\\" over \\\"SIL\\\", as shown in Figure-4 of [Oh et. al.].\\n=> Other concerns:\\n - The paper mentions on multiple occasions that the proposed method would handle delayed and \\\"sparse\\\" reward. However, it is not clear how can past best trajectories help with \\\"sparse\\\" rewards (\\\"delayed-dense-rewards\\\" seems alright, but they are not the same as \\\"sparse\\\"!). For instance, suppose the agent gets only terminal-reward in a maze. In such a case, the agent would need to rely on some form of exploration bonus (count-based, curiosity etc.) to reach the sparse-goal even once.\\n - What prevents the learned policy from over-fitting to the local minima of the \\\"locally\\\" best trajectories seen so far?\\n\\n[Final Recommendation]:\\nI request the authors to address the comments raised above. The paper has good potential, but sufficient empirical evidence is needed to justify the proposed technique. 
If the results on all Atari games can be included and shown to improve over \\\"SIL\\\", I would update my final rating.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good natural algorithm but significance and understanding of when to apply is unclear.\", \"review\": \"Summary:\\n\\nThis paper proposes a self-imitation learning technique which modifies GAIL such that top-k trajectories with high reward found by the agent are kept in a buffer (and updated as learning goes on) such that the discriminator tries to distinguish between trajectories generated by the generator and those in the buffer while the generator tries to fool the discriminator by trying to imitate trajectories present in the buffer.\", \"experiments_are_shown_on_two_domains\": \"1. a simple 2D domain where the agent must avoid orange circles (negative reward) and touch green and blue circles which yield positive reward and 2. on MuJoCo against PPO and variants of PPO as baselines where it is shown that GASIL performs better (even under increased stochastic noise in the dynamics.)\", \"comments\": \"- Generally well-written and easy to understand. Thanks!!\\n\\n- Intuitive algorithm and good experiments with ablation studies on MuJoCo. \\n\\n- My main concern is that the paper while offering a good self-imitation algorithm fails to really shine light on when/why this is expected to work. Especially in the following natural areas:\\n\\na. Why is it that performance decreases as buffer size B increases?\\nb. Why doesn't the policy get stuck imitating the first few good trajectories? the conjecture offered is that policy gradient strongly encourages greedy myopic behavior while GASIL does not. Wouldn't one expect GASIL to suffer more?\\nc. Does GASIL work better on rich observation spaces (e.g. Atari games) as well?\\n\\nWithout good answers (theoretical or empirical) to the above questions it is a bit hard to assess how significant of an improvement GASIL actually is and what is the prescription for using this over non-GAIL style self-imitation learning algorithms?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"combination of GAIL and self-imitation learning, but not convincing\", \"review\": \"This paper presents an incremental extension to the Self-imitation paper by Oh, Junhyuk, et al. The previous paper combined self-imitation learning with actor-critic methods, and this paper directly integrates the idea into the generative adversarial imitation learning framework.\\n\\nI think the idea is interesting, but there remains some issues very unclear to me. In the algorithms, when updating the good trajectory buffer, it is said \\\"We define \\u2018good trajectories\\u2019 as any trajectories whose the discounted sum of rewards are higher than that of the policy\\\". What does \\\"that of the policy\\\" mean? How do you know the reward of the policy?\\n\\nSecond, without defining good trajectories, I don't think Algorithm 1 would work. Algorithm ` 1 misses the part of how to update buffer B. After introducing their own algorithm, the author did not provide much solid proof or analysis for why this self-imitation learning works.\\n\\nIn the experiment section, the author implemented GASIL for various applications and presented reasonable results and compared them with other methods. Nevertheless, without theoretical proof, it is hardly convincing that the results could be consistently reproduced instead of being merely accidental for some applications.\", \"update\": \"The rebuttal resolves some of my concerns. However, I still think the contribution is incremental. The current version looks too heuristic, more theoretical analysis or inspirations need to be added.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SyVpB2RqFX | INFORMATION MAXIMIZATION AUTO-ENCODING | [
"Dejiao Zhang",
"Tianchen Zhao",
"Laura Balzano"
] | We propose the Information Maximization Autoencoder (IMAE), an information-theoretic approach to simultaneously learn continuous and discrete representations in an unsupervised setting. Unlike the Variational Autoencoder framework, IMAE starts from a stochastic encoder that seeks to map each input to a hybrid discrete and continuous representation, with the objective of maximizing the mutual information between the data and their representations. A decoder is included to approximate the posterior distribution of the data given their representations, where a high-fidelity approximation can be achieved by leveraging the informative representations. We show that the proposed objective is theoretically valid and provides a principled framework for understanding the tradeoffs regarding informativeness of each representation factor, disentanglement of representations, and decoding quality. | [
"Information maximization",
"unsupervised learning of hybrid of discrete and continuous representations"
] | https://openreview.net/pdf?id=SyVpB2RqFX | https://openreview.net/forum?id=SyVpB2RqFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkgXsgiT1E",
"Bkg6QFClk4",
"SkeGsXiy1N",
"r1eDov911V",
"Syx8CbKyyE",
"BJxMsuuJyV",
"HJlHo760AX",
"SygKzZnR0X",
"SJxThwjCCQ",
"ryxOw1q2CX",
"S1lkbOFhRm",
"Bke39kQsC7",
"rklyeyXiCQ",
"r1l-arks0X",
"H1eZTkUqRQ",
"BklalyU5AQ",
"HJxEtON50m",
"BJxtpDEqRm",
"Bye3TDn_hX",
"Hye1UdoD2X",
"HkxV_FHCsQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544560795103,
1543723301428,
1543644058185,
1543640991258,
1543635405960,
1543633050495,
1543586716918,
1543581968722,
1543579572573,
1543442272246,
1543440374879,
1543348116254,
1543347943308,
1543333304733,
1543294904648,
1543294708957,
1543288955576,
1543288769176,
1541093316444,
1541023814824,
1540409707908
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1588/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1588/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1588/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1588/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1588/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1588/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a principled modeling framework to train a stochastic auto-encoder that is regularized with mutual information maximization. For unsupervised learning, this auto-encoder produces a hybrid continuous-discrete latent representation. While the authors' response and revision have partially addressed some of the raised concerns on the technical analyses, the experimental evaluations presented in the paper do not appear adequate to justify the advantages of the proposed method over previously proposed ones, and the clarity (in particular, notation) needs further improvement. The proposed framework and techniques are potentially of interest to the machine learning community, but the paper of its current form fells below the acceptance bar. The authors are encouraged to improve the clarify of the paper and provide more convincing experiments (e.g., on high-dimensional datasets beyond MNIST).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A principled modeling framework hindered by inadequate experiments and confusion notation\"}",
"{\"title\": \"regarding the definition of posterior\", \"comment\": \"Thank you for bringing this question up for discussion! Yes, we agree that this is incorrect terminology if we interpret x as the observed data and z as the latent variable. However, an encoder model (e.g. IMAE) and a (generative) latent variable are conceptually different. To be more specific,\\n\\n1) In a latent variable model, like the one used in VAE, the distribution of data x is modeled as p(x) = \\\\int_z p_\\\\theta(x|z)p(z)dz, where the prior p(z) is predefined and the conditional probability p_\\\\theta(x|z) is explicitly modeled. We then estimate the parameter z by maximizing the likelihood function. \\n\\n2) In our IMAE model (and other encoder models that maximize MI [3,4]), z is a stochastic function of x that is explicitly modeled as p_\\\\theta(z|x) through a probabilistic encoder. The model is optimized by maximizing the mutual information I_\\\\theta(x,z) between x and z. In this setting, p_\\\\theta(x|z) can be interpreted as the probability of x given the (observed) representation z, which can be interpreted as the posterior probability. In our model, we assume p(z|x) is gaussian, z is \\u201cobserved\\u201d by interpreting it as the response/output of a gaussian channel in information theory. To be more specific, z = mu + epsilon * sigma can be interpreted as adding scaled gaussian noise \\u201csigma*epsilon\\u201d to the deterministic encoder mean \\u201cmu\\u201d (see section 4.2 in [5]). \\n\\nThe difference is induced by the fundamental conceptual difference between theses two models, and below are some relevant references. We list them here by no means saying the AC\\u2019s concern/confusion regarding the posterior is not right. Indeed, we have had an internal debate of how to make this discussion the most clear, accurate, and mathematically correct \\u2014 We now plan to either write an extensive justification of the overall point of view (even more precise and clear than in [3], which compares to VAE pointing out that the two are opposite) or to revisit the terminology, whatever it takes to make our model clear. We sincerely appreciate you bringing this up. \\n\\n[1] Barber, David and Agakov, Felix. The im algorithm: a variational approach to information maximization. In Proceedings of the 16th International Conference on Neural Information Processing Systems, pp. 201\\u2013208. 2003.\", \"https\": \"//pdfs.semanticscholar.org/f586/4b47b1d848e4426319a8bb28efeeaf55a52a.pdf\\n\\n[2] Variational Information Maximization in Stochastic Environments. Felix Agakov. PhD Thesis. http://aivalley.com/Papers/thesis_1sp.pdf\\n\\n\\n[3] Auto-Encoding Total Correlation Explanation. Shuyang Gao, Rob Brekelmans, Greg Ver Steeg, Aram Galstyan. https://arxiv.org/pdf/1802.05822.pdf\\n\\n\\n[4] Discovering structure in high-dimensional data through correlation explanation. Ver Steeg, Greg, and Aram Galstyan. In Advances in Neural Information Processing Systems, pp. 577-585. 2014.\\n\\n[5] Understanding disentangling in \\u03b2-VAE. Christopher P. Burgess and Irina Higgins et al. https://arxiv.org/pdf/1804.03599.pdf\"}",
"{\"title\": \"The definition of posterior is quite unconventional\", \"comment\": \"Since x is the OBSERVED data and z is the LATENT representation, calling p_\\\\theta(x|z) as the posterior is rather unconventional (if not completely wrong).\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"In our paper, we assumed p(z|x) is factorial, hence p(z_k) can be estimated in the same way used to estimate p(z).\\n\\nTo explicitly estimate p(z) or p(z_k), we need to pass the whole dataset, i.e., p(z) = sum_i p(z|x^i) / N. In this paper, we estimate p(z) using minibatch data with size B ( set to be 512 to 2048 for all methods). Based on this, we approximate all key quantities (the relevant expectations in KL divergence terms in (2)) using MC estimation, which are explicitly calculated via (3) (4) and (5). \\n\\nLet us know if you have any further questions. Thank you!\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"The reason for us to update the numerical section substantially is that we agree with the reviewers, the numerical evaluation in our initial submission is insufficient and cannot provide fairly informative comparisons between IMAE and the other models. We want to add the numerical results (on dSprites) suggested by you, since we do think it can improve the paper and provides more insights. However, we do understand if you prefer to not change the score due those changes.\", \"as_for_your_questions\": \"1) We do appreciate the suggestion! We will add it in a later version. In our current plots, for each row, the initial z is the average value of all z's (from testing data) whose labels are predicted as y by IMAE. Therefore, we are performing latent traverse based on the averaged z associated with each learnt category. \\n\\n2) Var(z_k) does not equal to sigma_k(x), since sigma_k(x) is the variance of the conditional distribution p_\\\\theta(z|x) and Var(z_k) is the variance of the marginal z_k. It can be proven that (Eq (21) in appendix), \\n Var(z_k) = E_x [sigma_k^2(x)] + Var[mu_k(x)] (*)\\nwhere mu and sigma denote the mean and variance of p_\\\\theta(z|x) respectively, and p_\\\\theta(z|x) is assumed to be gaussian. Now assume Var(z_k) is fixed, then \\n I_\\\\theta(x; z) <= 0.5 log Var(z_k) - E_x [log sigma_k^2(x)] (**)\\nVar(z_k) is calculated in (*), the upper bound in (**) is only attained when z_k is gaussian. ((*) and (**) are proved in the proof of proposition 1 in appendix).\\n\\nTherefore, we push p(z_k) towards a scaled normal distribution r(z_k) ~ N(0, alpha) with alpha being some prefixed value for Var(z_k), so as to achieve the maximal I_\\\\theta(x, z_k) among all possible solutions with Var(z_k) = alpha. \\n\\nWe discussed in section 3.2 (also in the previous response to you) about the reasons for restricting Var(z_k) being a (fixed) finite value. \\n \\nLet us know if you have any questions.\"}",
"{\"title\": \"Response to AC\", \"comment\": \"Dear AC,\\n\\nThank you for spending time reading the paper! We do apologize for the confusion caused by the notations used in our paper. The reason for us to use different notations is that the proposed IMAE approach is significantly different from VAE, or say IMAE is the reversed VAE. To be more specific (we focus on z for better clarification) \\n \\n (1) In IMAE, given data x, we start with an probabilistic encoder seek to map it into its representation z, or say we seek to learn the distribution of representation z via a stochastic function p_\\\\theta(z|x) from the very beginning. In this setting, the joint distribution is modeled as p_\\\\theta(x, z) = p(x) p_\\\\theta(z|x) and the posterior is therefore p_\\\\theta(x|z). In contrast, in VAE, the situation is reversed, where it starts from a generative model (decoder) p(x|z), the joint distribution is modeled as p(x,z) = p(z)p(x|z) correspondingly and the posterior is p(z|x). \\n \\n In summary, IMAE starts with a representation learning model, while VAE starts with a generative latent model, which are the opposite of each other. \\n \\n (2) Given (1), p_\\\\theta(z) = \\\\int p(x) p(z|x) is the marginal distribution of z in IMAE, instead of being the prior in VAE. Although, IMAE does push the marginal distribution p_\\\\theta(z) towards a gaussian prior r(z), the reason is to avoid degenerate solution while simultaneously maximizing I_\\\\theta(x; z) . Please see section 3.2 and our response to reviewer 2 (will be posted soon) for details. \\n\\n (3) As a high level summary, IMAE starts from an encoding model, and seeks to maximize the mutual information between the data x and its representation z. A decoder is included to approximate the posterior of the data given its representation, by leveraging the learnt informative representation, better decoding quality is attained. Moreover, we show that the information maximization objective inherently introduce a balance between informativeness of each representation factor z_k and statistical independence between them. \\n\\nLet us know if you have any questions.\", \"ps\": \"We have been hesitant since we first set those notations, since the new notations can reflect the significant difference between IMAE and VAE well, but it can cause confusion too. We are considering changing them back due to confusion induced there.\"}",
"{\"title\": \"confusion notation and definition\", \"comment\": \"Dear authors,\\n\\nI appreciate that you took the time to write a detailed response, which, however, does not seem to sufficiently convince the reviewers about the merit of your paper. \\n\\nI went through your paper a couple of times by myself, and I got quite confused from the very beginning:\\n\\nFirst, From Appendix C, it seems that the authors know very well about the convention of defining p_{\\\\theta}(x|z) as the decoder and q_{\\\\phi}(z|x) as the encoder in VAE, but why in the main body of the paper you choose to use p_{\\\\theta}(z|x) as the encoder and q_{\\\\phi}(x|z) as the decoder? Is there particular reason for this inconsistency and using a new set of notation?\\n\\nSecond, since x denotes the data and z denotes the latent variable, why the paper calls p(x|z) as the posterior?? Isn't that p(x|z) is the conditional likelihood, p(z) is the prior, p(x)=\\\\int p(x|z) p(z) dz is the marginal likelihood, and p(z|x) = p(x|z)p(z)/p(x) is the posterior? \\n\\nAm I missing some important definitions to justify these unconventional notation and definition?\\n\\nThanks,\\nAC\"}",
"{\"title\": \"Question not addressed\", \"comment\": \"I don't think your response addresses my question. How are you computing p(z_k)?\"}",
"{\"title\": \"Response to Author Response\", \"comment\": [\"The draft changed substantially from the original submission, especially the experimental section. As per the guidelines of the reviewing process on the conference webpage, it would not be fair to increase the score based on these drastic changes.\", \"Regarding discrete latent traversals, it seems like you\\u2019ve used different z\\u2019s across the rows of Figures 2b,c,d and for 2e,f it\\u2019s not clear whether you\\u2019re keeping z fixed across the latent traversal of y. Similar point holds for Figure 4a. It would be infromative to keep the z's fixed across the discrete latent traversals to show good disentangling.\", \"I can\\u2019t see how if you fix the variance of z_k, then increasing (6) leads to increasing (4). Can this be shown mathematically? Also don\\u2019t you learn the variance of z_k in practice? You seem to denote it as sigma_k(x), implying that it is learned.\"]}",
"{\"title\": \"typos got corrected\", \"comment\": \"We corrected some typos in the above response we just posted. Let us know if you have any questions. Thank you!\"}",
"{\"title\": \"Estimating the marginal distributions\", \"comment\": \"Thank you for bring this question up for discussion! We address your concerns one by one below.\\n\\n(I) -- We first show the marginals we need to estimate and how we estimate them. Note that r(z) is a factorial gaussian, then\\n\\n KL[p(z) || prod_k p(z_k)] + sum_k KL[p(z_k) || r(z_k)] = KL[p(z) || r(z)] (1)\\n\\nHence our objective is equivalent to \\n\\n L_IMAE = reconstruction error + beta * L(y) - beta* KL[p(z) || r(z)] + (beta - gamma) * KL[p(z) || prod_k p(z_k)] (2)\\n\\nTherefore, to optimize (2), we need to estimate KL[p(z) || r(z)] and total correlation KL[p(z) || prod_k p(z_k)]. Now we explain how to estimate these two terms.\\n\\n (a) To estimate KL[p(z) || r(z)], let B denote the batch size \\n KL[p(z) || r(z)] = E_p(z) log p(z) - E_p(z) log r(z) \\\\approx (1/B) * sum_i log p(z^i) - (1/B) * sum_i log r(z^i) (3)\\nwhere z^i is sampled from p_theta(z|x^i). We still need to approximate log p(z^i) in (3), which can be estimated as the following: \\n log p(z^i) \\\\approx log (1/B) * sum_j p(z^i |x^j) (4)\\n \\n (b) Similar arguments for the total correlation term, \\n KL[p(z) || prod_k p(z_k)] = E_p(z) log p(z) - E_p(z) log prod_k p(z_k) (5)\\n We have established the estimator for the first term E_p(z) log p(z), and the second term can be estimated in a similar way, i.e., \\n E_p(z) log prod_k p(z_k) = sum_k E_p(z) log p(z_k) \\n \\\\approx (1/B) * sum_k sum_i log p(z_k^i) \\n \\\\approx (1/B) * sum_k sum_i log (1/B) * sum_j p(z_k^i | x_j)\\nwhere z_k^i is sampled from p(z | x^i) as before. \\n\\n\\n(II)-- Regarding the concern of small variance for the conditional distribution p(z|x), we want to point out the following: \\n \\n (a) Minimizing KL divergence KL[p(z) || r(z)] will drive the variance away from being very small. As you can see in (3) and (4), if the variance of the conditional distribution is very small, then the KL divergence can be very large too. This is also numerically demonstrated in Figure 2 of our paper, where you can see the expectation of sigma_k^2 is not very small across dimensions, even for those informative dimensions the associated E[sigma_k^2] have reasonable values. Figure 1(a) is obtained with beta = 2, E[sigma_k^2] of the informative dimensions can be larger by using large beta values. \\n\\n (b) We propose to squeeze the marginal distribution p(z_k) within a gaussian distribution with finite mean so as to avoid the degenerate solution where p(z_k |x) can be delta distribution. By doing so, we can also achieve the maximum of I(x; z_k) among all possible solutions with the same variance of z_k, i.e. Var(z_k) being the same. Please refer to section 3.2 for more discussion. \\n Another advantage of pushing p(z_k) instead of the conditional distribution p(z_k|x) towards a gaussian distribution is that, pushing p(z_k |x^i) of each data sample i towards the same gaussian distribution can result in undesired overlap between different p(z_k | x), this would cause serious reconstruction problem and loss of informativeness in z_k as well. In the extreme case, when p(z_k|x) converged to the target gaussian distribution, the representation z_k carries zero information about the data (see Fig 1 in the WAE paper https://openreview.net/pdf?id=HkL7n1-0b ). Although, by requiring the variance of p(z_k|x) being larger than some prefixed value instead of pushing it towards a gaussian distribution might improve the issue a bit, this will introduce another data dependent hyperparameter to tune. 
Moreover, with a large set of training data, pushing p(z_k) towards r(z_k) while requiring the variance of the conditional distribution p(z_k|x) being larger than some fixed value can still induce overlap. For these two reasons, we prefer to not adding such constraint. By pushing p(z_k) towards a gaussian distribution with reasonable variance, it can balance well between pushing z_k|x^i apart for different samples x^i while maintaining reasonable variance for the conditional distribution.\"}",
"{\"title\": \"Response to Reviewer 2 (part 2): regarding your questions and comments:\", \"comment\": \"1)--We seek to learn interpretable representations together with a decoding/generative model, where informative representations can then be leveraged to generate high fidelity data. The relationship between I(x; (y,z)) and KL(p(x|y,z) || q(x|y,z)) can be interpreted according to the following:\\n KL(p(x|y,z) || q(x|y,z)) = CrossEntropy(p(x|y,z), q(x|y,z)) - Entropy(p(x|y,z)) (*)\\n I(x; y,z) = Entropy(x) - Entropy(p(x|y,z) (**)\\n (*) and (**) implies the following: \\n a) Since Entropy(x) is independent of the optimization procedure of (**), maximizing I(x; y,z) decreases Entropy(p(x|y,z)). On the other hand, Minimizing KL will push CrossEntropy towards Entropy(p(x|y,z)). Hence, jointly optimizing I(x; y,z) and KL can yield informative representations as well as good posterior approximation quality (both KL divergence and CrossEntropy(p(x|y,z), q(x|y,z)) are small). \\n b) A good balance can be obtained by setting the weight on I(x; y,z) larger than 1, since if the weight is one, \\n I(x; y,z) - KL(p(x; y,z) || q(x; y,z)) = Entropy(x) - CrossEntropy(p(x|y,z), q(x|y,z)) (***)\\n That is the model degenerate to a plain auto-encoder, optimizing (***) is equivalent to simply optimizing CrossEntropy(p(x|y,z), q(x|y,z)) (the reconstruction error). Therefore, by setting the weight larger than 1, we can simultaneously attain good posterior approximation quality and informative representations with desired distributions.\\n\\n2)-- You are right, without any assumptions, increasing (6) does not necessary increase (4). However, it is true if we restrict the variance of $\\\\zb_k$ to be a fixed value, given which increasing (6) does lead to the increase of (4). Specifically, the proposed objective (6) can be justified by the following: \\n a) with the same amount of variance in z_k (Var(z_k) is fixed), I(x, z_k) is maximized if p(z_k) is gaussian; \\n b) As we discussed in section 3.2, I(x, z_k) can be trivially maximized by pushing pushing the condtional means mu_k(x) being extremely farway from each other while diminishing the conditional variance sigma_k(x) to zeros. This can result in a severely fragmented latent space where the distribution of z_k are discontinuous. To remedy this issue, restricting the variance of the latent representation (Var(z_k)) to be some finite value is a natural resolution. Given this, squeezing the distribution of z_k within the domain a gaussian distribution with finite variance achieves the maximal mutual information (upper bound in (4)) among all possible solutions with the same variance of z_k. \\n\\n3)--Thank you for bring this question up for discussion, for which we want to point out the following:\\n a) The bound in proposition 2 depends on log(delta) and log (C ), therefore the required number of samples won't increase dramatically by pursuing high probability bound with less restrictive assumptions on p(y) and \\\\hat{p}(y). \\n b) The required number of samples N is on the order of K_2^2 for large K_2. However, unsupervised learning of categorical representation with large number of categories itself is very challenging. A possible resolution is to learn the representations over a multiple-stage procedure, at each stage we learn a small number of categories within the single parent category, the theoretical guarantee still valid.\\n c) Proposition 2 is proved by considering the worst case, hence the estimation error can be much better in practice. 
\\n\\n4)--Thank you for capturing this, which we corrected in the revision ( see Eq (9)). \\n\\n5)-- We do apologize for the confusion in the initial figure, where the indices of the left plot correspond with the sorted values of I(x; z_k), while k=8,3,1 denoting the indices of z_k without sorting them. To avoid the confusion, we use the indices w.r.t the sorted values for all four plots in the revision.\"}",
"{\"title\": \"Regarding your questions and comments:\", \"comment\": \"1) We seek to learn interpretable representations together with a decoding/generative model, where informative representations can then be leveraged to generate high fidelity data. The decoder is included to approximate the posterior distribution of the data given their representations, which together with the learnt distributions of the representations can be used to generate data after training. The relationship between I(x; (y,z)) and KL(p(x|y,z) || q(x|y,z)) can be interpreted according to the following:\\n \\n KL(p(x|y,z) || q(x|y,z)) = CrossEntropy(p(x|y,z), q(x|y,z)) - Entropy(p(x|y,z)) (*)\\n \\n (*) implies the following:\\n b) Note that I(x; y,z) = Entropy(x) - Entropy(p(x|y,z) where Entropy(x) is independent of the optimization procedure, hence maximizing I(x; y,z) decreases the value of Entropy(p(x|y,z)). On the other hand, Minimizing KL(p(x|y,z) || q(x|y,z)) will push CrossEntropy(p(x|y,z), q(x|y,z)) towards Entropy(p(x|y,z)). Therefore, jointly optimizing I(x; y,z) and KL(p(x|y,z) || q(x|y,z)) can simultaneously yield informative representations as well as good posterior approximation quality. \\n\\n c) A good balance can be obtained by putting comparatively large weight (>1) on I(x; y,z), since if the weight is one, then \\n I(x; y,z) - KL(p(x; y,z) || q(x; y,z)) = Entropy(x) - CrossEntropy(p(x|y,z), q(x|y,z)) (**)\\n That is the model degenerate to a plain auto-encoder, optimizing (**) is equivalent to simply optimizing CrossEntropy(p(x|y,z), q(x|y,z)) (the reconstruction error). \\n In contrast, by using large weight on I(x; y, z), we can simultaneously attain informative representations with desired distributions as well as good posterior approximation quality (see (b)). \\n\\n\\n2): You are right, increasing (6) does not necessary increase (4) without any assumptions. It's true if we restrict the variance of $\\\\zb_k$ to be a fixed value, given which increasing (6) does lead to the increase of (4). \\n\\n The proposition of (6) can be justified by the following: \\n a) with the same amount of variance in z_k (Var(z_k) is a fixed value), the mutual information I(x, z_k) is maximized if p(z_k) is gaussian; \\n b) It's reasonable (even necessary) to restrict the variance of the latent representation factor to be some finite value so as to avoid degenerate solution. Moreover, I(x, z_k) can be trivially maximized by pushing pushing the condtional means mu_k(x) being extremely farway from each other while simultaneously diminishing sigma_k(x) to zeros. This can result in a severely fragmented latent space where the distribution of z_k are discontinuous. \\n Therefore, restrict the variance of the latent representation (Var(z_k)) to be some reasonable finite value is a natural resolution to avoid undesired representations. Given this, squeezing the distribution of z_k within the domain a gaussian distribution with finite variance achieves the maximal mutual information (upper bound in (4)) among all possible solutions with the same variance of z_k. \\n\\n3)-Thank you for bring this question up for discussion, for which we want to point out the following:\\n a): The bound in proposition 2 depends on log(delta) and log (C ), therefore the required number of samples won't increase dramatically by pursuing high probability bound with less restrictive assumptions on p(y) and \\\\hat{p}(y). \\n b): The required number of samples N is on the order of K_2^2 for large K_2. 
It does require a large batch size when we consider large K_2. However, unsupervised learning of categorical representation with large number of categories itself is very challenging. A possible resolution is to learn the representations over a multiple-stage procedure, at each stage we learn a small number of categories within the single parent category. By doing so, we are still able to learn the categorical representation for data with a large number of categories by using the proposed method with theoretical guarantee. \\n\\n4)-Thank you for capturing this, which we corrected in the revision ( see Eq (9)). \\n\\n5)- We do apologize for the confusion induced by the initial submission, where the indices of the left plot actually index the sorted values of I(x; z_k), while k=8,3,1 denotes the indices of z_k without sorting them. To avoid the confusion, we use the indices of the sorted values for all four plots in the revision.\"}",
"{\"title\": \"Empirical evaluation still insufficient\", \"comment\": \"I have looked at the revision and also simply looked at the paper more carefully. My earlier review was somewhat careless because, in any case, the empirical evaluation is quite weak. Although I believe that a stronger empirical evaluation is required, the fundamental issues being addressed are important and it is nice to get things clarified.\\n\\nYour only parameters are the encoder and decoder parameters. These parameters do not support computing marginal distributions on continuous latent variables. Consider the term KL(P_Theta(z_k),r(z_k)) appearing in equation (6) and ultimately appearing in the overall loss function. This term equals\\n\\nE_{z_k} log (P_Theta(z_k)/r(z_k))\\n\\nWe can sample z_k by sampling x and sampling z_k from P_Theta(z|x). Since z_k is one dimensional we could try to model P_Theta(z_k) as a one dimensional Gaussian by empirically measuring its mean and variance. But this would not be a good model if P_Theta(z|x) is trending toward delta distributions. While the problem of P_Theta(z|x) drifting toward delta distributions is discussed, the method proposed does not seem to address the problem. A simple fix is to require that P_Theta(z|x) has a minimum variance. This possibility is mentioned but not formally placed in the objective.\\n\\nThis problem is much more serious in the total variation term in the final objective (10). To optimize this term we need to be able to assign a joint marginal probability P_Theta(z) to a particular sample of the vector z. ???\"}",
"{\"title\": \"Clarifying the objective as well as the motivations for the proposed approach\", \"comment\": \"We'd like to thank the reviewer for the comments. Below we further clarify our objective and address your concern regrading the numerical results.\\n\\n1) First of all, we want to point out the reviewer could misunderstand our objective. To be more specific, proposition 1 is regrading a single continuous representation factor z_k (which is a scalar variable), therefore the trivial solution z = x suggested by the reviewer does not exists. In other words, the degenerate solution we seek to avoid is respect to each dimension of the continuous representation instead of z itself. Moreover, in this paper, we focus on learning low dimensional yet interpretable representations of the data. \\n\\nUsing the setting you provided above, suppose z in R^K and the conditional distribution P(z|x) is factorial (which is a typical assumption is the VAE literature), we first decompose I(x, z) as the following: \\n \\n I(x, z) = sum_k=1^K I(x, z_k) - KL(P(Z) || product_k=1^K P(Z_k)) \\n\\nwhere the first term of RHS quantifies the informativeness of each dimension z_k, and the second term is often referred as the \\\"total correlation\\\" of z which achieves the minimum (zero) if all dimensions of z are independent of each other. That is the mutual information I(x, z) inherently involves two keys terms that quantify the informativeness of each representation factor and the statistical dependence between these factors. We then propose to maximize informativeness of each (scalar) representation factor z_k while simultaneously encourage statistical independence across latent factors by minimizing the \\\"total correlation\\\" term. By doing so, we are expected to learn informative yet more disentangled representations (see figure 5). \\n\\nFor each scalar representation factor z_k, we (mathematically) show in section 3.2 that the trivial solution of maximizing I(x, z_k) can be obtained by severely fragmenting the latent space. To be more specifically, the mutual information I(x, z_k) can be trivially maximized by mapping each data sample to a deterministic value of z_k, while dispersing the different z_k values associated with different data samples within a dramatically large space. This can results in discontinuity of the latent representations, which is not desired. A natural resolution for this problem is to restrict the variance the z_k to be a reasonable value, with which we propose to push the the marginal distribution of P(z_k) towards a gaussian distribution so as to achieve the upper bound of I(x, z_k) (eq (4)) in proposition 1. \\n\\n\\n2) We also want to emphasize that, we propose a framework to learn a hybrid discrete-continuous representations of the data. We seek to learn semantically meaningful discrete representations while maintaining disentanglement of the continuous representations that capture the variations shared across categories. Unsupervised joint learning of disentangled continuous and discrete representations is a challenging problem due to the lack of prior for semantic awareness and other inherent difficulties that arise in learning discrete representations. \\n\\nTo the best of our knowledge, our work is, apart from JointVAE, the only framework for jointly learning discrete and continuous representations in a completely unsupervised setting in the VAE literature. 
\\n\\n3) We update the paper by considering more challenging dataset and incorporating more quantitative evaluations regarding the trade-off between interpretability of representations and decoding quality. Please refer to \\\"summarization of revision\\\" provided above.\"}",
"{\"title\": \"Update the revision by providing more quantitative evaluations regrading the interpretability vs. decoding quality over various datasets\", \"comment\": \"We sincerely thank the reviewer for the positive feedback and the constructive comments/questions. In order to address the main concerns, we incorporate more quantitative comparisons and provide more comprehensive numerical results to evaluate IMAE, which we summarized above. Below are our answers for your questions.\\n\\n1)* It seems a little strange to me to incorporate the VAT regularization to the IMAE framework in Section 4.2, as this is not included in the overall objective in Equation (10) and earlier analysis (Proposition 1 and 2). Will the conclusions in Proposition 1 and 2 change accordingly due to the inclusion of VAT regularization?\\n\\n VAT is proposed to resolve the inherent difficulty of learning interpretable discrete representations using neural network. As we mentioned at the beginning of section 4.2, the high capacity of neural network makes it easy to learn a non-smooth function p(y|x) that can abruptly change its predictions without guaranteeing similar data samples will be mapped to similar y. VAT is proposed as a regularization to encourage local smoothness of the conditional distribution p(y|x) for discrete representations. \\n\\n In our experimental results, we found that using VAT are significantly helpful for learning interpretable discrete representations for all methods considered in this paper except betaVAE. More interpretable continuous representations can be obtained when the method is capable of learning discrete representations that match the true categorical information of data better, since less overlap between the manifolds of each category is induced. This in turn can better help the continuous representations to capture the variation (feature) information shared over different categories while simultaneously reducing the possibility for the continuous representations to encode the nuisance information between separated manifolds of each category. \\n\\n Propositions 1&2 are provided without considering VAT. As we discussed above, VAT is proposed as a regularization term. Based on our discussions above, we hypothesize that similar statements can still be true under mild assumption ( e.g., the categorical data are comparatively separated and there does exist common feature information shared over different categories. Since VAT is incorporated to promote the local smoothness of p(y|x), this shouldn't influence proposition 1 where the I(x, y) is defined w.r.t the global information between x & y. Proposition 2 is true in general regardless of y. (Intuitively, including VAT can help continuous representations to better focus on learning feature information shared across categories, since VAT helps learn more interpretable y. \\n\\n2) * The paper states that IMAE has better trade-off among interpretability and decoding quality. But it is still unclear how a user can choose a good trade-off according to different applications. More discussion along this direction would be helpful.\\n\\n In the revision, we comprehensively evaluate IMAE against the other three methods on various datasets. For each dataset, we train each method with a wide range of hyperparameter values. The corresponding results are summarized in Figure 3 (MNIST and Fashion MNIST) and Figures 5&7 (dSprites). 
\\n\\n As shown in Figures 3, IMAE consistently outperforms the other three methods in terms of learning more interpretable (accurate) discrete representations over a wide range of hyperparameter values, while simultaneously achieving comparatively better decoding quality and more informative representations. This is further demonstrated in Figure 5 where we evaluate IMAE on a more challenging dataset (dSprites) and quantitatively evaluate the disentanglement vs. decoding quality trade-off. Still, IMAE performs better regarding the disentanglement score vs. reconstruction trade-off over a wide range of beta, gamma values.\\n\\n Moreover, as demonstrated in both figure 3 and figure 5, in the region of interest where both the reconstruction error and the informativeness of representations are fairly good, IMAE achieves a much better reconstruction error vs. interpretability trade-off. We attribute this to the effects of 1) maximizing mutual information I(X, Y) is capable of learning more interpretable discrete representations that match the natural labels of data better; 2) explicitly promoting statistically independent continuous latent factors by minimizing the total correlation term in our objective. By using comparatively large weight on the total correlation term (we set gamma = 2*beta in this paper.), we are able to achieve better disentanglement without sacrificing the decoding quality too much. \\n\\n\\n3)* I guess the L(y) term in Equation (10) is from Equation (9), but this is not stated explicitly in the paper.\\n\\n Thank you for capturing the typo, which we corrected in the updated version.\"}",
"{\"title\": \"Response to Reviewer 2 (part 1): updated the numerical results by considering more complext dataset and incorporating more quantitative comparisons\", \"comment\": \"We sincerely thank the reviewer for the thoughtful comments and suggestions. In order to address the main concerns, we updated the numerical results by considering a more complex dataset and incorporating the suggested quantitative evaluations. Below, we start by addressing your concerns one by one.\\n\\n1) * There is not enough quantitative comparison of the quality of disentanglement across the different methods.\\n\\n We fully agree that a quantitative comparison of the disentanglement quality regrading both continuous and discrete representations significantly improves the paper. We provide a quantitative comparison in terms of the disentanglement quality vs. reconstruction error trade-off on dSprites. The corresponding results are summarized in Figures 5&7, where we train each method over a wide range of hyperparameter values, for each value we train over 8 random seeds. \\n\\n We found that, IMAE consistently performs better in terms of achieving better disentanglement quality vs. reconstruction error trade-off over a wide range of beta, gamma values. We attribute this to the effect of explicitly promoting statistical independent continuous latent factors in our objective. Compared to InfoVAE, by using comparatively large weight on the total correlation terms (we set gamma=2*beta in this paper), we are able to achieve better disentanglement quality without sacrificing the decoding quality too much. This allows us to obtain better disentanglement vs. reconstruction error trade-off, especially in the region where both the informativeness of latent representations and the decoding quality are fairly good. \\n\\n Although JointVAE attains better decoding quality as well as more informative (overall) representations with large beta values, the associated disentanglement quality is poor. We suspect that simply pushing the upper bound the mutual information towards a target value does not explicitly encourage disentanglement across the representation factors. \\n\\n\\n2)* Shouldn\\u2019t you be doing a hyperparameter sweep for each model and choose the best value of hyperparameters for each? \\n\\n Thank you for the comments! For the updated numerical results, we do sweep over a wide range of hyperparameter values and for each value we run every method over 10 random seeds (8 for dSprites due to the limited computational resource, will increases it to 10 or 15 later).\\n\\n3)* Looking at Appendix D, it seems like VAT makes a big difference in terms of I(y;y_true), so I\\u2019m guessing it will also have a big impact on the accuracy. Thus JointVAE + VAT might beat IMAE in terms of accuracy as well, at which point it will be hard to argue that IMAE is superior in learning the discrete factor. \\n\\n In the initial version, we actually augmented all models with VAT in the numerical section, which can significantly improve all methods except betaVAE. The comparison between the results obtained by using (solid) and not using (dashed) VAT is provided in appendix. We do apologize if we didn't make it clear in the original submission. Same for the updated numerical results, we augment all models with VAT. \\n\\n We also provide one more result (Figure 8 in Appendix F) for JointVAE by running it with different target values C_y&C_z. 
Although JointVAE can achieve better reconstruction error by using larger target values C_y&C_z, the corresponding disentanglement / interpretability of representation factors can be very poor (Figs 5&7). \\n\\n\\n4)* In the first paragraph of Section 4, the authors claim results on CelebA, but these are missing from the paper. Testing the approach on datasets more complex than (Fashion)Mnist would have been desirable.\\n\\n We were not able to do conduct experiments on celebA due to the time constraint and the limited computational resource. As for the updated version, we do quantitatively evaluate our approach on more challenging dataset (dSprites) against the other three approaches. However, we are not able to conduct comprehensive experiments on celebA for the same reason. We do apologize for that, we will incorporate the corresponding results (hopefully also 3D chairs) for the final version. \\n\\n5)* There aren\\u2019t any latent traversals for the discrete latents - this would be a useful visualisation to complement the accuracy plots in Figure 3.\\n\\n Thank you for the suggestion! We incorporate the associated results in both Figure 2 and Figure 4. \\n\\n\\nWe answered your questions in a separate note.\"}",
"{\"title\": \"Summarization of the revision\", \"comment\": \"We sincerely thank all reviewers for the thoughtful comments and suggestions. To address your main concerns, we updated our numerical results by considering a more challenging dataset and incorporating more quantitative comparisons. Here is a summarization of the revision:\\n\\n1) We updated the results on MNIST and Fashion MNIST by sweeping over a range of hyperparameter values for all methods considered in the paper. For each value, we ran each method over 10 random seeds, and the results are summarized in Figure 3. \\n\\nAs you can see, IMAE consistently outperforms the other three methods in terms of learning more interpretable discrete representations over a wide range of hyperparameter values, while simultaneously achieving comparatively better decoding quality and more informative representations. Moreover, in the region of interest where both decoding quality and informativeness of representations are fairly good, IMAE achieves a much better decoding quality vs. interpretability trade-off. (This is also true when we consider more complex dataset (dSprites), please refer to Figures 5&7.)\\n\\nOn the other hand, being capable of learning more interpretable discrete representations that match the ground truth of the categorical information better will also induce more interpretable continuous representations followed by better decoding quality, since there is less overlap between the manifolds corresponding with each category. (see figure 4).\\n\\n\\n2) We added another quantitative comparison regrading the disentanglement vs. decoding quality trade-off. We trained all four methods on 2D shapes (dSprites) over a wide range of hyperparameter values, and evaluated the disentanglement quality by using the metric proposed by (Chen et al 2018). The associated results are provided in Figures 5&7, where for each hyperparameter value we ran over 8 different random seeds. (Currently, we are only able to provide the results over 8 random initializations due to the limited computational resource we have, we will update the results with 10-15 random seeds later.)\\n\\nAs shown in Figures 5&7, IMAE performs well regarding the disentanglement score vs. decoding quality trade-off which is especially better in the in the region of interest where the decoding quality as well as the informativeness of latent representations are fairly good. We attribute this to the effect of explicitly seeking statistically independent latent factors by minimizing the total correlation term in our objective. In other words, by putting comparatively larger weight on the total correlation term (we set gamma = 2*beta in this paper), we are able to achieve better disentanglement along with good decoding quality.\"}",
"{\"title\": \"Idea is promising and the derivation of the loss is informative but the evaluation seems insufficient.\", \"review\": \"Summary: the paper proposes a method for unsupervised disentangling of both discrete and continuous factors of variation in image data. It uses an autoencoder learned by optimising an additive loss composed of Mutual Information (MI) I(x;y,z) between the image x and the discrete+cts latents (y,z) and the reconstruction error. The mutual information is shown to decompose into I(x,y), I(x,z) and TC(y;z), and the I(x,z) is treated in a different manner to I(x,y). With Gaussian p(z|x), and it is shown that I(x,z_k) is maximal when p(z_k) is Gaussian. So KL(p(z_k)||N(0,1)) is optimised in lieu of optimising I(x,z), and I(x,y) (and TC(y;z)) is optimised by using mini-batch estimates of marginal distributions of y (and z). The paper claims improved disentangling of discrete and continuous latents compared to methods such as JointVAE and InfoVAE.\", \"pros\": [\"The derivation of the loss shows a nice link between Mutual information and total correlation in the latents.\", \"It is a sensible idea to treat the MI terms of the discrete latents differently to the continuous latents\", \"The mathematical and quantitative analysis of MI and its relation to decoder means and variances are informative.\"], \"cons\": [\"There is not enough quantitative comparison of the quality of disentanglement across the different methods. The only values for this are the accuracy scores of the discrete factor, but for the continuous latents there are only qualitative latent traversals of single models, and I think these aren\\u2019t enough for comparing different disentangling methods - this is too prone to cherry-picking. I think it\\u2019s definitely necessary to report some metrics for disentangling that are averaged across multiple models trained with different random seeds. I understand that there are no ground truth cts factors for Mnist/FashionMnist, but this makes me think that a dataset such as dSprites (aka 2D Shapes) where the factors are known and has a mix of discrete and continuous factors would have been more suitable. Here you can use various metrics proposed in Eastwood et al, Kim et al, Chen et al for a quantitative comparison of the disentangled representations.\", \"In figure 4, it says beta=lamda=5 for all models. Shouldn\\u2019t you be doing a hyperparameter sweep for each model and choose the best value of hyperparameters for each? It could well be that beta=5 works best for IMAE but other values of beta/lambda can work better for the other models.\", \"When comparing against JointVAE, the authors point out that the accuracy for JointVAE is worse than that of IMAE, a sign of overfitting. You also say that VAT helps maintain local smoothness so as to prevent overfitting. Then shouldn\\u2019t you also be comparing against JointVAE + VAT? Looking at Appendix D, it seems like VAT makes a big difference in terms of I(y;y_true), so I\\u2019m guessing it will also have a big impact on the accuracy. Thus JointVAE + VAT might beat IMAE in terms of accuracy as well, at which point it will be hard to argue that IMAE is superior in learning the discrete factor.\", \"In the first paragraph of Section 4, the authors claim results on CelebA, but these are missing from the paper. 
Testing the approach on datasets more complex than (Fashion)Mnist would have been desirable.\", \"There aren\\u2019t any latent traversals for the discrete latents - this would be a useful visualisation to complement the accuracy plots in Figure 3.\"], \"qs_and_comments\": [\"It\\u2019s not clear why posterior approximation quality (used as a starting point for motivating the loss) is an important quantity for disentangling.\", \"I see that the upper bound to I(x;z_k) in (4) and the objective in (6) have the same optimum at p(z_k) being Gaussian, but it\\u2019s not clear that increasing one leads to increasing the other. Using (6) to replace (4) seems to require further justification, whether it be mathematical or empirical.\", \"In proposition 2, I\\u2019m sceptical as to how meaningful the derived bound is, especially when you set N to be the size of the minibatch (B) in practice. It also seems that for small delta (i.e. to ensure high probability on the bound) and large K_2 (less restrictive conditions on p(y) and \\\\hat{p}(y)), the bound can be quite big.\", \"\\\\mathcal{L}_theta(y) in equation (10) hasn\\u2019t been introduced yet.\", \"The z dimension indices in the latent traversal plots of Figure 2 don\\u2019t seem to match the x-axis of the left figure. It\\u2019s not clear which are the estimates of I(x;z_k) for k=8,3,1 in the figure.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Principled framework for auto-encoding\", \"review\": [\"This paper proposed a principled framework for auto-encoding through information maximization. A novel contribution of this paper is to introduce a hybrid continuous-discrete representation. The authors also related this approach with other related work such as \\\\beta-VAE and info-VAE, putting their work in context. Empirical results show that the learned representation has better trade-off among interpretability and decoding quality.\", \"It seems a little strange to me to incorporate the VAT regularization to the IMAE framework in Section 4.2, as this is not included in the overall objective in Equation (10) and earlier analysis (Proposition 1 and 2). Will the conclusions in Proposition 1 and 2 change accordingly due to the inclusion of VAT regularization?\", \"The paper states that IMAE has better trade-off among interpretability and decoding quality. But it is still unclear how a user can choose a good trade-off according to different applications. More discussion along this direction would be helpful.\", \"I guess the L(y) term in Equation (10) is from Equation (9), but this is not stated explicitly in the paper.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Not Compelling.\", \"review\": \"This paper proposes an objective function for auto-encoding they\\ncall information maximizing auto encoding (IMAE). To set the stage\\nfor my review I will start with the following \\\"classical\\\" formulation\\nof auto-encoding as the minimization of the following where we are\\ntraining models for P(z|x) and P(x|z).\\n\\nbeta H(z) + E_{x,z sim P(z|x)} -log P(x|z) (1)\\n\\nHere H(z) is defined by drawing x from the population and then drawing\\nz from P(z|x). This is equivalent to classical rate-distortion coding\\nwhen P(x|z) is an isotropic Gaussian in which case -log P(x|z) is just\\nthe L2 distortion between x and its reconstruction. The parameter\\nbeta controls the trade-off between the compression rate and the L2\\ndistortion.\\n\\nThis paper replaces minimizing (1) with maximizing\\n\\nbeta I(x,z) + E_{x,z sim P(z|x)} log P(x|z) (2)\\n\\nThis is equivalent to replacing H(z) in (1) by -I(x,z). But (2)\\nadmits a trivial solution of z=x. To prevent the trivial solution this\\npaper proposes to regularize P(z) toward a\\ndesired distribution Q(z) and replacing I(x,z) with KL(P(z),Q(z))\\nby minimizing\\n\\nbeta KL(P(z),Q(z)) + E_{x,z sim P(z|x)} - log P(x|z) (3)\\n\\nThe paper contains an argument that this replacement is reasonable\\nwhen Q(z) and P(z|x) are both Gaussian with diagonal covariances. I\\ndid not verify that argument but in any case it seems (3) is better than (2). \\nFor beta large (3) forces P(z) = Q(z) which fixes H(z) and the a-priori value\\nH(Q). The regularization probably has other benefits.\\n\\nBut these suggestions are fairly simple and any real assessment of their\\nvalue must be done empirically. The papers experiments with MNIST\\nseem insufficient for this.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SJf6BhAqK7 | Variadic Learning by Bayesian Nonparametric Deep Embedding | [
"Kelsey R Allen",
"Hanul Shin",
"Evan Shelhamer",
"Josh B. Tenenbaum"
] | Learning at small or large scales of data is addressed by two strong but divided frontiers: few-shot learning and standard supervised learning. Few-shot learning focuses on sample efficiency at small scale, while supervised learning focuses on accuracy at large scale. Ideally they could be reconciled for effective learning at any number of data points (shot) and number of classes (way). To span the full spectrum of shot and way, we frame the variadic learning regime of learning from any number of inputs. We approach variadic learning by meta-learning a novel multi-modal clustering model that connects bayesian nonparametrics and deep metric learning. Our bayesian nonparametric deep embedding (BANDE) method is optimized end-to-end with a single objective, and adaptively adjusts capacity to learn from variable amounts of supervision. We show that multi-modality is critical for learning complex classes such as Omniglot alphabets and carrying out unsupervised clustering. We explore variadic learning by measuring generalization across shot and way between meta-train and meta-test, show the first results for scaling from few-way, few-shot tasks to 1692-way Omniglot classification and 5k-shot CIFAR-10 classification, and find that nonparametric methods generalize better than parametric methods. On the standard few-shot learning benchmarks of Omniglot and mini-ImageNet, BANDE equals or improves on the state-of-the-art for semi-supervised classification. | [
"meta-learning",
"metric learning",
"bayesian nonparametrics",
"few-shot learning",
"deep learning"
] | https://openreview.net/pdf?id=SJf6BhAqK7 | https://openreview.net/forum?id=SJf6BhAqK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gy4rlzx4",
"B1lb42diAX",
"H1xqRlNo0m",
"BkgRXxNoR7",
"ByeUA1EsA7",
"ryx26SYORm",
"rylxGxO_A7",
"BkxOEjwOA7",
"HyxmaXDd0Q",
"rJgo9Hiuam",
"BJxN_rsOT7",
"SyltxBsuTm",
"HJxdpNodTX",
"B1eAoVj_6m",
"S1xCeCI9nm",
"S1xc8-Uchm",
"rJeDOoul2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544844583392,
1543371817481,
1543352530024,
1543352358317,
1543352270103,
1543177668212,
1543172103939,
1543170864252,
1543168955420,
1542137235185,
1542137195537,
1542137073508,
1542137024039,
1542136998122,
1541201397788,
1541198162247,
1540553582715
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1587/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1587/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1587/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1587/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1587/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1587/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1587/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1587/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1587/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1587/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1587/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1587/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1587/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1587/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1587/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1587/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1587/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers wrote strong and long reviews with good feedback but do not believe the work is currently ready for publication.\\nI encourage the authors to update and resubmit.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Good but not good enough\"}",
"{\"title\": \"Summary of revision\", \"comment\": \"Thank you to all reviewers for useful feedback on the submission. We have posted a revision with the following changes:\", \"method\": \"Overall, we edited the method section to make the algorithm more clear, give a clearer introduction to meta-learning and episodic optimization, and better delineate our contributions relative to DP-means and prototypical networks.\\n\\n--Sec 3.1 \\u201cFoundations\\u201d section was removed and incorporated into the introduction of the section, under the headings \\u201cfew-shot meta-learning\\u201d, \\u201cprototypes\\u201d and \\u201cmulti-modal clustering\\u201d.\\n--Sec 3.2 (\\u201cmulti-modal clustering\\u201d) was wrapped into section 3 with the bolded \\u201cmulti-modal clustering\\u201d.\\n--Sec 3.3 (\\u201ccumulative supervision\\u201d) was moved to Sec 3.2\\n--Sec 3.4 was moved to Sec 3.1 (\\u201cProbabilistic Interpretations and Alternatives\\u201d to \\u201cProbabilistic interpretations of hard and soft clustering\\u201d)\\n--Sec 3.5 (Implementation details) was redistributed closer to where it was referred to (as suggested by reviewers).\\n--Algorithm 1 was significantly expanded, to include detailed loss computation, cluster creation and assignment steps, and clearer definitions for all variables.\", \"results\": \"We re-ordered the results section to highlight the importance of multi-modality for super-class classification and unsupervised clustering (section 4.1), followed by our variadic setting with extreme-way and extreme-shot results (section 4.2) and finally confirming SOTA performance for few-shot learning (section 4.3). Captions are expanded throughout to ease standalone interpretation of the tables and figures.\"}",
"{\"title\": \"Alternative terminology, novelty, and clustering quality\", \"comment\": \"> the framework does not present any property of the Bayesian methodology such as the possibility of inference over parameters, uncertainty quantification, or model comparison\\n> Even the very standard k-means clustering, or a linear model, can be seen as the limit of Bayesian counterparts, but it would be awkward, and not justified\\n\\nWe thank the reviewer for articulating this definition of a bayesian method. As we have explained in our first response, unlike k-means, the dp-means algorithm of Kulis et al. 2012 was derived via bayesian nonparametrics and relies on this mathematical framework for inferring the number of clusters, and so we reference this to make the origin and properties of the clustering clear. We welcome the reviewer to suggest an alternative to \\\"bayesian nonparametric\\\" (as we did before), but to be more concrete we would like to ask if \\\"infinite mixture modeling\\\" would be more apt from the reviewer's perspective?\\n\\n> I stil think that there is a substantial problem of novelty and clarity\\n> the difference with respect to the state of the art resides in the use of algorithm 1 for adaptively adding new centres if needed\\n\\nThe technical novelty of our work is in sec. 3 and algorithm 1: we make use of dp-means for inferring the number of clusters, define and experiment on three multi-modal clustering variants, develop a method to choose the cluster distance threshold lambda episodically, and mask assignments and the loss to handle both labeled and unlabeled data.\\n\\nIn our first response, we also highlighted our empirical novelty in exploring any-shot/any-way generalization in our proposed variadic setting and theoretical novelty in deriving a theoretical connection to semi-supervised prototypical networks. Could the reviewer please let us know if they are aware of existing work with these experiments and theory?\\n\\n> critical aspects: stability, dependence on the parameter \\\\lambda, uniqueness of the solution\\n> proposed use of the method is different from the original formulation of [Kulis 2012], as it is not iterated\\n> that the authors \\\"have not found the method to be sensitive to this [the order of data] in practice\\\" does not represent a strong motivation\\n\\nWe thank the reviewer for their consideration of clustering quality. We agree this is important, and so we summarize where it is addressed in our work.\\n\\n- stability and iterations: The clustering converges and multiple iterations maintain the quality of the results for classification (tables 2, 4, and 5 for example) and unsupervised clustering (table 3). While we noted in the text that one iteration was sufficient to achieve our state-of-the-art results, we will edit to explain that multiple iterations are stable in the camera ready version of the paper.\\n- dependence on lambda: Our method includes a procedure for choosing lambda that we make use of throughout our experiments (please see section 3, last paragraph). The quality of our results supports this procedure. \\n- robustness and order: Noting the lack of sensitivity to the order of the data was a direct response to the reviewer's concern that different orderings might affect the results. 
Our empirical results show this is not a weakness.\\n\\nWe thank the reviewer for these points, which can be further highlighted in a final revision.\\n\\nLast, we would like to note that we have posted a revision incorporating feedback from the reviews, and request to know if the reviewer finds it to be more clear.\"}",
"{\"title\": \"Incorporation of latest feedback (thanks!)\", \"comment\": \"Thank you again to the reviewer for their attention to detail. We have incorporated these comments into the algorithm in the revision.\\n\\n>There are also key sentences in the paper that are misleading/unclear.\\n>For example, the sentence \\\" Unlike DP-means, we include cluster variances\\\" is a bit odd.... \\n\\nWe have clarified the language in the methods section with respect to DP-means and our contributions. For example, we changed the above sentence to \\u201cWhile we use DP-means for cluster creation, we include cluster variances for reassignment.\\u201d Is the latest revision clearer?\\n\\nWe would like to ask the reviewer, given their significant feedback on the theoretical components of the work, if they could comment on the practical contributions of the work for meta-learning. We are thankful for the improvements in the clarity and accessibility of the work due to the reviewer's comments, and we would appreciate further comments on the contributions of the work with respect to prototypical networks and meta-learning methods more generally.\"}",
"{\"title\": \"Hard/soft clarification, empirical justification, and verification of experiment correctness\", \"comment\": \"We thank the reviewer for their continued attention to the theoretical aspects of the work.\\n\\n> still not satisfied by the proposed hard-soft hybrid\\n> an approach that is more complicated and not very well founded compared to one that has a much better interpretation\\n\\nWe sympathize with the reviewer\\u2019s hope for the empirical dominance of the theoretically pure variants of our method, but in practice we have found that their accuracies are worse than existing results, as well as our hard-soft hybrid. We have incorporated the reviewer\\u2019s feedback into our exposition of the hard-soft hybrid and the theoretical ramifications of our choices in the revision. We hope our theoretical coverage and empirical investigation of these variants sets the stage for future work to further reconcile theory and practice.\\n\\n> though I only see one test on one dataset\\n> (and thus more confidence it will work on other datasets)\\n\\nWe have also experimented on mini-ImageNet to cover both standard few-shot learning benchmarks. In the 5-way 1-shot setting with 5 unlabeled examples per class the results are: BANDE is 49.2%, Ren et al. is 48.6%, soft-soft is 47.1%, and hard-hard is 43%. In further experiments at different shot and way the methods keep this ordering. We included only the most common Omniglot setting in the revision for brevity, but can include a full appendix in the camera-ready version. As an alternative, we could compare all three variants throughout our experimental section. Would the reviewer find that more clear? In the existing text we focused on hard-soft for simplicity of description and comparison with prior work, but could revise this for the camera-ready.\\n\\n> not clear that a fair, correct experiment was done between \\\"soft-soft\\\", \\\"hard-hard\\\", and \\\"hard-soft\\\"\", \"thank_you_for_raising_this_important_point\": \"we assure the reviewer that the experiments comparing the three variants are correct and fair w.r.t. operation over labeled and unlabeled data and new cluster creation. We have clarified this in the revision of appendix A.4. All three variants have the same labeled and unlabeled scope for reassignment. The difference in clustering condition for algorithm 2 derives from the approximation to the Chinese Restaurant Process as a draw from the base distribution and as such is part of the algorithm. The extended A.4 in the revision better delineates the soft-soft variant, which operates on both labeled and unlabeled examples, and its connection to an infinite mixture extension of Ren et al., which is only valid when iterating over unlabeled examples alone and holding variances constant.\\n\\n> better approach would be to use the MARGINAL likelihood of x_i being assigned to a new cluster, as in the Gibbs sampler for DP mixtures\\n\\nWe have clarified A.4 to better indicate that algorithm 2 is the \\u201csoft-soft\\u201d method, which follows exactly the suggestion of using the marginal probabilities to create a new cluster as in the Neal citation.\\n\\nTo review, we locate the hard-soft, hard-hard, and soft-soft clustering variants in the text of the revision for further consideration:\\n\\n- hard-soft is our chosen method given by algorithm 1 and explained in sec. 
3.\\n- hard-hard is a variant that does not include cluster variances and differs only in the \\\"UpdateAssignments\\\" step of algorithm 1, as detailed in sec. 3.1.\\n- soft-soft is a variant that incorporates variances throughout, and because it requires more derivation and explanation it is found in appendix A.4 and its algorithm 2, which are referenced from sec. 3.1.\\n\\nAll three clustering variants presented in this work are novel approaches for end-to-end, multi-modal clustering that extend the accuracy and scope of prototypical networks. We do not intend or claim this work to have a complete theory for nonparametric meta-learning methods, but instead seek to explain the theoretical aspects of our own contributions and existing work on semi-supervised prototypical networks (Ren et al. 2018) that did not have a theoretical interpretation. We leave further theoretical investigation to future work.\"}",
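To make the hard-soft distinction above concrete, here is a minimal NumPy sketch of one clustering iteration in the style this thread describes: a hard DP-means-style creation pass followed by a soft Gaussian reassignment of the means. It is an illustration under simplifying assumptions (Euclidean embeddings, one shared isotropic variance `sigma`, at least one pre-existing labeled center), not the authors' implementation; all names are illustrative.

```python
import numpy as np

def hard_soft_iteration(embeddings, mus, lam, sigma):
    """One clustering iteration: hard DP-means-style cluster creation,
    then soft (Gaussian-responsibility) reassignment of the means.

    embeddings: (N, D) array of embedded points; mus: list of (D,) centers.
    """
    mus = [m.copy() for m in mus]
    # Hard pass: open a new cluster when the nearest center is too far.
    for h in embeddings:
        d2 = [np.sum((h - m) ** 2) for m in mus]
        if min(d2) > lam:
            mus.append(h.copy())
    # Soft pass: responsibilities under isotropic Gaussians, then new means.
    M = np.stack(mus)                                        # (C, D)
    d2 = ((embeddings[:, None, :] - M[None]) ** 2).sum(-1)   # (N, C)
    logp = -d2 / (2.0 * sigma ** 2)
    r = np.exp(logp - logp.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)                        # (N, C)
    return (r.T @ embeddings) / r.sum(axis=0)[:, None]       # updated means
```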
"{\"title\": \"reply\", \"comment\": \"I thank the authors for their clarification. However, I am still not very convinced about the use of the terminology made in this work.\\n\\nThe proposed scheme builds upon an algorithm which was obtained as the zero variance limit of a Bayesian mixture model.\\nThis does not justify the term Bayesian non-paramteric for the proposed method. As stated in my first review, the framework does not present any property of the Bayesian methodology such as the possibility of inference over parameters, uncertainty quantification, or model comparison. Even the very standard k-means clustering, or a linear model, can be seen as the limit of Bayesian counterparts, but it would be awkward, and not justified, to present a work using these tools as Bayesian.\\n\\nI stil think that there is a substantial problem of novelty and clarity. As also noted by reviewer 3, the difference with respect to the state of the art resides in the use of algorithm 1 for adaptively adding new centres if needed. While being of interest, this part deserves further clarifications on the many critical aspects: stability, dependence on the parameter \\\\lambda, uniqueness of the solution. In the current version of the manuscript these aspects are lightly mentioned and not discussed. The fact that the authors \\u201c have not found the method to be sensitive to this in practice\\u201d does not represent a strong motivation in favour of the method. Moreover, as already mentioned in my previous review, the proposed use of the method is different from the original formulation of [Kulis 2012], as it is not iterated (\\u201cIn this clustering scheme a single pass is sufficient \\u2026\\u201d). This raises further concerns about stability and robustness of the proposed procedure.\\n\\nFinally, I apologise for the previous use of the term \\u201cuni-modal distribution\\u201d. I acknowledge that the proposed method is explicitly built to account for multi-modal ones, and I made a mistake while typing my previous review.\"}",
"{\"title\": \"Hard/soft approach still lacks justification ....\", \"comment\": \"Thanks for clarification on many details. I'm glad you have planned a code release. I hope the revised manuscript also includes enough details that readers don't have to go look at code for every detail.\\n\\nI'm still not satisfied by the proposed hard-soft hybrid. I suppose it may be \\\"marginally more accurate in experiments\\\", though I only see one test on one dataset, where soft-soft accuracy is 98.4 and hard-soft accuracy is 99.0, a difference that seems too small to justify an approach that is more complicated and not very well founded compared to one that has a much better interpretation (and thus more confidence it will work on other datasets).\\n\\nIt's also not clear that a fair, correct experiment was done between \\\"soft-soft\\\", \\\"hard-hard\\\", and \\\"hard-soft\\\". Alg. 1 (hard-soft) can reassign both labeled and unlabeled data to new clusters. However, Alg. 2 (the hard-hard method) in A.4 only operates on unlabeled examples. (Also, the value of \\\\sigma_0 is unclear). No formal algorithm is given at all for the soft-soft method. Note also that the condition for creating a new cluster in Alg. 1 is that the distance to the closest cluster exceeds some threshold. However, in the written hard-hard algorithm, the condition is different: distance to some \\\\mu_0 which is fixed to 0.0. The better approach would be to use the MARGINAL likelihood of x_i being assigned to a new cluster, as in the Gibbs sampler for DP mixtures (see Eq. 3.7 of Radford Neal's \\\"Sampling methods for Dirichlet Process Mixture Models\\\" (http://www.stat.columbia.edu/npbayes/papers/neal_sampling.pdf).\\n\\nGiven all these concerns, it's unclear if there's really a fair comparison here.\"}",
"{\"title\": \"Thanks for your revisions! Quality is improving, but there still some issues that make me reluctant to accept\", \"comment\": [\"P1: I appreciate the expanded Alg. 1. Definitely an improvement, but there are still some significant issues.\", \"The first line uses \\\"C\\\", but \\\"C\\\" hasn't been defined. I think you mean the total number of labeled classes in the dataset \\\"n_s\\\".\", \"You should clarify that p(x | mu, sigma) is the Gaussian PDF (e.g. a specific function that evaluates to a probability density)\", \"The distance computation of d_ic as written asks if y_i == c. But I think you really want to test if y_i == \\\\ell_c (the label of class c). Otherwise you'll never be able to reuse new clusters you create, since those clusters will have c > n_s and thus y_i == c will always be false.\", \"In the final cross entropy expression, the variable \\\"c\\\" is unbound in the right hand term. You want it to be defined by the max, but as written it is a separate variable.\", \"There are also key sentences in the paper that are misleading/unclear. For example, the sentence \\\" Unlike DP-means, we include cluster variances\\\" is a bit odd.... the paper does NOT use any variances for the DP-means part of Alg. 1. However, it does use some variance parameters for later stages. So they haven't changed the DP-means algorithm to include variances, they just use variances in a post-processing step\"]}",
"{\"title\": \"Proposed method's multi-modality seems distinct from previous work to me\", \"comment\": \"R3, any revised thoughts on novelty based on this careful feedback from the authors?\\n\\nSeems to me that the difference between previous methods and the current approach is given in Fig. 1.... previous methods assume each class has a single center in the learned feature space (e.g. the left panel in Fig. 1), while the proposed approach allows each class to have multiple centers if needed (the far right panel). This multi-modality makes the proposed method more flexible.\"}",
"{\"title\": \"Bayesian Nonparametric Name, Terminology, and Clustering Details\", \"comment\": \"> use of the term \\u201cBayesian nonparametric\\u201d is inappropriate\\n\\nThe DP-means clustering method of Kulis et al., which our work adapts to end-to-end optimization for metric learning, is derived through bayesian nonparametric infinite mixture modeling in the limit of zero variance. The existence of the method, and others that share this mathematical framework (Broderick et al. 2013, Roychowdhury et al. 2013, Wang & Zhu 2015), are due to bayesian nonparametrics and identify as such in their titles and text. Not acknowledging this connection could obscure the origin and properties of the method. Does the reviewer have an alternate term in mind?\\n\\n> paper makes often use of abstract terms and jargon\\n\\nCould the reviewer please be more precise on this point? We have made our best effort to follow the standard terminology for meta-learning and few-shot learning (Vinyals et al. Finn et al., Snell et al, Ren et al.), but would appreciate knowing specifically where this is confusing, so that it can be more clear for a broader audience.\\n\\n> procedure is also known to be sensitive to the order by which the data is provided, and this point is not addressed in this work. \\n\\nWhile it is true that the clustering is dependent on the order of the data, we simply have not found the method to be sensitive to this in practice, although we can include this result in the revision. We note that this dependence is likewise mentioned in Kulis et al. 2012 but they make no mention of it impacting the quality of their results.\\n\\n> proposed method can adapt to account for uni-modal distributions\\n\\nOur method critically allows for *multi-modality* in the data distribution for both labeled and unlabeled data, adaptively choosing the number of clusters, unlike the prior work by Snell et al. and Ren et al. that assume fixed numbers of clusters, as do [2, 3, 4, 5] cited in the review. This is explained in Section 3.2 and shown to be crucial for diverse classes like alphabets in Section 4.3 Table 4.\"}",
"{\"title\": \"Relation/Contrast to Deep Subspace Embedding, Novelty, and Breadth of Results\", \"comment\": \"> proposes a learning method based on deep subspace clustering\\n> substantial amount of literature on deep subspace embeddings that proposes very similar methodologies to the one of this paper (e.g. [2-5])\\n\\nWe thank the reviewer for bringing up deep subspace embedding. While our work and these are generally related by metric learning, they are quite separate in approach and purpose. Ours is a meta-learning approach for multi-modal representation (that is, having an adaptive number of centroids per class) of labeled and unlabeled data, it is optimized for classification tasks, and it is evaluated by generalization to new data and tasks. The cited [2-5] address unsupervised clustering, have fixed numbers of clusters, and are evaluated by clustering metrics on the same data they are optimized on.\\n\\nMost significantly, these works *do not consider generalization*: the clustering methods are optimized on the data that is to be clustered and do not experiment on held-out tasks/classes as in meta-learning settings like ours. Only [5] can incorporate labeled data, and in their experiments they train and test on the same classes, without generalization, on a tiny synthetic dataset and the Oxford flowers dataset of 17 classes and <1000 images.\\n\\n[2, 3, 4, 5] learn and evaluate unsupervised and zero-shot clustering models on the same train/test data with the same classes without generalization experiments. [2] cannot incorporate labeled data, requires pre-training, and shows results on the toy datasets of MNIST and STL-10. [3] cannot incorporate labeled data and is only evaluated on the simple face and object datasets Yale B, ORL, and COIL. [4] addresses generative modeling and unsupervised clustering for problems, not few-shot learning and classification, and its experiments are restricted to small-scale datasets with 10 or fewer clusters. [5] focuses on zero-shot learning with a linear auto-encoder on off-the-shelf features, and its \\\"supervised clustering\\\" section has only a 3-class synthetic dataset and a 17-class dataset of flower images where the clustering is optimized for the same 17 flower species it is evaluated on.\\n\\n> novelty of the proposed contribution is questionable\\n\\nHere is a brief summary of our key, novel contributions:\", \"technical_novelty\": \"our method is capable of adaptive, multi-modal clustering unlike the fixed, uni-modal clustering of Ren et al. and Snell et al. by our reconciliation of DP-means from Kulis et al. with end-to-end learning (section 3.2).\", \"empirical_novelty\": \"we propose and thoroughly investigate our \\\"variadic\\\" setting of any-shot/any-way generalization (section 4.2), find that several popular methods degrade in this setting (MAML, Reptile, few-shot graph nets), show that it is possible to learn a large-scale classifier (1692-way character recognition) from small-scale episodic optimization (5-way 1-shot tasks), show that episodic optimization of a prototypical method rivals the accuracy from large-scale SGD optimization of a strong fully-parametric baseline optimized by SGD on CIFAR-10/100, and evaluate few-shot learning of alphabets instead of characters to examine accuracy on more complex data distributions.\", \"theoretical_novelty\": \"We shed further light on prototypical network methods with the lens of probabilistic interpretation. We derive an approximate interpretation of Ren et al. 
(Appendix A4), which lacked theoretical justification, and explain the direct interpretation of the hard variant of our own method (Section 3.4).\\n\\n> method is tested on several scenarios and datasets, showing promising results in prediction accuracy\\n\\nWe thank the reviewer for commenting on our breadth of evaluation and promising results. To reinforce this point, we note that our experiments cover several problem statements: few-shot fully-supervised/semi-supervised classification (Section 4.1, Tables 1 & 2), our proposed variadic setting of any-shot/any-way generalization (Section 4.2), purely unsupervised clustering (Section 4.3, table 3) and transfer learning from super-class training to sub-class recognition (Section 4.3, table 4). We approach each of these problems by meta-learning through episodic optimization of classification tasks, and these experiments focus on generalization to new tasks (of held-out classes, different settings of shot and way, or discovery of sub-classes from super-class training).\"}",
"{\"title\": \"Contrast with Ren et al., Significance of Multi-modal (Many-to-One) Clustering, and Variadic Setting\", \"comment\": \"We thank the reviewer for raising three key points of our work: (1) clustering algorithm choices and our difference with Ren et al., (2) our technical contribution of extending prototypical methods to multi-modal representation for handling more complicated data distributions, and (3) our empirical contribution of proposing and thoroughly investigating the variadic setting of any-shot/any-way generalization.\\n\\n> the contrast to Ren et al, is not provided to the degree it should be\\n> only differing in the choice of a different clustering algorithm\", \"the_difference_in_choice_of_clustering_is_crucial\": \"- our method is capable of adaptive, multi-modal clustering unlike the fixed, uni-modal clustering of Ren et al. and Snell et al. This gives an improvement of +3 points accuracy on the standard few-shot benchmark of 5-way, 5-shot mini-ImageNet classification (Table 2), extends prototypical nets to problems without any labeled data (see next bullet point), and for more diverse classes like alphabets our accuracy is ~25 points higher.\\n- our method handles labeled data by the same clustering rule unlike the heuristics of Ren et al. for unlabeled data, making inference in our method possible for zero labeled examples (of any kind, including meta-data as in zero-shot learning) whereas Ren et al. and Snell et al. are undefined in this setting. Section 4.3 shows high quality clustering without labels (Table 3), and 10-25 point improvements on prior work for learning more diverse classes like alphabets instead of single characters (Table 4), underlining the importance of multiple modes.\\n- We shed further light on the choice of clustering with the lens of probabilistic interpretation: we derive an approximate interpretation of Ren et al. (Appendix A4), which lacked theoretical justification, while explaining the direct interpretation of the hard variant of our own method (Section 3.4).\\n\\n> significance of \\\"multi-model clustering\\\" \\n\\nMulti-modality is a key and distinguishing property of our method that is necessary for the quality of our results. Please refer to figure 1 for a schematic of the difference among Snell et. al, Ren et al., and BANDE (ours): note that having multiple modes lets BANDE more accurately cluster the labeled and unlabeled data alike. Among these methods, only BANDE can adjust its capacity to model simple, compact classes with a single mode while simultaneously modeling diverse, complicated classes with multiple modes. We achieve higher accuracy than Ren et al. for semi-supervised few-shot learning (Table 2). Furthermore, Table 4 in particular highlights the needs for multi-modal representation: a full alphabet is not uni-modal in the learned embedding, unlike a single character, and here we show major (10-25) point gains over the prototypical nets of Snell et al. and Ren et al. that assume each class has a uni-modal data distribution.\\n\\n> by their definition of \\\"variadic\\\", how is this more variadic than Ren et al. or Snell et al.?\\n\\nSnell et al., Ren et al., and our method do indeed generalize better across shot and way as we show (Figure 2). 
Our first contribution is in evaluating this generalization at all in our novel experiments: we cover extreme way at 1692 Omniglot classes (Figure 3), extreme shot at zero labeled examples for clustering (Table 3) and at scaling episodic optimization to the supervised learning regime of 50k labeled examples on CIFAR-10 and CIFAR-100. Existing work was restricted to the few-shot settings of Section 4.1 with training/testing on the same way and shot.\\n\\nBANDE (ours) is more variadic than Ren et al. and Snell et al. in 1. handling the case of purely unlabeled data and 2. handling more diverse data with complicated class distributions such as alphabet classes instead of character classes (section 4.3). We forecast that meta-learning, as it scales to more diverse data distributions, will encounter more tasks like our alphabet recognition experiments in the variety and even hierarchy of classes, where our adaptive, multi-modal clustering helps significantly (Table 4). While we expect further progress to improve on Ren et al., Snell et al., and our own method, the main point here is to encourage this kind of shot/way generalization to reconcile the distant poles of small-scale and large-scale learning.\"}",
"{\"title\": \"Incorporation of Presentation Feedback (Thanks!)\", \"comment\": \"We now turn to the reviewer's thorough feedback on presentation.\", \"p1\": \"numbers of iterations. In principle, BANDE can be iterated multiple times, as in DP-means. However, in our experiments we found accuracy does not improve with more iterations. We will modify the text to make this more clear. We will also correct the algorithm description to appropriately update n and c with the required two lines (thank you for catching this).\", \"p2\": \"Tables 1 and 2 captions and details. The metric is indeed accuracy percentage, as is standard for these benchmarks, which we are clarifying in our revision (to be posted during the rebuttal period). We appreciate that the semi-supervised setting of Ren et al. has a number of details, which is why we placed the paragraph on semi-supervised episode composition under table 2, and we will incorporate more of this text into the caption to make it easier to find.\", \"p3\": \"episodic learning. We would like to thank the reviewer for commenting on the clarity of our work for readers who do not specialize in meta-learning and few-shot learning. We tried to follow the standard summary in this field (see Ren et al., Finn et al.). While fuller tutorial coverage of few-shot learning would be the most clear, we are constrained by the page limit when explaining the existing settings and our contributions of multi-modality and any-shot/any-way generalization in new settings. We are clarifying few-shot details in captions and the main text in the revision.\", \"p4\": \"setting \\u03bb. The technique for setting \\u03bb is our own, which we summarize at the end of 3.2: \\\"We estimate \\u03c1 as the variance in the labeled cluster means within an episode, while \\u03b1 is treated as a hyperparameter.\\\" The algebraic expression of \\u03bb in terms of \\u03c1, \\u03b1 is what we borrow from Kulis et al., and we are rewording this for clarity in the revision.\\n\\n\\\"computed in the same way as standard prototypical networks\\\" (section 3.2). This is explained in the last paragraph of 3.1, so we remove it here to avoid redundancy and potential confusion.\"}",
"{\"title\": \"Clarity, Reproducibility, and Details to Resolve Technical Concerns\", \"comment\": \"We thank the reviewer for their detailed feedback, in particular the attention to the technical aspects of the clustering steps and probabilistic interpretations in our work, and the comments on clarity and accessibility for audiences less familiar with meta-learning and few-shot learning. We agree with the reviewer on the importance of multi-modal clustering as \\\"better than rigid, one-to-one assumptions\\\" of prior work, which we show by experiment on alphabet recognition in section 4.3 and improved semi-supervised few-shot classification in section 4.1. We likewise agree that methods that \\\"really succeed across various variadic settings would be significant\\\" which is why we propose it in this work and investigate it by experiment in section 4.2.\\n\\nRegarding concerns of clarity and reproducibility, we are incorporating the feedback of the reviews into a revision to be posted during the rebuttal period and will release code after decision (omitted here only to preserve anonymity). Our comprehensive code release will cover our model, experimental evaluation and training settings, all few-shot baselines (including prototypical networks, semi-supervised prototypical networks, and our variadic extensions of MAML and few-shot graph networks), and datasets. This will help safeguard reproducibility for future work and serve as a reference implementation of the variadic setting.\\n\\nWe now clarify our method and experiments to address the reviewer's technical concerns.\\n\\nhard/soft assignments and probabilistic interpretation: We thank the reviewer for their theoretical precision. We are in full agreement, and wish to point out that we identify and experiment with fully hard (sec. 3.4) and fully soft variants of our method (appendix A4) for this reason of probabilistic justification. We choose the hard-soft hybrid for our main results, as mentioned in the paper, because it is marginally more accurate in experiments. We appreciate the feedback on this point, and are revising the text to make these variants more clear.\\n\\nnumber of passes/clustering steps: We will clarify our language to use the term \\u201cclustering iteration\\u201d instead of passes/clustering steps. In the fully hard model, an iteration corresponds to the assignment of all labeled and unlabeled points to clusters, and then an update of the means of all clusters. In the fully soft model, an iteration corresponds to computing soft assignments for all points, and then updating the means. In the hard-soft hybrid, we use the \\u201chard\\u201d step to compute a set of cluster means, and then perform a \\\"soft\\\" clustering step in order to update these cluster means.\", \"cluster_specific_variances\": \"\\\\sigma and \\\\sigma_u are learned and are shared across all labeled and all unlabeled clusters respectively. \\\\sigma_c was a typo for \\\\sigma as it is the variance of class clusters. The only exception to learning these variances, as noted, is Section 4.3 where they are fixed.\\n\\ninternal baselines/ablations: We agree with the reviewer on this list of ablations/internal baselines, so much so that we have already experimented with them in the development of the method: the selected multi-modal clustering with hard-soft assignment was best. For exposition we chose to focus on the hard-soft variant as our method and compare to competing works like Ren et al. 
and Finn et al., but for completeness we will include these ablation experiments in our revision to the text (to be posted during the rebuttal period).\\n\\n\\u201cour method can only be used for classification, and not regression\\u201d: While true, this weakness holds for prior prototypical methods too by Snell et al. and Ren et al. so our work is no more and no less limited in this regard.\"}",
"{\"title\": \"Hard to read and relies on unjustified, shifting assumptions\", \"review\": \"Update after Author Rebuttal\\n--------------\\nAfter reading the rebuttal, I'm pleased that the authors have made significant revisions, but I still think more work is needed. The \\\"hard/soft\\\" hybrid approach still lacks justification and perhaps wasn't compared to a soft/soft approach in a fair and fully-correct way (see detailed reply to authors). I also appreciate the efforts on revising clarity, but still find many clarity issues in the newest version that make the method hard to understand let alone reproduce. I thus stand by my rating of \\\"borderline rejection\\\" and urge the authors to prepare significant revisions for a future venue that avoid hybrids of hard/soft probabilities without justification. \\n\\n(Original review text below. Detailed replies to authors are in posts below their responses).\\n\\nReview Summary\\n--------------\\nWhile the focus on variadic learning is interesting, I think the present version of the paper needs far more presentational polish as well as algorithmic improvements before it is ready for ICLR. I think there is the potential for some neat ideas here and I hope the authors prepare stronger versions in the future. However, the current version is unfortunately not comprehensible or reproducible.\\n\\nPaper Summary\\n-------------\\n\\nThe paper investigates developing an effective ML method for the \\\"variadic\\\" regime, where the method might be required to perform learning from few or many examples (shots) and few or many classes (ways). The term \\\"variadic\\\" comes from use in computer science for functions that can a flexible number of arguments. There may also be unlabeled data available in the few shot case, creating semi-supervised learning opportunities.\", \"the_specific_method_proposed_is_called_bande\": \"Bayesian Nonparametric Deep Embedding. The idea is that each data point's feature vector x_i is transformed into an embedding vector h(x_i) using a neural network, and then clustering occurs in the embedding space via a single-pass of the DP-means algorithm (Kulis & Jordan 2012). Each cluster is assumed to correspond to one \\\"class\\\" in the eventual classification problem, though each class might have multiple clusters (and thus be multi-modal).\\n\\nLearning occurs in an episodic manner. After each episode (single-pass of DP-means), each point in a query set is embedded to its feature vector, then fed into each cluster's Gaussian likelihoods to produce a normalized cluster-assignment-probability vector that sums to one. This vector is then fed into a cross-entropy loss, where the true class's nearest cluster (largest probability value) is taken to be the true cluster. This loss is used to perform gradient updates of the embedding neural network.\\n\\nThere is also a \\\"cumulative\\\" version of the method called BANDE-C. 
This version keeps track of cluster means from previous episodes and allows new episodes to be initialized with these.\\n\\nExperiments examine the proposed approach across image categorization tasks on Omniglot, mini-ImageNet, and CIFAR datasets.\\n\\n\\nStrengths\\n---------\\n* I like that many clusters are used for each true class label, which is better than rigid one-to-one assumptions.\\n\\n\\nLimitations\\n-----------\\n* Can only be used for classification, not regression\\n* The DP-means procedure does not account for the cluster-specific variance information that is used at other steps of the algorithm\\n\\n\\nSignificance and Originality\\n----------------------------\\nTo me, the method appears original. Any method that could really succeed across various variadic settings would be significant.\\n\\n\\n\\nPresentation Concerns\\n---------------------\\n\\nI have serious concerns about the presentation quality of this paper. Each section needs careful reorganization as well as rewording.\\n\\n## P1: Algo. 1 contains numerous omissions that make it as written not correct.\\n\\n* the number of clusters count variable \\\"n\\\" is not updated anywhere. As writting this algo can only update one extra cluster beyond the original n.\\n* the variable \\\"c\\\" is unbound in the else clause. You need a line that clarifies that c = argmin_{c in 1 ... n} d_ic\\n\\nWould be careful about saying that \\\"a single pass is sufficient\\\"... you have *chosen* to do only one pass. When doing k-means, we could also make this choice. Certainly the DP-means objective could keep improving with multiple passes.\\n\\n## P2: Many figures and tables lack appropriate captions/labels\", \"table_1\": \"What metric is reported? Accuracy percentage? Not obvious from title/caption. Should also make very clear here how much labeled data was used.\", \"table_2\": \"What metric is reported? Accuracy percentage? Not obvious from title/caption. Should also make how many labeled and unlabeled examples were used easier to find.\\n\\n## P3: Descriptions of episodic learning and overall algorithm clarity\\n\\nReaders unfamiliar with episodic learning are not helped with the limited coverage provided here in 3.1 and 3.2. When exactly is the \\\"support\\\" set used and the \\\"query\\\" set used? How do unlabeled points get used (both support and query appear fully labeled)? What is n? What is k? What is T? Why are some points in Q denoted with apostrophes but not others? Providing a more formal step-by-step description (perhaps with pseudocode) will be crucial.\\n\\nIn Sec. 3.2, the paragraph that starts with \\\"The loss is defined\\\" is very hard to read and parse. I suggest adding math to formally define the loss with equations. What parameters are being optimized? Which ones are fixed?\\n\\nAdditionally, in Sec. 3.2: \\\"computed in the same way as standard prototypical networks\\\"... what is the procedure exactly? If your method relies on a procedure, you should specify it in this paper and not make readers guess or lookup a procedure elsewhere.\\n\\n\\n## P4: Many steps of the algorithm are not detailed\\n\\nThe paper claims to set \\\\lambda using a technique from another paper, but does not summarize this technique. This makes things nearly impossible to reproduce. Please add such details in the appendix.\\n\\nMajor Technical Concerns\\n------------------------\\n\\n## Alg. 
1 concerns: Requires two (not one) passes and mixes hard and soft assingments and different variance assumptions awkwardly\\n\\nThe BANDE algorithm (Alg. 1) has some unjustified properties. Hard assignment decisions which assume vanishing variances are used to find a closest cluster, but then later soft assignments with non-zero variances are used. This is a bit heuristic and lacks justification... why not use soft assignment throughout? The DP means procedure is derived from a specific objective function that assumes hard assignment. Seems weird to use it for convenience and then discard instead of coming up with the small fix that would make soft assignment consistent throughout.\\n\\nFurthermore, The authors claim it is a one pass algorithm, but in fact as written in Alg. 1 it seems to require two passes: the first pass keeps an original set of cluster centers fixed and then creates new centers whenever an example's distance to the closest center exceeds \\\\lambda. But then, the *soft* assignment step that updates \\\"z\\\" requires again the distance from each point to all centers be computed, which requires another pass (since some new clusters may exist which did not when the point was first visited). While the new soft values will be close to zero, they will not be *exactly* zero, and thus they matter. \\n\\n## Unclear if/how cluster-specific variance parameters learned\\n\\nFrom the text on top of page 4, it seems that the paper assumes that there exist cluster-specific variances \\\\sigma_c. However, these are not mentioned elsewhere, only a general (not cluster-specific) label variance \\\\sigma and fixed unlabeled variance sigma_u are used.\\n\\n## Experiments lack comparison to internal baselines\\n\\nThe paper doesn't evaluate sensitivity to key fixed hyperparameters (e.g. \\\\alpha, \\\\lambda) or compare variants of their approach (with and without soft clustering step, with and without multimodality via DP-means). It is difficult to tell which design choices of the method are most crucial.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
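The episodic loss the review above sketches in prose can be written compactly as follows; this is a reconstruction from the review's own description (h the embedding network, \mu_c the cluster means, c^*(y) the true class's best-matching cluster), not a quote from the paper:

```latex
% Query classification and loss as described in the review above.
p(c \mid x) \;=\; \frac{\mathcal{N}\big(h(x);\, \mu_c,\, \sigma^2 I\big)}
                      {\sum_{c'} \mathcal{N}\big(h(x);\, \mu_{c'},\, \sigma^2 I\big)},
\qquad
\mathcal{L}(x, y) \;=\; -\log p\big(c^{*}(y) \mid x\big)
```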
"{\"title\": \"Novelty is unclear\", \"review\": \"The paper proposes a meta-learning method that utilizes unlabeled examples along with labeled examples. The technique proposed is very similar to the one by (Ren et al. 2018), only differing in the choice of a different clustering algorithm (Kulis and Jordan, 2012) instead of soft k-means as used by Ren et al.\\n\\nI feel the contrast to Ren et al, is not provided to the degree it should be. The Appendix paragraph A4 is not sufficient in terms of explaining why this method is conceptually different or significantly better than the related approach. It is hard for me to certify the merits of their work, including explaining the experimental results.\\n\\nI also do not understand the significance of \\\"multi-model clustering\\\" in this context. Also, by their definition of \\\"variadic\\\", how is this more variadic than Ren et al. or Snell et al.?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A work lacking clarity\", \"review\": \"This work proposes a learning method based on deep subspace clustering. The method is formulated by identifying a deep data embedding, where clustering is performed in the latent space by a revised version of k-means, inspired by the work [1]. In this way, the proposed method can adapt to account for uni-modal distributions. The authors propose some variations of the framework based on soft cluster assignments, and on cumulative learning of the cluster means.\\nThe method is tested on several scenarios and datasets, showing promising results in prediction accuracy.\\n\\nThe idea presented in this work is reasonable and rather intuitive. However, the paper presentation is often unnecessarily convoluted, and fails in clarifying the key points about the proposed methodology. The paper makes often use of abstract terms and jargon, which sensibly reduce the manuscript clarity and readability. For this reason, in my opinion, it is very difficult to appreciate the contribution of this work, from both methodological and applicative point of view. \\n\\nRelated to this latter point, the use of the term \\u201cBayesian nonparametric\\u201d is inappropriate. It is completely unclear in which sense the proposed framework is Bayesian, as it doesn\\u2019t present any element related to parameters inference, uncertainty estimation, \\u2026 Even the fact that the method uses an algorithm illustrated in [1] doesn\\u2019t justifies this terminology, as the clustering procedure used here only corresponds to the limit case of a Dirichlet Process Gibbs Sampler when the covariance parameters goes to zero. Moreover, the original procedure requires the iteration until convergence, while it is here applied with a single pass only. The procedure is also known to be sensitive to the order by which the data is provided, and this point is not addressed in this work. \\n\\nFinally, the novelty of the proposed contribution is questionable. To my understanding, it may consist in the use of embedding methods based on the approach provided in [1]. However, for the reasons illustrated above, this is not clear. There is also a substantial amount of literature on deep subspace embeddings that proposes very similar methodologies to the one of this paper (e.g. [2-5]). For this reason, the paper would largely benefit from further clarifications and comparison with respect to these methods. \\n\\n\\n\\n\\n\\n[1] Kulis and Jordan, Revisiting k-means: New Algorithms via Bayesian Nonparametrics, ICML 2012\\n\\n[2] Xie, Junyuan, Ross Girshick, and Ali Farhadi. \\\"Unsupervised deep embedding for clustering analysis.\\\" International conference on machine learning. 2016.\\n[3] Ji, Pan, et al. \\\"Deep subspace clustering networks.\\\" Advances in Neural Information Processing Systems. 2017.\\n[4] Jiang, Zhuxi, et al. \\\"Variational deep embedding: An unsupervised and generative approach to clustering.\\\" IJCAI 2017\\n[5] Kodirov, Elyor, Tao Xiang, and Shaogang Gong. \\\"Semantic autoencoder for zero-shot learning. CVPR 2017.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
H1faSn0qY7 | DL2: Training and Querying Neural Networks with Logic | [
"Marc Fischer",
"Mislav Balunovic",
"Dana Drachsler-Cohen",
"Timon Gehr",
"Ce Zhang",
"Martin Vechev"
] | We present DL2, a system for training and querying neural networks with logical constraints. The key idea is to translate these constraints into a differentiable loss with desirable mathematical properties and to then either train with this loss in an iterative manner or to use the loss for querying the network for inputs subject to the constraints. We empirically demonstrate that DL2 is effective in both training and querying scenarios, across a range of constraints and data sets. | [
"neural networks",
"training with constraints",
"querying networks",
"semantic training"
] | https://openreview.net/pdf?id=H1faSn0qY7 | https://openreview.net/forum?id=H1faSn0qY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BklzRClXlE",
"HylwrZHa1E",
"Hye-hTp4CX",
"Syxwc66ECQ",
"BkgUupaVCX",
"SJl8YjQzRX",
"ryl5-oQMCX",
"SkxcDq7f0m",
"Hye6GmQfC7",
"H1ldgwo3nX",
"HJllffWqn7",
"HJx46y-q37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544912586396,
1544536383257,
1542933929072,
1542933903336,
1542933870503,
1542761342135,
1542761218205,
1542761057647,
1542759188956,
1541351151601,
1541177864513,
1541177275996
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1586/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1586/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1586/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1586/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1586/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1586/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Unfortunately, this paper fell just below the bar for acceptance. The reviewers all saw significant promise in this work, stating that it is intriguing, \\\"novel and provides an interesting solution to a challenging problem\\\" and that \\\"many interesting use cases are clear\\\". AnonReviewer2 particularly argued for acceptance, arguing that the proposed approach provides a very flexible method for incorporating constraints in neural network training. A concern of AnonReviewer2 was that there was no guarantee that this loss would be convex or converge to an optimum while statisfying the constraints. The other two reviewers unfortunately felt that while the proposed approach was \\\"interesting\\\", \\\"promising\\\" and \\\"intriguing\\\", the quality of the paper, in terms of exposition, was too low to justify acceptance. Arguably, it seems the writing doesn't do the idea justice in this case and the paper would ultimately be significantly more impactful if it was carefully rewritten.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A promising approach to include logical constraints in neural network training, but the writing is not quite ready yet.\"}",
"{\"title\": \"Updated score\", \"comment\": \"I've read the new version of the paper, the comments of other reviewers and the answers of the authors and I've decided to increase my score.\"}",
"{\"title\": \"Update of PDF\", \"comment\": \"We now again updated the PDF with a new abstract and introduction which put the work in context and provide examples of DL2's benefits.\"}",
"{\"title\": \"Update of PDF\", \"comment\": \"\\u2192 Presentation of the paper could be improved.\", \"a\": \"We now again updated the PDF with a new abstract and introduction which put the work in context and provide examples of DL2's benefits. We hope this improves the clarity of the examples and we are happy to take further suggestions into account.\"}",
"{\"title\": \"Update of PDF\", \"comment\": \"\\u2192 Abstract, introduction and main body need work to motivate the work and be more clear.\", \"a\": \"We now again updated the PDF with a new abstract and introduction which put the work in context and provide examples of DL2's benefits. We believe we now motivate the approach and its strengths better and we are happy to take further suggestions into account.\"}",
"{\"title\": \"Clarification of Key Questions\", \"comment\": \"\\u2192 Presentation of the paper could be improved.\", \"a\": \"We clarified this point in the write-up now: \\\\delta is not a constant, but a function of \\\\epsilon. Please take a look at the updated notation in our Theorem 1, which should make this explicit. We also provided a proof of Theorem 1 in the Appendix A which provides a constructive proof for the existence of \\\\delta(\\\\epsilon).\"}",
"{\"title\": \"Clarification of Key Questions\", \"comment\": \"\\u2192 Abstract, introduction and main body need work to motivate the work and be more clear.\", \"a\": \"We have investigated this in further experiments, found in Appendix F in the updated version. We investigated how DL2 runtime scales for a simple query with a different number of variables. To explore opposing constraints we optimize over the disjunction of directly opposing constraints with a single variable. By changing parameters we increase how far these two solutions are apart.\\n\\nFinally, to study the run-time behavior in the number of constraints we start from a query for an adversarial example and add up to 1512 additional constraints. We found that DL2 scales linear in most of these dimensions. For up to 8000 variables and most of the opposing constraints, all queries successfully finished in < 0.2s. We found that even adversarial examples with 1000 additional constraints still finish for all queries in < 160s. For 1500 additional constraints we had 4 of 9 queries complete successfully in about 240s each. The others hit the timeout of 300s.\\n\\nIf the reviewer has further suggestions for experiments, we would be be happy to include these.\"}",
"{\"title\": \"Clarification of Key Questions\", \"comment\": \"\\u2192 Clarification on Theorem 1 and \\\\epsilon\", \"a\": \"If our solver fails to find a solution to a query, we cannot determine whether there is no solution or our approach failed to find one. To mitigate it, we run each query several times with different initialization points. In general, determining whether a formula over our fragment is satisfiable is not tractable as it is an instance of an SAT/SMT problem, with a very large number of variables and complex interactions (potentially multiple neural networks).\", \"dl2_is_more_general_than_psl_hl\": \"(i) PSL-HL is restricted to encoding of linear combinations of atoms (see Def. 15 in Bach et al. (2017)) and has closed-form solution, while DL2 is not as restrictive: it allows functions such as neural networks, constraining their outputs and relies on numerical optimization to find a solution, (ii) PSL-HL does not support disjunction over arithmetic rules while DL2 does, (iii) DL2 allows the domain of variables to be \\u211d and not [0,1] as PSL-HL, (iv) more minor: PSL-HL considers a specific instantiation of d(t^1 , t^2), namely |t^1 \\u2212 t^2|, while DL2 can consider other instantiations.\\n\\nOverall, while DL2 and PSL-HL permit similar encodings for some problems, DL2 is more general and more suitable to the domain of interacting with neural networks.\\n\\nLL indeed suffers from the problems outlined in Appendix B. Using PSL-HL arithmetic rules rather than LL in the particular example of Appendix B indeed produces the same encoding as DL2. Note that this is precisely because the example is over [0, 1] and does not contain disjunction. \\n\\nIn the semi-supervised experiment on CIFAR-100 we used the LL variant of PSL. However, as the constraint is logical and not numerical, both LL and PSL-HL produce the same loss.\\n\\nAn example of a query which cannot be encoded in PSL-HL can be found in our unsupervised learning experiment which contains disjunctions and uses R as a domain.\", \"references\": \"[1] Bach, Stephen H., et al. \\\"Hinge-loss markov random fields and probabilistic soft logic.\\\" arXiv preprint arXiv:1505.04406(2015).\\n\\n[2] Hu, Zhiting, et al. \\\"Harnessing deep neural networks with logic rules.\\\" arXiv preprint arXiv:1603.06318 (2016).\\n\\n\\u2192 DL2 and \\\"Adversarial Sets for Regularising Neural Link Predictors\\\" (Minervini et al., UAI17)\"}",
"{\"title\": \"Summary of Provided Clarifications\", \"comment\": [\"We thank the reviewers for their insightful comments. Based on the reviews, we clarified the following key questions and updated the paper with a new revision:\", \"Added a proof of Theorem 1 in both directions, provided an example and updated notation to clarify what \\\\delta is (Appendix A).\", \"Provided all architecture details used in our experiments (Appendix D).\", \"Performed additional experiments on scalability of DL2 as requested (Appendix F).\", \"Changed \\\\epsilon to \\\\xi in our notation to avoid confusion (Section 3).\", \"Fixed typos and minor notation issues pointed out by the reviewers.\", \"Clarified relation of DL2 to prior work, both logic (PSL) and training in our response to AnonReviewer2. If the reviewer is satisfied, we will update the paper with that.\", \"Finally, we will provide a new introduction and abstract this week. We hope this can help put the work in context and make it more understandable. If the reviewers are satisfied with our answers, we will also update the paper to incorporate these as well. We are happy to answer more questions.\"]}",
"{\"title\": \"Interesting generalization of work on incorporating logical queries into neural networks, many compelling use cases\", \"review\": \"Summary\\n-------\\nThis paper proposes DL2, a framework for turning queries over parameters and input, output pairs to neural networks into differentiable loss functions, and an associated declarative language for specifying these queries. The motivation for this work is twofold. The first is to allow for the specification of additional domain knowledge during training. For example, if a user expects that the predicted probabilities of some output classes should be correlated for all predictions, this constraint can be enforced during weight learning. Second, it allows users to search for specific inputs that satisfy specified conditions. In this way, DL2 can capture popular applications like searching for adversarial examples by querying for inputs close to a known input of class A but that the network predicts is class B with high confidence.\\n\\nThe paper provides a concise specification of the query language (a mixture of logical and numeric operators) and asserts a theorem that the given procedure for constructing the query loss produces a function such that anytime the function is 0, the constraints are satisfied. No proof is given, but I cannot see a counterexample. There is also a statement about the converse relationship, that when the loss is above some threshold it implies that the query is not satisfied. \\n\\nExperiments are conducted on supervised, semi-supervised, and unsupervised computer vision tasks. I particularly liked the experiment on semi-supervised learning with CIFAR-100. By replacing labeled examples with domain knowledge about the relationships among classes in CIFAR-100, the paper demonstrates a compelling use case for DL2.\\n\\nThe primary technical challenge is the non-convex optimization required to search for a solution to a query. Experiments show that the loss functions created by DL2 are often solved quickly and correctly, but not always\\n\\nStrengths\\n---------\\nThe framework is expressive enough that many interesting use cases are clear, from specifying background knowledge during training to model inspection. The experiments cover a range of these use cases, demonstrating that the constructed optimization objectives usually work as intended.\\n\\nWeaknesses\\n-----------\\nThe statement in Theorem 1 regarding the converse case is unclear, because it says that the limit of \\\\delta as \\\\epsilon approaches zero is zero, but it is not explained what \\\\epsilon is or how it changes. If \\\\epsilon is the threshold that can often be used in the query, it is not obvious that every query contains exactly one \\\\epsilon. If other cases exist, it is unclear how Theorem 1 applies.\\n\\nIt remains unknown how to handle the case when queries fail. AS the paper points out, if a query fails, it cannot be determined whether no solution exists or if the optimization simply failed to find a solution. Of course, this is a computationally hard in general.\\n\\nRelated Work\\n------------\\nThere are a couple of points from related work that would be good to add to the paper.\\n\\nFirst, the paper \\\"Adversarial Sets for Regularising Neural Link Predictors\\\" (Minervini et al., UAI17) is a prior paper that generates adversarial examples to handle restrictions on inputs which may not exist in the training set. 
The paper claims DL2 is the first to do this, but I believe this paper is an earlier example that does so, albeit for a particular problem. DL2 is certainly more general.\\n\\nSecond, the description of the limitations of rule distillation (Hu et al., ACL16), particularly in Appendix A is not fully accurate. The expressivity of PSL is greater than stated (see Bach et al., JMLR17 for a full description). In particular, the DL2 loss function for z = (1, 1) can be expressed exactly in PSL using what it calls arithmetic rules. It is not clear that this affects the findings of the semi-supervised learning experiment significantly, although I would appreciate a clarification of the authors. PSL by construction produces convex loss functions, and so the constraint that all outputs for a group of classes is either high OR low would probably not work well.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"still needs improvement\", \"review\": \"The paper tackles the interesting problem of combining logical approaches with neural networks in the form of translating a logical formula into a non-negative loss function for a neural network. \\nThe approach is novel and more general than previous approaches and the math is sound. However, I feel that the method is not well presented. Sadly the introduction does not set the method into context or give a motivation. The abstract is very short and misses key information. Indeed, even the more technical parts sometimes lack clarity and assume familiarity with a wide range of methods. \\n\\nThe experiments are well thought out and show the promise of the method when encoding performance measures such as entropy into the constraints. It would have been interesting to additionally see other kinds of constraints such as purely logical formulas that do not have a specific aim (robustness or performance or otherwise) but simply state preconditions that should be fulfilled. It would furthermore be interesting to inspect the corner cases of the proposed method such as what happens if two constraints are nearly opposing each other and so on. \\n\\n\\n\\nTo conclude, the presented method is clearly novel and provides an interesting solution to a challenging problem. However the paper in the current form does not fully adhere to the standards of conferences such as ICLR. I suggest rewriting especially the abstract and the introduction and then submitting to a different venue as the approach itself seems promising. Additionally, as only very limited comparison experiments can be performed the method itself should be more thoroughly inspected by performing, for example, edge-case or time/number of constraints inspections.\", \"minor_remarks\": \"Hyperparameters such as batch size not reported\\nSpelling mistake in line 2, page 2 \\u201cLipschitz condition\\u201d\\nWhen mentioning \\u201cprior work\\u201d in the introduction a citation is needed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"In this paper the authors propose DL2 a system for training and querying neural networks with logical constraints\\n\\nThe proposed approach is intriguing but in my humble opinion the presentation of the paper could be improved. Indeed I think that the paper is bit too hard to follow. \\nThe example at page 2 is not clearly explained.\\n\\nIn Equation 1 the relationship between constants S_i and the variables z is not clear. Is each S_i an assignment to z?\\n\\nI do not understand the step from Eq. 4 to Eq. 6. Why does arg min become min?\\n\\nAt page 4 the authors state \\\"we sometimes write a predicate \\\\phi to denote its indicator function 1_\\\\phi\\\". I\\u2019m a bit confused here, when is the indicator function used in equations 1-6?\\n\\nWhat kind of architecture is used for implementing DL2? Is a feedforward network used? How many layers does it have? How many neurons for each layer? No information about it is provided by authors.\\n\\nIt is not clear to me why DL2/training is implemented in PyTorch and DL2/querying in TensorFlow. Are those two separate systems? And why implementing them using different frameworks?\\n\\nIn conclusion, I\\u2019m a bit insecure about the rating to give to this paper, the system seems interesting, but several part are not clear to me.\\n\\n[Minor comments]\\nIt seems strange to me to use the notation L_inf instead of B_\\\\epsilon to denote a ball.\\n\\nIn theorem 1. \\\\delta is a constant, right? It seems strange to me to have a limit over a constant.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
HJgTHnActQ | Local Image-to-Image Translation via Pixel-wise Highway Adaptive Instance Normalization | [
"Wonwoong Cho",
"Seunghwan Choi",
"Junwoo Park",
"David Keetae Park",
"Tao Qin",
"Jaegul Choo"
] | Recently, image-to-image translation has seen a significant success. Among many approaches, image translation based on an exemplar image, which contains the target style information, has been popular, owing to its capability to handle multimodality as well as its suitability for practical use. However, most of the existing methods extract the style information from an entire exemplar and apply it to the entire input image, which introduces excessive image translation in irrelevant image regions. In response, this paper proposes a novel approach that jointly extracts out the local masks of the input image and the exemplar as targeted regions to be involved for image translation. In particular, the main novelty of our model lies in (1) co-segmentation networks for local mask generation and (2) the local mask-based highway adaptive instance normalization technique. We demonstrate the quantitative and the qualitative evaluation results to show the advantages of our proposed approach. Finally, the code is available at https://github.com/AnonymousIclrAuthor/Highway-Adaptive-Instance-Normalization | [
"image to image translation",
"image translation",
"exemplar",
"mutlimodal"
] | https://openreview.net/pdf?id=HJgTHnActQ | https://openreview.net/forum?id=HJgTHnActQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkxMU5dPeE",
"Syg6sFuweN",
"B1eKjl23k4",
"S1l0L62kkE",
"Bke2XVeJy4",
"rkxbCWI9RQ",
"HyxSg-Lc0Q",
"Byx_lTB5AQ",
"SyxsQhH9RQ",
"BygNzp6j3X",
"S1eex_jDhm",
"SJxpmdAlnQ"
],
"note_type": [
"official_comment",
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545206345567,
1545206180915,
1544499361363,
1543650646095,
1543599140223,
1543295433102,
1543295212860,
1543294192375,
1543293987048,
1541295372008,
1541023719954,
1540577317137
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1585/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1585/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1585/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1585/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1585/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1585/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1585/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1585/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1585/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1585/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1585/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1585/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"A response addressing the issues\", \"comment\": \"\\u2018Issue of multimodality depending on given exemplars\\u2019\\nTo address this issue, we improved Fig. 6, as can be found at http://123.108.168.4:5000/figure/page/6\\nIn The left macro column of the figure, (a) indicates the hair color translation result from brown to blonde while (b) represents the translation from non-facial hair to facial hair. LOMIT shows an outstanding performance compared to the baseline models in both reflecting the style of an exemplar and keeping the irrelevant region, such as a background and a face in the hair color translation, intact. Specifically, the third and the fifth columns of (a) and the second and the fifth columns of (b) show the results due to the noise in the style information extracted from the background. Besides, they also apply the extracted style to the irrelevant region of the input images, distorting the color and the tone of the face. On the other hand, the results of the baseline models in (b) show the inconsistent appearances of the facial hair with the exemplars, and less diversity in the results, compared to LOMIT.\\n\\n\\u2018Topic of inception score\\u2019\\nWhen using multiple attributes, the associated region generally tends to be large. For example, transferring not just a facial hair but also a gender attribute would involve almost all the face region as the generated local mask. Thus, the idea of using only the partial region of an image involved in image translation may not have much impact on the performance improvement in the case of multi-attribute translation, compared to a single attribute translation. Nonetheless, the ablation test demonstrates a better performance though not outstanding. Furthermore, through a mask used for an exemplar, LOMIT enables the user to choose a style to transfer from the exemplar, which can be found at the rightmost column of the first and the second macro columns in Fig. 4 (please refer to http://123.108.168.4:5000/figure/page/4). We believe this approach has great potentials in diverse applications. For example, the technique can be effectively applied when the different styles in the same attribute (e.g., brown and blonde hair colors) co-exist in an exemplar. By explicitly specifying a style to transfer, a user can designate a concrete target style, and the model can conduct an appropriate translation in terms of the user.\\n\\n[1] Multimodal unsupervised image-to-image translation. \\n[2] Diverse image-to-image translation via disentangled representations.\"}",
"{\"title\": \"Point-by-point response addressing the issues\", \"comment\": \"\\u2018Updates on Fig. 4\\u2019\\nReflecting the reviewer\\u2019s comments, we significantly updated Fig. 4, as shown in http://123.108.168.4:5000/figure/page/4\\nFirst, we included the resulting masks after user edits. Regarding makeup-lipstick example, the reason for a small difference is because the `lipstick\\u2019 and `makeup\\u2019 attributes, which are highly correlated, are difficult to disentangle, and thus the noticeable difference is unlikely when applying only one of them. Instead, we replaced this example with the new combination of attributes (young and makeup), as seen in the last macro column of Fig. 4. Additionally, we strengthened the comparison results of LOMIT with other baseline methods, putting them separately in Fig. 6, which can be found at http://123.108.168.4:5000/figure/page/6\\n\\n\\u2018Topic of interaction\\u2019\\nDirectly indicating the region of interest in a given image may be a good alternative approach, but doing so from scratch may take much time especially when such a region has a complex shape, e.g., detailed hair regions. A user may have not have a clear idea on the region boundaries in the case of facial expressions. In this respect, partially editing the mask initially generated by the model can potentially take less time and give a better idea on where to edit, compared to the manual generation of the mask from scratch. \\n\\n\\u2018How co-segmentation module is trained\\u2019\\nAs the reviewer mentioned, we do not provide any groundtruth segmentation labels corresponding to the output of the co-segmentation module. Instead, our co-segmentation module is trained indirectly in an end-to-end manner using the image reconstruction loss (Section 4.2) and the auxiliary classifier loss (Section 4.4). That is, in order to preserve the original image as much as possible for the image reconstruction while still being properly classified as having the target attribute, the segmentation output is generated as the minimum possible region to clearly transfer the target attribute. \\n\\n\\u2018Applicability of semantic segmentation\\u2019\\nOne can definitely use a semantic segmentation approach in the place of a co-segmentation module in LOMIT, provided that pixel-level, pre-defined class labels are available. The co-segmentation approach we proposed in LOMIT can still work even when such labels are not available. We will include this discussion in the camera-ready version.\\n\\n\\u2018Issue of pink colors\\u2019\\nTo solve the issue, we changed the color scale of the overlaid mask to a grayscale one, as seen in http://123.108.168.4:5000/figure/page/23\\n\\n\\u2018Missing description about AU 20\\u2019\\nAU 20 indicates a specific facial muscle corresponding to 'Lip stretcher,' which extends both corners of the mouth sideways. We will clarify this in the camera-ready version.\"}",
"{\"metareview\": \"The paper received mixed ratings. The proposed idea is quite reasonable but also sounds somewhat incremental. While the idea of separating foreground/background is reasonable, it also limits the applicability of the proposed method (i.e., the method is only demonstrated on aligned face images). In addition, combining AdaIn with foreground mask is a reasonable idea but doesn\\u2019t sound groundbreakingly novel. The comparison against StarGAN looks quite anecdotal and the proposed method seems to cause only hairstyle changes (but transfer with other attributes are not obvious). In addition, please refer to detailed reviewers\\u2019 comments for other concerns. Overall, it sounds like a good engineering paper that might be better fit to computer vision venue, but experimental validation seems somewhat preliminary and it\\u2019s unclear how much novel insight and general technical contributions that this work provides.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"title\": \"rating updated, more improvement needed\", \"comment\": \"Thanks for your response and the revision. My rating is updated. However, here are further confusions/questions regarding to the rebuttal answers. These may be helpful for improving the paper further.\\n\\n\\n1) It is still hard to read Figure 4. For example, it is always better to show the mask with a real manipulation, rather than manually drawing like 2nd row non-smile column. It is very hard to see what changes in the makeup-Lipstick macro column. Although it is good to see authors make Figure 4 more self-contained and specify the demonstration of interactive transfer, it is straightforward to ask why manually removing regions over the heatmap (mask), why not directly indicate the regions of interest to transfer directly from the RGB image?\\n\\n2a) It is a little confusing that, on the one hand the authors say \\\"the co-segmentation module is trained in a completely end-to-end manner, without any direct supervision\\\", on the other \\\"the used attribute label during training consists of `Smile\\u2019 and `Hair color\\u2019\\\". Maybe how to train such a model is straightforward to other people, I don't understand how it is trained by reading the paper. Should I interpret training the cosegmentation as a weakly supervised learning that the attribute labels are provided during training, while the pixel-wise annotations are not provided? \\n\\n2b) If interactive transfer is possible as discussed above in (1), a semantic segmentation should be also able to provide the region guide for style transfer on specified regions (by the segmentation model). Please explain clearly why such a segmentation method is incapable of targeted region transfer?\\n\\n4) As for the pink colors, it should be improved further, even though the images are changed to alleviate this problem. I'm not sure if directly showing the heatmaps makes more sense, as overlaid images really effects the readability of the figure (as the pink color dominates the images denoted by 1-m, 1-m1 and 1-m2).\\n\\n5) What is the meaning of AU 20? As readers (including me) may not be familiar with this dataset, missing the explanation of the index make it hard to follow.\"}",
"{\"title\": \"Rating unchanged\", \"comment\": \"Thanks for your response and the revision. Some issues are fixed but the results are still not convincing. As to the comparison with StarGAN in Fig. 6, the output hair color is not so consistent to the exemplar image in some cases, and the diversity in the blonde attribute is naturally very limited. Considering that one core contribution of this paper is the diversity controlled by the exemplar image, the paper should show how the output will vary given different exemplar images in the core experiments.\\nIn addition, the inception score improvement from LOMIT_single to LOMIT is also limited, especially for the multi-attribute translation (FH+G) setting.\\nTherefore, I keep my initial rating.\"}",
"{\"title\": \"Issue of github link\", \"comment\": \"We sincerely apologize for the mishap, and we have changed the URL to an anonymous one.\\nThank you for the notification.\"}",
"{\"title\": \"Point-by-point response addressing the issues\", \"comment\": \"1)\\n\\u2018The capability of sigle-attribute translation '\\nIf LOMIT is trained for a multi-attribute translation, it does not normally support a single-attribute translation. However, as discussed in the introduction as well as pointed out by reviewer, LOMIT allows a user to perform this task by manually editing the mask. For example, to transfer only the facial expression but not the hair color, one can remove the hair region in the mask of an exemplar while keeping the mouth and eye regions. The details are shown in Fig. 4 as well as in Subsection 5.3.\\n\\n\\u2018Issue of semantic segmentation'\\nThe proposed idea of adopting the semantic segmentation is reasonable because it may generate different masks for different attributes. However, to perform it, pre-defined labels are explicitly required while LOMIT learns to extract the masks for both an input and an exemplar images without any direct supervison on the masks. In this sense, the co-segmentation module we proposed in LOMIT has more flexibility and extensibility in extracting masks corresponding to diverse domains than semantic segmentation. \\n\\n2)\\n\\u2018How to train co-segmentation module'\\nThe co-segmentation module is trained in a completely end-to-end manner, without any direct supervision. Similar to the previous studies (Pumarola et al. [1], Chen et al. [2], Yang et al. [3], Ma et al. [4], Mejjati et al. [5]), how it works is related to the domain adversarial loss and the multi-attribute translation loss. Suppose we are translating the hair color from black to blonde. Both losses encourage the model to generate a blonde person by minimizing each loss. However, LOMIT performs translation only to the target region of a mask (extracted by the co-segmentation module) through the highway adaptive instance normalization. Thus, in order to minimize the losses, the networks learn to generate a proper region as a mask.\\n\\n\\u2018Explanation on capturing the eyes and mouth in Fig. 2 and Fig. 3'\\nThe masks in Fig. 2 and Fig. 3 only capture the mouth, eyes and hair because the used attribute label during training consists of `Smile\\u2019 and `Hair color\\u2019.\\n\\n\\u2018Issue of capturing different shape in Fig. 4'\\nThe input of the co-segmentation module is a content code, so the region of mouth from two other content codes does not contain a different shape. Note that a content encoder of LOMIT is trained for encoding a common underlying structure except a style, such as the mustache.\\n\\n'Possibility of using semantic segmentation'\\nDue to the reasons discussed above, we do not think that the semantic segmentation can be an alternative to the proposed co-segmentation module in LOMIT.\\n\\n3)\\n\\u2018Issue of disentangled representations'\\nFollowing MUNIT (Huang et al. [6]) and DRIT (Lee et al. [7]), we decompose an input image into the content and style codes. We define \\u201ccontent\\u201d as common underlying structure across all domains (e.g., pose of a face, location and shape of eyes), and \\u201cstyle\\u201d as a representation of the structure (e.g., color and facial expression). We have added the definitions of each content and style. Please refer to Section 2.\\n\\n\\u2018Discussion on extendibility'\\nAs other models (MUNIT [6], DRIT [7], StarGAN (choi et al. [8])), LOMIT cannot perform translation involving unseen labels. 
However, our model can cover an intra-domain variation though an unseen style is taken from an exemplar. Fig. 6 we have updated can clarify the point.\\n\\n4)\\n\\u2018Topic of pink color in heatmap image'\\nThe pink color in the figure comes from the background color of an image. Specifically, to visualize the heatmap image, we overlaid a mask on a corresponding image. Thus, the background color (pink) of the image affected the heatmap image. To alleviate the problem, we have replaced the image of the figures. Please refer to Fig. 2 and Fig. 3.\\n\\n5)\\n\\u2018Explanation on dark pattern' \\t\\nThe dark patterns on the mouth corresponding to AU 20 have been generated because the region on the mouth was not involved in the translation during training. To clarify the points, we have updated Fig. 5 and its description in Subsection 5.3.\\n\\n6)\\n\\u2018Issue of github page'\\nWe sincerely apologize for the mishap, and we have changed the URL to an anonymous one. Furthermore, we supplement readme and ipynb file to avoid any uncertainty.\\n \\n\\nWe have uploaded an updated paper containing several improvements which we have highlighted.\\nFinally, the demo website of LOMIT can be found at http://123.108.168.4:5000\\n \\n[1] Ganimation: Anatomically-aware facial animation from a single image.\\n[2] Attention-gan for object transfiguration in wild images.\\n[3] Unsupervised image translation with self-regularization and attention.\\n[4] Exemplar guided unsupervised image-to-image translation.\\n[5] Unsupervised attention-guided image to image translation.\\n[6] Multimodal unsupervised image-to-image translation.\\n[7] Diverse image-to-image translation via disentangled representations.\\n[8] Stargan: Unified generative adversarial networks for multi-domain image-to-image translation.\"}",
"{\"title\": \"The principle of performing a single-attribute translation\", \"comment\": \"\\u2018Topic of single-attribute translation'\\nTo be concrete, LOMIT is trained for a multi-attribute translation (Gender and Facial Hair) while the output masks are interactively manipulated and forwarded into the networks to conduct a single-attribute translation (Gender or Facial Hair). We apologize for a confusing description of the figure. In order to clarify the figure, we have updated the figure and its description. Please refer to Fig. 4 and Subsection 5.3.\\n\\n\\u2018Issue of github page\\u2019\\nWe sincerely apologize for the mishap, and we have changed the URL to an anonymous one.\\n\\nWe have uploaded an updated paper containing several improvements which we have highlighted. \\nFinally, the demo website of LOMIT can be found at http://123.108.168.4:5000\"}",
"{\"title\": \"Point-by-point response addressing the issues\", \"comment\": \"1)\\n\\u2018Issue of conflict in introduction'\\nWe understand this issue, so we have improved the part as follows:\\n\\u201cPrevious studies have achieved such multimodal outputs by adding a random noise (BicycleGAN, Zhu et al. [1]) or taking a user-selected exemplar image (MakeupGAN, Chang et al. [2]). Recently, MUNIT (Huang et al. [3]) and DRIT (Lee et al. [4]) combine those two approaches \\u2026\\u201d\\n\\n2)\\n\\u2018Topic of baseline (StarGAN)'\\nThe reason we excluded StarGAN (Choi et al. [5]) from a baseline is because it does not support multimodal outputs. However, as the reviewer advised, StarGAN is a state-of-the-art method in attribute translation, so we have conducted an additional experiment and updated the results in the paper. Please refer to Subsection 5.3 and Fig. 6 in the paper.\\n\\n3)\\n\\u2018Discussion on local mask'\\nCompared to other approaches utilizing an attention mask [6, 7, 8, 9, 10], LOMIT jointly generates another attention mask for an exemplar. It plays a role of determining a relevant region from which to extract out a style. In order to justify the effectiveness of separating the style into a foreground and a background style, we have conducted an ablation test between LOMIT (equipped with a mask for an exemplar) and LOMIT_single (without the mask for an exemplar).\\nFirst, we train each model for translating multi-attributes (Facial Hair and Gender). We then conduct the test using the inception score, the results are shown as follows.\\n\\n \\t \\t Facial Hair \\t Gender \\t FH+G\", \"lomit\": \"(0.3105, 0.2697) | (0.2348, 0.2173) | (0.2069, 0.2323)\", \"lomit_single\": \"(0.3040, 0.2556) | (0.2260, 0.2150) | (0.2029, 0.2343)\\n\\n, where the numbers in parentheses denote the mean and the standard deviation respectively. As can be seen, LOMIT shows the better results compared with LOMIT_single in both single- and multi-attribute translation. Based on these results, we verify that utilizing the mask of an exemplar improves the performance of our model.\\n\\n4)\\n\\u2018Diverse, different outputs due to exemplar images within a single domain'\\nThe role of an exemplar image is to convey the variation information within a particular attribute (bright vs. relatively dark in the case of a blonde hair), which allows to generate multiple possible translation outputs within a single attribute. This is so-called multi-modality in image translation, which our method as well as other existing methods, such as DRIT and MUNIT, has in common. Specifically, these models including outs decompose (or disentangle) the image into a content and a style features. Due to its reconstruction loss not only between x_1 and x_{1->2->1} but also between x_2 and x_{2->1->2}, the style feature (say, in x_2) should convey not only the domain label information but also the details within its domain. 
We clarified such discussion in Subsection 5.3.\\n\\n5)\\n\\u2018Issue of github page'\\nWe sincerely apologize for the mishap, and we have changed the URL to an anonymous one.\\n \\n\\nWe have uploaded an updated paper containing several improvements which we have highlighted.\\nFinally, the demo website of LOMIT can be found at http://123.108.168.4:5000\\n \\n[1] Toward multimodal image-to-image translation.\\n[2] Pairedcyclegan: Asymmetric style transfer for applying and removing makeup.\\n[3] Multimodal unsupervised image-to-image translation.\\n[4] Diverse image-to-image translation via disentangled representations.\\n[5] Stargan: Unified generative adversarial networks for multi-domain image-to-image translation.\\n[6] Ganimation: Anatomically-aware facial animation from a single image.\\n[7] Attention-gan for object transfiguration in wild images.\\n[8] Unsupervised image translation with self-regularization and attention.\\n[9] Exemplar guided unsupervised image-to-image translation.\\n[10] Unsupervised attention-guided image to image translation.\"}",
"{\"title\": \"Very structured but seemingly effective image 2 facial image translation.\", \"review\": \"The paper deals with image to image (of faces) translation solving two main typical issues: 1) the style information comes from the entire region of a given exemplar, collecting information from the background too, without properly isolating the face area; 2) the extracted style is applied to the entire region of the target image, even if some parts should be kept unchanged. The approach is called LOMIT, and is very elaborated, with source code which is available (possible infringement of the anonymity, Area Chair please check). In few words, LOMIT lies on a cosegmentation basis, which allows to find semantic correspondences between image regions of the exemplar and the source image. The correspondences are shown as a soft mask, where the user may decide to operate on some parts leaving unchanged the remaining (in the paper is shown for many alternatives: hair, eyes, mouth). Technically, the paper assembles other state of the art techniques, (cosegmentation networks, adaptive instance normalization via highway networks) but it does it nicely. The major job in the paper lies in the regularization part, where the authors specify each of their adds in a proper way. Experiments are nice, since for one of the first times provide facial images which are pleasant to see. One thing I did not like were on the three set of final qualitative results, where gender change results in images which are obviously diverse wrt the source one, but after a while are not communicating any newer thing. Should have been better to explore other attributes combo.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"paper should be improved before publishing\", \"review\": \"Summary--\\nThe paper tries to address an issue existing in current image-to-image translation at the point that different regions of the image should be treated differently. In other word, background should not be transferred while only foreground of interest should be transferred. The paper propose to use co-segmentation to find the common areas to for image translation. It reports the proposed method works through experiments.\\n\\nThere are several major concerns to be addressed before considering to publish.\\n\\n1) The paper says that \\\"For example, in a person\\u2019s facial image translation, if the exemplar image has two attributes, (1) a smiling expression and (2) a blonde hair, then both attributes have to be transferred with no other options\\\", but the model in the paper seems still incapable of transferring only one attribute. Perhaps an interactive transfer make more sense, while co-segmentation does not distinguish the part of interest to the user. Or training a semantic segmentation make more sense as the semantic segment can specify which region to transfer.\\n\\n2) As co-segmentation is proposed to \\\"capture the regions of a common object existing in multiple input images\\\", why does the co-segmentation network only capture the eye and mouth part in Figure 2 and 3, why does it capture the mouth of different shape and style in the third macro column in Figure 4 instead of eyes? How to train the co-segmentation module, what is the objective function? Why not using a semantic segmentation model?\\n\\n3) The \\\"domain-invariant content code\\\" and the \\\"style code\\\" seem rather subjective. Are there any principles to design content and style codes? In the experiments, it seems the paper considers five styles to transfer as shown in Table 1. Is the model easy to extend to novel styles for image translation?\\n\\n4) What does the pink color mean in the very bottom-left or top-right heatmap images in Figure 2? There is no pink color reference in the colorbar.\\n\\n5) Figure 5: Why there is similariy dark patterns on the mouth? Is it some manual manipulation for interactive transfer?\\n\\n6) Though it is always good to see the authors are willing to release code and models, it appears uncomfortable that github page noted in the abstract reveals the author information. Moreover, in the github page,\\neven though it says \\\"an example is example.ipynb\\\", the only ipynb file contains nothing informative and this makes reviewers feel cheated.\\n\\nMinor--\\nThere are several typos, e.g., lightinig.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Comparison and experiment setting are not explained well\", \"review\": \"This paper proposes an unpaired image-to-image translation method which applies the co-segmentation network and adaptive instance normalization techniques to enable the manipulation on the local regions.\", \"pros\": [\"This paper proposes to jointly learn the local mask to make the translation focus on the foreground instead of the whole image.\", \"The local mask-based highway adaptive instance normalization apply the style information to the local region correctly.\"], \"cons\": \"* There seems a conflict in the introduction (page 1): the authors clarify that \\u201cprevious methods [1,2,3] have a drawback of ....\\u201d and then clarify that \\u201c[1,2,3] have taken a user-selected exemplar image as additional input ...\\u201d. \\n* As the main experiments are about facial attributes translation, I strongly recommend to the author to compare their work with StarGAN [4]. \\n* It is mentioned in the introduction (page 2) that \\u201cThis approach has something in common with those recent approaches that have attempted to leverage an attention mask in image translation\\u201d. However, the differences between the proposed method with these prior works are not compared or mentioned. Some of these works also applied the mask technique or adaptive instance normalization to the image-to-image translation problem. I wonder the advantages of the proposed method compared to these works.\\n* The experiment setting is not clear enough. If I understand correctly, the face images are divided into two groups based on their attributes (e.g. smile vs no smile). If so, what role does the exemplar image play here? Since the attribute information has been modeled by the network parameters, will different exemplar image lead to different translation outputs? \\n* The github link for code should not provide any author information.\\n\\n[1] Multimodal Unsupervised Image-to-Image Translation\\n[2] Diverse Image-to-Image Translation via Disentangled Representations\\n[3] Exemplar Guided Unsupervised Image-to-Image Translation\\n[4] StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation\\n\\nOverall, I think the proposed method is well-designed but the comparison and experiment setting are not explained well. My initial rating is weakly reject.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HylTBhA5tQ | The Limitations of Adversarial Training and the Blind-Spot Attack | [
"Huan Zhang*",
"Hongge Chen*",
"Zhao Song",
"Duane Boning",
"Inderjit S. Dhillon",
"Cho-Jui Hsieh"
] | The adversarial training procedure proposed by Madry et al. (2018) is one of the most effective methods to defend against adversarial examples in deep neural net- works (DNNs). In our paper, we shed some lights on the practicality and the hardness of adversarial training by showing that the effectiveness (robustness on test set) of adversarial training has a strong correlation with the distance between a test point and the manifold of training data embedded by the network. Test examples that are relatively far away from this manifold are more likely to be vulnerable to adversarial attacks. Consequentially, an adversarial training based defense is susceptible to a new class of attacks, the “blind-spot attack”, where the input images reside in “blind-spots” (low density regions) of the empirical distri- bution of training data but is still on the ground-truth data manifold. For MNIST, we found that these blind-spots can be easily found by simply scaling and shifting image pixel values. Most importantly, for large datasets with high dimensional and complex data manifold (CIFAR, ImageNet, etc), the existence of blind-spots in adversarial training makes defending on any valid test examples difficult due to the curse of dimensionality and the scarcity of training data. Additionally, we find that blind-spots also exist on provable defenses including (Kolter & Wong, 2018) and (Sinha et al., 2018) because these trainable robustness certificates can only be practically optimized on a limited set of training data. | [
"Adversarial Examples",
"Adversarial Training",
"Blind-Spot Attack"
] | https://openreview.net/pdf?id=HylTBhA5tQ | https://openreview.net/forum?id=HylTBhA5tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryl8ns2-l4",
"SylEXOI9AX",
"HkgrM_e9R7",
"SylN6rJ5Cm",
"B1gGnv0Y07",
"BylEr05YAX",
"HkeeRxhmAX",
"rkeMtGiyCQ",
"BJlit-i1Rm",
"rkxhfWj1C7",
"BylRCJCM6m",
"Hygsieas2X",
"rJe1e3jj3X",
"H1gVYFVS3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544829870060,
1543297052219,
1543272460846,
1543267771855,
1543264170331,
1543249468153,
1542861000453,
1542595193964,
1542594947226,
1542594836151,
1541754837757,
1541292194596,
1541286887357,
1540864380131
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1584/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1584/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1584/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1584/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1584/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1584/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1584/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Reviewers are in a consensus and recommended to accept after engaging with the authors. Please take reviewers' comments into consideration to improve your submission for the camera ready.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Paper decision\"}",
"{\"title\": \"Additional small edits before the revision period closes\", \"comment\": \"We have addressed all the concerns of AnonReviewer3. During the discussion with AnonReviewer3, we found that there might be some confusions on how we generate adversarial examples from blind-spot images, and how we calculate the $\\\\ell_p$ distortions for adversarial examples. Thus we slightly revise Section 3.3 and 4.4 to make things clear. We hope this will make our paper easier to follow.\\n\\nAgain we thank all the reviewers for the encouraging and constructive comments!\\n\\nThanks,\\nPaper1584 Authors\"}",
"{\"title\": \"Thank you so much for your comments and considerations\", \"comment\": \"We really appreciate the reviewer's fruitful suggestions, and we see where the confusion is.\\n\\nIn Sec. 3.3, blind-spot attack uses scaling and shifting to generate new natural reference images x' = \\\\alpha * x + \\\\beta. We still apply C\\\\&W L_inf attacks on x\\u2019 to generate adversarial images x'_adv for all \\\\alpha and \\\\beta. We will revise our paper to make this clearer.\\n\\nThank you again for your comments and we will make the writing better.\\n\\nThank you!\\nPaper 1584 Authors\"}",
"{\"title\": \"Further discussion on 'blind-spot' attack\", \"comment\": \"Thanks for the clarification.\\n\\n\\\"We use alpha and beta to obtain new natural reference images instead of adversarial images.\\\"\\n\\nThis is a key point which makes reviewer confusing, since in Sec. 3.3, blind-spot attacks seem to generate adversarial images only using scaling and shifting. However, in experiments x\\u2019 = \\\\alpha * x + \\\\beta is used to generate natural reference image. I am Okay with that only if the experiment is consistent, e.g., applying C\\\\&W attack on x\\u2019 = \\\\alpha * x + \\\\beta for all \\\\alpha and \\\\beta discussed in this paper. \\n\\nPlease carefully revise Sec. 3.3. and experiment section to make the aforementioned point clearer. \\n\\nBased on the authors's current response, I increase my score to 6.\"}",
"{\"title\": \"Thank you for the questions! Here are our further clarifications.\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you for your response and further questions. We would like to answer them as below:\\n\\n\\u201cI assumed that the distortion condition will be examined as $| \\\\alpha x + \\\\beta |_infty \\\\leq \\\\eps$, right?\\u201d\\nNo, this is not how we examine the Linf distortion success condition in Table 2.\\n\\nWe use alpha and beta to obtain new natural reference images instead of adversarial images. For example, for an original image x from the test set, we scale and shift this image to obtain a new natural reference image x\\u2019 = \\\\alpha * x + \\\\beta. Then we run C&W attack on x\\u2019 to obtain its adversarial image x\\u2019_adv. Note that x\\u2019 = \\\\alpha * x + \\\\beta is not considered as an adversarial image but as a natural image since in the blind-spot attack we are finding the blind-spots (where the model do not have good robustness) in the natural data distribution.\\n\\nThe distortion condition is examined as the distance between x\\u2019 and x\\u2019_adv: $|x\\u2019 - x\\u2019_adv|_\\\\infty \\\\leq \\\\eps$, but not $| \\\\alpha x + \\\\beta |_infty \\\\leq \\\\eps$. We will try to make this clearer in our revision.\\n\\n\\u201cIn the last column of Table 2, alpha = 0.7 & beta = 0.15, I wonder why ASRs under thr = 0.3 and thr = 0.21 are the same.\\u201d\\nThe reason is that most adversarial examples generated from blind-spot images with alpha=0.7 and beta=0.15 have small distortions, less than both 0.3 and 0.21. So they are considered successful in both criteria. \\n\\n\\u201cit quite surprising that ASRs for the two cases (alpha = 0.7, beta = 0, thr = 0.21) and (alpha = 0.7, beta = 0.15, thr = 0.21) have a large gap. Any rationale behind that?\\u201d\\nThe ASR for the case with non-zero beta is much higher than beta=0 case indicates that scaling+shifting is more effective than scaling alone to reduce the robustness of the model under attack. Scaling+shifting is a more powerful blind-spot attack.\\n\\nWe are glad to discuss further with you if you have any additional questions. Thanks again for the constructive feedback!\\n\\nThank you!\\nPaper 1584 Authors\"}",
"{\"title\": \"Responses clarified the reviewer's previous questions\", \"comment\": \"\\\"We want to emphasize that the \\u201cblind-spot attack\\u201d is a class of attacks, which exploits the gap between training and test data distributions (see our definition in Section 3.3). The linear transformation used in our paper is one of the simplest attacks in this class. If we know the details of this specific attack before training, it is possible defend against this specific simple attack.\\\"\\n\\nOk, I agree with the authors at this point. \\n\\n\\\"The stricter criterion actually makes our attack success rates *lower* rather than higher. Finding adversarial examples with smaller distortions is harder than finding adversarial examples with large distortions. As an extreme case, if the criterion is distortion<=0, the attack success rate will always be zero, since we cannot fool the model using unmodified natural images. In Table 2, the success rates under the column 0.27 are strictly lower than the numbers under the column 0.3. We consider this additional stricter criterion because images after scaling are within a smaller range, so we also restrict the noise to be smaller, to keep the same signal-to-noise ratio and make an absolutely fair comparison. If we don\\u2019t use this stricter criterion, our attack success rates will look even better.\\n\\\"\\n\\nYes, the authors are correct that finding adversarial examples with smaller distortions is harder than finding adversarial examples with large distortions, thus $\\\\alpha \\\\epsilon$ will make attack success rate (ASR) LOWER. Based on that, I checked Table 2, which is still unclear to me. \\n\\nIn the last column of Table 2, alpha = 0.7 & beta = 0.15, I wonder why ASRs under thr = 0.3 and thr = 0.21 are the same. Since an attack is considered as successful if its Linf distortion is less than given thrs, I assumed that the distortion condition will be examined as $| \\\\alpha x + \\\\beta - x |_infty \\\\leq \\\\eps$, right? If so, it quite surprising that ASRs for the two cases (alpha = 0.7, beta = 0, thr = 0.21) and (alpha = 0.7, beta = 0.15, thr = 0.21) have a large gap. Any rationale behind that?\\n\\n\\nI will adjust my score based on the authors' further clarification.\"}",
"{\"title\": \"We will really appreciate it if you could provide us more feedback before the revision period ends\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you again for your insightful and constructive comment!\\n\\nWe hope that we have addressed your questions. We understand you may be discussing our paper with other reviewers and you can take your time. As the revision period is closing soon, we will really appreciate it if you could let us know if you find anything unclear in our response, or have any further concerns about our paper. We will try our best to revise our paper based on your suggestions before the revision period ends.\\n\\nThank you!\\nPaper 1584 Authors\"}",
"{\"title\": \"Reply to All Reviewers\", \"comment\": \"During the rebuttal period, we further enhanced our experiments by conducting blind-spot attacks on two certified, state-of-the-art adversarial training methods, including (Wong & Kolter 2018) and (Singha et al. 2018). Surprisingly, although they can provably increase robustness on the training set, they still suffer from blind-spot attacks by slightly transforming the test set images. See Tables 4, and 5 in the Appendix. The attack success rates go significantly higher after a slight scale and shift on both MNIST and Fashion MNIST test sets, for both two defense models.\\n\\nAdditionally, we also add results for a relatively larger dataset, GTS (german traffic sign) in Appendix (Section 6.2). The results (in histograms) we observed are similar to the ones we observed on CIFAR.\\n\\nWith these new results, our conclusion is not limited to the adversarial training method proposed by (Madry et al. 2018). Our paper uncovers the weakness of many state-of-the-art adversarial training methods, even including those with theoretical guarantees on the training dataset. By identifying a new class of adversarial attacks, even in its simplest form (small shift + scale), many good defense methods become vulnerable again. \\n\\nIn conclusion, we show that many state-of-the-art strong adversarial defense methods, even including those with robustness certificates on training datasets, cannot well generalize their robustness on unseen test data from a very slightly changed domain. This partially explains the difficulty in applying adversarial training on larger datasets like CIFAR and ImageNet. We believe that our results are significant. We also think these experiments are important to further understanding adversarial examples and proposing better defenses.\"}",
"{\"title\": \"Thank you for the questions! We have updated our paper and answered your questions below.\", \"comment\": \"Thank you for the encouraging comments. First of all, we would like to mention that we add more experiments on two additional state-of-the-art strong and certified defense methods, and observe that they are also vulnerable to blind-spot attacks. Please see our reply to all reviewers.\\n\\nWe agree that the K-L based method is complicated and computationally extensive. Fortunately, we only need to compute it once per dataset. To the best of our knowledge, currently, there is no perfect metric to measure the distance between a training set and a test set. Ordinary statistical methods (like kernel two-sample tests) do not work well due to the high dimensionality and the complex nature of image data. So the measurement we proposed is a best-effort attempt that can hopefully give us some insights into this problem. \\n\\nAs suggested by the reviewer, we added a new metric based on the mean of \\\\ell_2 distance on the histogram in Section 4.3. The results are shown in Table 1 (under column \\u201cAvg. normalized l2 Distance\\u201d). The results align well with our conclusion: the dataset with significant better attack success rates has noticeably larger distance. It further supports the conclusion of our paper and indicates that our conclusion is distance metric agnostic.\\n\\nWe hope that we have made everything clear, and we again appreciate your comments. Let us know if you have any additional questions.\\n\\nThank you!\\nPaper 1584 Authors\"}",
"{\"title\": \"Thank you for the questions! We have updated our paper and answered your questions below.\", \"comment\": \"Thank you for your insightful comments to help us improve our paper. First of all, we would like to mention that we add more experiments on two additional state-of-the-art strong and certified defense methods, and observe that they are also vulnerable to our proposed attacks. Please see our reply to all reviewers.\\n\\nHere are our responses to your concerns in \\u201cCons\\u201d and \\u201cMinor comments\\u201d.\\n\\nAlthough we were not able to provide theoretical analysis in this paper, our proposed attacks are very effective on state-of-the-art adversarial training methods, and we believe our conclusions\\nCurrently, there is relatively few theoretical analysis in this field in general, and many analysis makes unpractical assumptions. We believe our results can inspire other researcher\\u2019s theoretical research.\\n\\nRegarding the \\u201cblind-spot attack\\u201d phrase, we are open to suggestions from the reviewers. Other phrases we considered including \\u201cevasion attack\\u201d, \\u201cgeneralization gap attack\\u201d and \\u201cscaling attack\\u201d. Which one do you think is a better option?\", \"regarding_the_distances_in_figure_3\": \"Thanks for raising this concern. We have added a note to clarify this issue. The difference in distance can be partially explained by the sparsity in an adversarially trained model. As suggested in [1], the adversarially trained model by Madry et al. tends to find sparse features (see Figure 5 in [1]), where many components are zero. Thus, the distances tend to be overall smaller.\", \"regarding_the_results_in_table_1\": \"In our old version, we only used the adversarially trained network. In our revision, we added K-L divergence computed from both adversarially trained and naturally trained networks. Additionally, we also add a new distance metric proposed by AnonReviewer1. The K-L divergences by both networks, as well as the newly added distance metric, show similar observations.\", \"regarding_adding_more_visualizations\": \"We added some more visualizations in Fig 10 in the appendix. It is worth noting that the Linf distortion metric used in adversarial training is sometimes not a good metric to reflect visual differences. However, the test images under our proposed attack indeed have much smaller Linf distortions.\\n\\nWe hope that we have answered all your questions, and we are glad to discuss with you if you have any further concerns about our paper.\\n\\n[1] Tsipras, Dimitris, et al. \\\"Robustness may be at odds with accuracy.\\\" arXiv preprint arXiv:1805.12152 (2018).\\n\\nThank you!\\nPaper 1584 Authors\"}",
"{\"title\": \"Thank you for the questions! We have updated our paper and answered your questions below.\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you for your insightful questions. They are very helpful for us to improve the paper. We would like to answer your 4 questions as below.\\n\\na) We added more figures with k=10, 100, 1000 in the appendix (in main text, we used k=5). Our main conclusion does not change regardless the value of k: there is a strong correlation between attack success rate and the distance between test examples to training dataset. A larger distance usually implies a higher attack success rate. The rational to use this metric is that it is simple, and nearest neighbour based methods are usually robust to hyper-parameter selection. We don\\u2019t want our observations depend on hyper-parameters during distance measurement.\\n\\nb) Song et al. (2018) does not have ordinary metrics like distortion or (ordinary) attack success rates to compare with. In their attack, the input is a random noise for GAN, and they generate adversarial images from scratch. In typical adversarial attacks, people start from a specific reference (natural) image x and add adversarial distortion to obtain x_adv. In their paper, adversarial images are generated by GANs directly and there is no reference images at all, so distortion cannot be calculated (see definitions 1 and 2 in their paper). They have to conduct user study to determine what is the true class label for a generated image, and see if the model will misclassify it. The success rate is the model\\u2019s misclassification rate from user study.\\n\\nIn our paper, our attacks first conduct slight transformations on a natural test image x to obtain x\\u2019, and then run ordinary gradient based adversarial attacks on x\\u2019 to obtain x\\u2019_adv. We have a reference image x\\u2019, so we can compute the distortion between x\\u2019 and x\\u2019_adv, and determine the success by a certain criterion on distortion. This setting is different from Song et al. (2018) so we cannot directly compare distortion and success rates with them.\\n\\nc) We want to emphasize that the \\u201cblind-spot attack\\u201d is a class of attacks, which exploits the gap between training and test data distributions (see our definition in Section 3.3). The linear transformation used in our paper is one of the simplest attacks in this class. If we know the details of this specific attack before training, it is possible defend against this specific simple attack. However, it is always possible to find some different blind-spot attacks (for example, by using a generative model). Rather than starting a new arm race between attacks and defenses, our argument here is to show the fundamental limitations of adversarial training -- it is hard to cover all the blind-spots during training time because it is impossible to eliminate the gap between training and test data especially when data dimension is high. \\n\\nd) The stricter criterion actually makes our attack success rates *lower* rather than higher. Finding adversarial examples with smaller distortions is harder than finding adversarial examples with large distortions. As an extreme case, if the criterion is distortion<=0, the attack success rate will always be zero, since we cannot fool the model using unmodified natural images. In Table 2, the success rates under the column 0.27 are strictly lower than the numbers under the column 0.3. 
We consider this additional stricter criterion because images after scaling are within a smaller range, so we also restrict the noise to be smaller, to keep the same signal-to-noise ratio and make an absolutely fair comparison. If we don\\u2019t use this stricter criterion, our attack success rates will look even better.\\n\\n\\nIn our updated revision, we also include additional experiments on GTS dataset, as long as two other state-of-the-art adversarial training methods by Wong et al. and Sinha et al.. We observe very similar results on all these methods and datasets, further confirming the conclusion of our paper.\\n\\nWe hope our answers resolve all the doubts you had with our paper. We would like to further discuss with you if you have any unclear things or additional questions, and hope you can reconsider the rating of our paper. \\n\\nThank you!\\nPaper 1584 Authors\"}",
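A small sketch of the two ingredients discussed in the replies above: the linear "blind-spot" transformation x' = alpha * x + beta, and a k-nearest-neighbour distance from a test point to the training set. The values of alpha, beta, and k are illustrative, and the code is a sketch rather than the authors' implementation.

    import numpy as np

    def blind_spot_transform(x, alpha=0.7, beta=0.15):
        # x holds pixel values in [0, 1]; clipping keeps the result a valid image.
        return np.clip(alpha * x + beta, 0.0, 1.0)

    def knn_distance(point, train_points, k=5):
        # Mean distance from `point` to its k nearest neighbours in the
        # training set (larger values suggest the point lies farther from
        # the training distribution).
        d = np.linalg.norm(train_points - point, axis=1)
        return float(np.sort(d)[:k].mean())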
"{\"title\": \"An interesting paper analyzing the effect of the distance between training and test set on robustness of adversarial training\", \"review\": \"This paper provides some insights on influence of data distribution on robustness of adversarial training. The paper demonstrates through a number of analysis that the distance between the training an test data sets plays an important role on the effectiveness of adversarial training. To show the latter, the paper proposes an approach to measure the distance between the two data sets using combination of nonlinear projection (e.g. t-SNE), KDE, and K-L divergence. The paper also shows that under simple transformation to the test dataset (e.g. scaling), performance of adversarial training reduces significantly due to the large gap between training and test data set. This tends to impact high dimensional data sets more than low dimensional data sets since it is much harder to cover the whole ground truth data distribution in the training dataset.\", \"pros\": [\"Provides insights on why adversarial training is less effective on some datasets.\", \"Proposes a metric that seems to strongly correlate with the effectiveness of adversarial training.\"], \"cons\": [\"Lack of theoretical analysis. It could have been nice if the authors could show the observed phenomenon analytically on some simple distribution.\", \"The marketing phrase \\\"the blind-spot attach\\\" falls short in delivering what one may expect from the paper after reading it. The paper would read much better if the authors better describe the phenomena based on the gap between the two distribution than using bling-spot. For some dataset, this is beyond a spot, it could actually be huge portion of the input space!\"], \"minor_comments\": [\"I believe one should not compare the distance shown between the left and right columns of Figure 3 as they are obtained from two different models. Though the paper is not suggesting that, it would help to clarify it in the paper. Furthermore, it would help if the paper elaborates why the distance between the test and training dataset is smaller in an adversarially trained network compared to a naturally trained network.\", \"Are the results in Table 1 for an adversarially trained network or a naturally trained network? Either way, it could be also interesting to see the average K-L divergence between an adversarially and a naturally trained network on the same dataset.\", \"Please provide more visualization similarly to those shown in Fig 4.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Clear and simple idea, insightful experiments.\", \"review\": \"The paper is well written and the main contribution, a methodology to find \\u201cblind-spot attacks\\u201d well motivated and differences to prior work stated clearly.\\n\\nThe empirical results presented in Figure 1 and 2 are very convincing. The gain of using a sufficiently more complicated approach to assess the overall distance between the test and training dataset is not clear, comparing it to the very insightful histograms. Why for example not using a simple score based on the histogram, or even the mean distance? Of course providing a single measure would allow to leverage that information during training. However, in its current form this seems rather complicated and computationally expensive (KL-based). As stated later in the paper the histograms themselves are not informative enough to detect such blind-spot transformation. Intuitively this makes a lot of sense given that the distance is based on the network embedding and is therefore also susceptible to this kind of data. However, it is not further discussed how the overall KL-based data similarity measure would help in this case since it seems likely that it would also exhibit the same issue.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Reviewer's summery: interesting idea/findings but with questions\", \"review\": \"In this paper, the authors associated with the generalization gap of robust adversarial training with the distance between the test point and the manifold of training data. A so-called 'blind-spot attack' is proposed to show the weakness of robust adversarial training. Although the paper contains interesting ideas and empirical results, I have several concerns about the current version.\\n\\na) In the paper, the authors mentioned that \\\"This simple metric is non-parametric and we found that the results are not sensitive to the selection of k\\\". Can authors provide more details, e.g., empirical results, about it? What is its rationale?\\n\\nb) In the paper, \\\"We find that these blind-spots are prevalent and can be easily found without resorting to complex\\ngenerative models like in Song et al. (2018). For the MNIST dataset which Madry et al. (2018) demonstrate the strongest defense results so far, we propose a simple transformation to find the blind-spots in this model.\\\" Can authors provide empirical comparison between blind-spot attacks and the work by Song et al. (2018), e.g., attack success rate & distortion? \\n\\nc) The linear transformation x^\\\\prime = \\\\alpha x + \\\\beta yields a blind-spot attack which can defeat robust adversarial training. However, given the linear transformation, one can further modify the inner maximization (adv. example generation) in robust training framework so that the $\\\\ell_infty$ attack satisfies max_{\\\\alpha, \\\\beta} f(\\\\alpha x + \\\\beta) subject to \\\\| \\\\alpha x + \\\\beta \\\\|\\\\leq \\\\epsilon. In this case, robust training framework can defend blind-spot attacks, right? I agree with the authors that the generalization error is due to the mismatch between training data and test data distribution, however, I am not convinced that blind-spot attacks are effective enough to robust training. \\n\\nd) \\\"Because we scale the image by a factor of \\\\alpha, we also set a stricter criterion of success, ..., perturbation must be less\\nthan \\\\alpha \\\\epsilon to be counted as a successful attack.\\\" I did not get the point. Even if you have a scaling factor in x^\\\\prime = \\\\alpha x + \\\\beta, the universal perturbation rule should still be | x - x^\\\\prime |_\\\\infty \\\\leq \\\\epsilon. The metric the authors used would result in a higher attack success rate, right?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1gTShAct7 | Learning to Learn without Forgetting by Maximizing Transfer and Minimizing Interference | [
"Matthew Riemer",
"Ignacio Cases",
"Robert Ajemian",
"Miao Liu",
"Irina Rish",
"Yuhai Tu",
"and Gerald Tesauro"
] | Lack of performance when it comes to continual learning over non-stationary distributions of data remains a major challenge in scaling neural network learning to more human realistic settings. In this work we propose a new conceptualization of the continual learning problem in terms of a temporally symmetric trade-off between transfer and interference that can be optimized by enforcing gradient alignment across examples. We then propose a new algorithm, Meta-Experience Replay (MER), that directly exploits this view by combining experience replay with optimization based meta-learning. This method learns parameters that make interference based on future gradients less likely and transfer based on future gradients more likely. We conduct experiments across continual lifelong supervised learning benchmarks and non-stationary reinforcement learning environments demonstrating that our approach consistently outperforms recently proposed baselines for continual learning. Our experiments show that the gap between the performance of MER and baseline algorithms grows both as the environment gets more non-stationary and as the fraction of the total experiences stored gets smaller. | [
"transfer",
"performance",
"continual learning",
"interference",
"mer",
"future gradients",
"likely",
"experiments",
"interference learning",
"interference lack"
] | https://openreview.net/pdf?id=B1gTShAct7 | https://openreview.net/forum?id=B1gTShAct7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJe0zK4beE",
"HkxQpF3zk4",
"B1xXbsgkJ4",
"rJl0zU-cCQ",
"SkeOiAJ9RQ",
"SJlIlAJqCm",
"Bkxpap1cAm",
"rye41gla2Q",
"H1l7x3R_h7",
"B1eZ4nh_nm",
"ByxHXBo8h7",
"Hyxl43qlnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment"
],
"note_created": [
1544796438482,
1543846330657,
1543600891020,
1543276053553,
1543270047832,
1543269870018,
1543269828542,
1541369820403,
1541102570895,
1541094441337,
1540957469226,
1540561960330
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1583/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1583/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1583/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1583/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1583/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1583/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1583/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1583/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1583/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1583/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1583/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1583/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": [\"Pros:\", \"novel method for continual learning\", \"clear, well written\", \"good results\", \"no need for identified tasks\", \"detailed rebuttal, new results in revision\"], \"cons\": \"- experiments could be on more realistic/challenging domains\\n\\nThe reviewers agree that the paper should be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-review\"}",
"{\"title\": \"Updated review\", \"comment\": \"Thank you for your thorough reply. I'm satisfied with the updated draft, it's much cleaner and easy to follow. Most of my comments have been addressed and incorporated in the updated draft. I am upgrading my rating.\"}",
"{\"title\": \"Keeping my score\", \"comment\": \"I'm satisfied with the extra information provided by the authors and I'm keeping my score. The improvements suggested by the other reviewers will substantially help the manuscript and should be implemented, but I believe this paper should be accepted.\"}",
"{\"title\": \"Response to Good paper, more RL experiments and ablations would improve it substantially\", \"comment\": \"Thank you for your great suggestions about the RL experiments. We have made substantial revisions to the RL experiment sections in the main text and appendix. Additionally, we will still add more ablation experiments that we have performed to our charts for the final draft.\\n\\nTo clarify, the y axis in Catcher refers to the number of fruits caught during the full game span. We have tried to make this more clear within our reinforcement learning experiment details in Appendix M.1. \\n\\nIn the final draft we will provide charts that details the single-task performances after 25k steps, 150k steps and asymptotic performance. For example, we report some of our results for Flappy Bird averaged across runs below to give you an idea of the comparative performance of DQN-MER. \\n\\nSingle Task DQN on Flappy Bird\\n=======================\", \"25k_step_single_task_results\": \"DQN Task 0 at 25k steps =-1.13\\nDQN Task 1 at 25k steps =-0.47\\nDQN Task 2 at 25k steps =-2.66\\nDQN Task 3 at 25k steps =-3.95\\nDQN Task 4 at 25k steps =-4.14\\nDQN Task 5 at 25k steps =-4.95\", \"150k_step_single_task_results\": \"DQN Task 0 at 150k steps = 23.73\\nDQN Task 1 at 150k steps = 19.34\\nDQN Task 2 at 150k steps = 13.65\\nDQN Task 3 at 150k steps = 6.91\\nDQN Task 4 at 150k steps = 8.02\\nDQN Task 5 at 150k steps = -0.92\", \"1m_step_single_task_results\": \"DQN Task 0 at 1M steps = 28.08\\nDQN Task 1 at 1M steps = 25.56\\nDQN Task 2 at 1M steps = 17.72\\nDQN Task 3 at 1M steps = 17.72\\nDQN Task 4 at 1M steps = 14.49\\nDQN Task 5 at 1M steps = 10.00\\n\\nContinual Learning with DQN-MER on Flappy Bird\\n=======\", \"continual_learning_results_after_25k_steps_on_the_task\": \"DQN-MER Task 0 after training on Task 0 (at 25k steps) = 1.32\\nDQN-MER Task 1 after training on Task 1 (at 50k steps) = 11.92\\nDQN-MER Task 2 after training on Task 2 (at 75k steps) = 19.42\\nDQN-MER Task 3 after training on Task 3 (at 100k steps) = 21.98\\nDQN-MER Task 4 after training on Task 4 (at 125k steps) = 15.30\\nDQN-MER Task 5 after training on Task 5 (at 150k steps) = 8.46\", \"continual_learning_results_after_training_on_all_6_tasks\": \"DQN-MER Task 0 at 150k steps = 36.63\\nDQN-MER Task 1 at 150k steps = 26.72\\nDQN-MER Task 2 at 150k steps = 19.83\\nDQN-MER Task 3 at 150k steps = 14.63\\nDQN-MER Task 4 at 150k steps = 11.06\\nDQN-MER Task 5 at 150k steps = 8.46\\n\\nClearly DQN-MER performs better at training the first task and experiences positive forward transfer for the remaining tasks over what is possible just training for 25k steps on a single task. In most cases, DQN-MER achieves similar performance to the DQN that takes 1 million steps and achieves asymptotic performance. On the first three tasks DQN-MER performs better and it performs a bit worse for the later tasks where it has less time to train. There does not seem to a price paid by DQN-MER in these experiments for not forgetting the easier tasks. Actually, we find that on the final tasks, DQN-MER achieves significant transfer from easier tasks and achieves better performance than the single task DQN does after even training on those tasks alone for 150k steps. We have also conducted experiments performed using a DQN with reservoir sampling, finding that it consistently underperforms a DQN with typical recency-based sampling in the RL settings we explore. 
In the final draft, we will include updated charts with the results of DQN with reservoir sampling and DQN-MER with recency-based sampling added. We really appreciate you suggesting these kinds of experiments and we look forward to improving our charts in the final draft to provide much more context for understanding our RL results.\"}",
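Reservoir sampling, the buffer strategy discussed throughout these replies, can be sketched in a few lines. This is a generic sketch of the classic algorithm (Vitter's Algorithm R), not the authors' code; `capacity` and the item type are placeholders.

    import random

    class ReservoirBuffer:
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = []
            self.n_seen = 0

        def add(self, item):
            # After n_seen additions, each item seen so far sits in the
            # buffer with equal probability capacity / n_seen.
            self.n_seen += 1
            if len(self.items) < self.capacity:
                self.items.append(item)
            else:
                j = random.randrange(self.n_seen)
                if j < self.capacity:
                    self.items[j] = item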
"{\"title\": \"Response to A promising approach to continual learning that combines experience replay with meta-learning\", \"comment\": \"Thank you for your detailed review and comments about our work.\\n\\nYou bring up an interesting question related to the effect of varying buffer sizes. Based on our experiments, we found that the train-test generalization gap has a complicated relationship with buffer size. The network tends to learn the data that is in the memory buffer at the end of training to approximately perfect accuracy. Intuitively, the network will tend to overfit even more on the buffer data as the buffer becomes smaller. The test set accuracy tends to be higher when the buffer is larger and generalization becomes better as overfitting on the items in the buffer is less of an issue. That being said, the training set accuracy does not necessarily follow the pattern of the accuracy on the items in the buffer. As the model has been potentially trained as little as one step on some examples many training steps ago, models that tend to generalize poorly to the test set also generalize poorly to some parts of the training set that are not included in the buffer. \\n\\nIn order to address your comments about our ablation studies, we have revamped Table 6 of Appendix K to include more experiments to help make our findings clearer. We included, based on your suggestion, experiments demonstrating that adaptive optimizers like Adam and RMSProp do not account for the gap between ER and MER. Particularly for smaller buffer sizes, these approaches seem to overfit more on the buffer and actually hurt generalization in comparison to simple SGD. We also added detail on the performance of the different variants of MER proposed in algorithms 1, 5, and 6. Additionally, we have included new experiments about the impact of the buffer strategy, including those showing how reservoir sampling can also improve GEM although it still slightly underperforms ER. We have also conducted experiments performed using a DQN with reservoir sampling, finding that it consistently underperforms a DQN with typical recency-based sampling in the RL settings we explore. In the final draft, we will include updated charts with the results of DQN with reservoir sampling and DQN-MER with recency sampling added. \\n\\nThank you for your comment about the ambiguity in our experiments. In addition to retained accuracy, we have now also included learned accuracy (LA) which represents the average accuracy for each task directly after learning that task. As you can see in our updated experiments, MER consistently achieves the best performance for this metric as well as retained accuracy. While it is true that attempting to approximate the multi-task setting could potentially result in interference from other tasks, our proposed regularization is seeking to minimize this interference and maximize transfer across tasks which should mitigate the potential for dissimilar tasks to have a negative effect on learning. \\n\\nWe have found that MER, for example in algorithm 6, is not particularly sensitive to the gamma hyperparameter. Overall, for a fixed gamma*alpha which functions as an effective learning rate, we see fairly consistent performance when varying gamma and alpha. In the final draft we will include a chart demonstrating this in the appendix. \\n\\nWe will provide detailed charts in the final draft including performance results for a DQN with reservoir sampling and a single task DQN. 
Regarding your comments about Flappy Bird, we find that a DQN with MER achieves approximately the asymptotic performance for the single task DQN by the end of training for most tasks. On the other hand, DQN with reservoir sampling achieves worse performance than the standard DQN, so it is clear that, in this particular setting where a later task is subsumed in previous tasks, keeping easy experiences alone does not account for the benefit of DQN-MER.\"}",
"{\"title\": \"Part 2 of Response to Nice intuitions on how to think about transfer and interference, but not good enough technical contributions\", \"comment\": \"Main Concern #4) We are not sure that we totally follow your intuition. When you consider the effective loss function being optimized over in the offline case (which we are discussing in Equation 4) the extra L(xj,yj) term really only has the effect of increasing the priority of the traditional supervised learning loss function rather than the regularization term. This effect should be largely arbitrary because it can be absorbed by tuning alpha. We have edited the text to further emphasize this point. We report it in this way because this is consistent with what we do in our implementation.\\n\\nMain Concern #5) Thank you for this suggestion as this really improves our discourse. Following (Lopez-Paz & Ranzato, NIPS 2017) in addition to retained accuracy, we now also report backward transfer / interference (BTI) and forward transfer / interference (FTI). Unfortunately, forward transfer only makes sense for single headed settings with correlated tasks, which only applies to our MNIST-Rotations experiments. We include these results in Table 5 of Appendix K. As such, we report the accuracy on a task directly after learning that task (LA) for all of our experiments to express plasticity to incoming tasks. We can see in all cases that the high retained accuracy achieved by MER is the byproduct of the best balance between learned accuracy (LA) and backward transfer / interference (BTI). \\n\\nMain Concerns #6 and #7) In order to address your question about getting rid of reservoir sampling, we have added experiments using the buffer strategy from (Lopez-Paz & Ranzato, NIPS 2017) instead to our ablation experiments in Table 6 of Appendix L. Our experiments demonstrate that reservoir sampling results in the best performance for all methods. ER and GEM perform similarly regardless of the buffer management policy. We have preliminary results for MER without using reservoir sampling as well which we will include in the final draft. Regardless of buffer strategy, MER results in considerable improvements on top of both ER and GEM, especially for small buffer sizes. Thank you for mentioning the computational efficiency of MER. In Figure 2 we highlight the performance characteristics on Omniglot for which we use CNN models in a supervised learning setting. We highlight that MER achieves clearly the best tradeoff between learning performance and computation time as methods like GEM have a difficult time scaling to this kind of architecture. We have worked to make it clearer in the text that we use CNNs here in addition to in our RL DQN experiments. \\n\\nMain Concern #8) Thank you for your comment. We have proposed three variants of MER in this work which we detail in algorithms 1,5, and 6 in the updated draft. What you are asking for with one straightforward Reptile loop is detailed in algorithm 5, where algorithms 1 and 6 provide different mechanisms of adding more weight to the current example. We provide results for all variants of these models and not just algorithm 1 in Table 6 of Appendix L and provide more detail about the connection between the different approaches in Appendix H and I. We summarize these results in the second paragraph related to Question 6 in Section 6 of the main text. Algorithm 5 results in significant gains over ER and GEM in all cases. 
Additionally, algorithms 1 and 6 result in further gains on top of that by increasing the prioritization of the current example. \\n\\nMinor Comment #1) Thank you for pointing out the possible confusion here. We have added Footnote 1 to the abstract in order to help clarify this confusion at the onset of the paper. In this work, we focus on algorithms that are agnostic to task boundaries, so we really mean both gradients with respect to unseen examples of the current task and gradients with respect to unseen examples of unseen tasks. \\n\\nMinor Comments #2 and #3) Thank you for the comment. This is a good point. We have added Appendix A to make our definition of the problem and nonstationary setting more rigorous. \\n\\nMinor Comment #4) Thank you for bringing our attention to this issue. We now provide a comprehensive overview of reservoir sampling in Appendix F and algorithm 3. \\n\\nMinor Comments #5 and #6) Thank you for these suggestions. We have addressed them in the revised submission.\"}",
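The single-loop Reptile update that the reply refers to (a short inner loop of SGD steps, then an interpolation back toward the starting parameters) can be sketched as follows. `model`, `loss_fn`, `batches`, and the hyper-parameter values are illustrative assumptions; this shows one generic Reptile loop for a model whose state_dict contains only floating-point tensors, not the exact MER variants in algorithms 1, 5, and 6.

    import copy
    import torch

    def reptile_step(model, batches, loss_fn, alpha=0.01, gamma=0.1):
        theta_old = copy.deepcopy(model.state_dict())
        opt = torch.optim.SGD(model.parameters(), lr=alpha)
        for x, y in batches:  # inner loop: plain SGD on each sampled batch
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        # Outer (Reptile) step: move only part of the way from the old
        # parameters toward the parameters reached by the inner loop.
        theta_new = model.state_dict()
        for name in theta_new:
            theta_new[name] = theta_old[name] + gamma * (theta_new[name] - theta_old[name])
        model.load_state_dict(theta_new)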
"{\"title\": \"Part 1 of Response to Nice intuitions on how to think about transfer and interference, but not good enough technical contributions\", \"comment\": \"Thank you for your detailed review and questions. We will address each comment individually:\\n\\nMain Concern #1) Thank you for pointing out that the terminology used in our submitted version may be confusing. As you pointed out, it is important to make clear that many of the main ideas we used in our paper including the concepts of transfer and interference in forward and backward directions, the link between transfer and weight sharing, and the idea of involving gradient alignment in a formulation for continual learning have been explored before. The main contribution of the transfer-interference tradeoff we propose in this work is a novel perspective on the goal of gradient alignment for the continual learning problem. We have added additional details in the abstract, Section 1, Section 2, and Appendix B in an attempt to make the comparative novelty of our approach clearer. The transfer-interference tradeoff view of continual learning can be very useful as this temporally symmetric view of this tradeoff in relation to weight sharing leads to a natural meta-learning perspective of continual learning. We have attempted to make this clearer in Figure 1 and Section 2 Footnote 3. Moreover, we have added Appendix C to make the connection with weight sharing more explicit. \\n\\nHowever, our operational measures of transfer and interference are in fact the same as forward and backward transfer considered in (Lopez-Paz & Ranzato, NIPS 2017). Following the terminology of (Lopez-Paz & Ranzato, NIPS 2017), we simply use the term \\u201ctransfer\\u201d to refer to our temporally symmetric view of the problem that does not make a distinction between the forward and backward direction. We use \\u201cinterference\\u201d as is common in the literature to refer to the case where transfer is negative. Intransigence and forgetting are also very related to our work as well as the stability-plasticity dilemma. Intransigence and forgetting measure very similar phenomenon to the metrics learned accuracy (LA) and backward transfer and interference (BTI) that we have added to our experiments. We should clarify that we do not consider the way we measure performance to be novel or noteworthy. We have tried to emphasize this by adding additional performance measures such as backward transfer (BTI) and forward transfer (FTI) as used in (Lopez-Paz & Ranzato, NIPS 2017) to our experiments. \\n\\nMain Concern #2) We have tried to make it clear at the beginning of Section 2 that these operational statements only hold at an instant in time with a set of parameters theta. Because we are considering both data points to be evaluated by the same set of parameters, these equations hold despite the fact that the data points may be drawn from different tasks. This is in fact very similar to the instantaneous notion of transfer considered for continual learning in (Lopez-Paz & Ranzato, NIPS 2017) with the main distinction being that we consider transfer on the example level and not the task level. Obviously, you are right that gradients with respect to the parameters at different points in time may be out of date, which would mean these equations wouldn\\u2019t hold. 
However, it is important to note that we do not implement this case even in the continual learning setting as replayed memories are always considered with the current parameters theta along with the current example. It is true that the notion of generalizing based on this learning about transfer and interference into the future will itself be a non-stationary learning problem. This is because as the parameters change, the notion of good updates for transfer and interference with past examples changes as well. That being said, we are also stabilizing learning for this non-stationary process with experience replay. \\n\\nMain Concern #3) Thank you for your comment. We would first like to clarify that our experiments on Omniglot would be considered \\u201cmulti-head\\u201d (Chaudhry et al., ECCV 2018). We have updated the text to make this clearer. We have also added a new metric learned accuracy (LA) representing performance on a task right after learning that task to our supervised learning experiments and made the task switches clearer for our RL experiments to directly address your concern. Empirically speaking we find that MER results in the best LA in all cases. Despite using a single head, MER is apparently able to efficiently navigate the transfer-interference tradeoff of weight sharing to achieve good LA while at the same time achieving good backward transfer and interference (BTI) performance.\"}",
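The operational notion of transfer and interference at a fixed parameter vector theta can be checked numerically as a dot product between per-example loss gradients: a positive value indicates transfer, a negative value interference. A minimal PyTorch sketch, with `model` and `loss_fn` as assumed placeholders and assuming every parameter receives a gradient:

    import torch

    def gradient_dot(model, loss_fn, batch_i, batch_j):
        def flat_grad(batch):
            x, y = batch
            model.zero_grad()
            loss_fn(model(x), y).backward()
            # torch.cat copies the gradient values, so the two calls
            # do not clobber each other.
            return torch.cat([p.grad.reshape(-1) for p in model.parameters()])
        return torch.dot(flat_grad(batch_i), flat_grad(batch_j)).item()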
"{\"title\": \"Nice intuitions on how to think about transfer and interference (thorough rebuttal convinced me to upgrade my rating)\", \"review\": \"The transfer/ interference perspective of lifelong learning is well motivated, and combining the meta-learning literature with the continual learning literature (applying reptile twice), even if seems obvious, wasn't explored before. In addition, this paper shows that a lot of gain can be obtained if one uses more randomized and representative memory (reservoir sampling). However, I'm not entirely convinced with the technical contributions and the analysis provided to support the claims in the paper, good enough for me to accept it in its current form. Please find below my concerns and I'm more than happy to change my mind if the answers are convincing.\", \"main_concerns\": \"1) The trade-off between transfer and interference, which is one of the main contributions of this paper, has recently been pointed out by [1,2]. GEM[1] talks about it in terms of forward transfer and RWalk[2] in terms of \\\"intransigence\\\". Please clarify how \\\"transfer\\\" is different from these. A clear distinction will strengthen the contribution, otherwise, it seems like the paper talks about the same concepts with different terminologies, which will increase confusion in the literature. \\n\\n2) Provide intuitions about equations (1) and (2). Also, why is this assumption correct in the case of \\\"incremental learning\\\" where the loss surface itself is changing for new tasks?\\n\\n3) The paper mentions that the performance for the current task isn't an issue, which to me isn't that obvious as if the evaluation setting is \\\"single-head [2]\\\" then the performance on current task becomes an issue as we move forwards over tasks because of the rigidity of the network to learn new tasks. Please clarify.\\n\\n4) In eq (4), the second sample (j) is also from the same dataset for which the loss is being minimized. Intuitively it makes sense to not to optimize loss for L(xj, yj) in order to enforce transfer. Please clarify.\\n\\n5) Since the claim is to improve the \\\"transfer-interference\\\" trade-off, how can we verify this just using accuracy? Any metric to quantify these? What about forgetting and forward transfer measures as discussed in [1,2]. Without these, its hard to say what exactly the algorithm is buying.\\n\\n6) Why there isn't any result showing MER without reservoir sampling. Also, please comment on the computational efficiency of the method (which is crucial for online learning), as it seems to be very slow. \\n\\n7)The supervised learning experiments are only shown on the MNIST. Maybe, at least show on CONV-NET/ RESNET (CIFAR etc).\\n\\n8) It is not clear from where the gains are coming. Do the ablation where instead of using two loops of reptile you use one loop.\", \"minor\": \"=======\\n1) In the abstract, please clarify what you mean by \\\"future gradient\\\". Is it gradient over \\\"unseen\\\" task, or \\\"unseen\\\" data point of the same task. It's clear after reading the manuscript, but takes a while to reach that stage.\\n2) Please clarify the difference between stationary and non-stationary distribution, or at least cite a paper with the proper definition.\\n3) Please define the problem precisely. Like a mathematical problem definition is missing which makes it hard to follow the paper. 
Clarify the evaluation setting (multi/single head etc [2])\\n4) No citation provided for \\\"reservoir sampling\\\" which is an important ingredient of this entire algorithm.\\n5) Please mention appendix sections as well when referred to appendix.\\n6) Provide citations for \\\"meta-learning\\\" in section 1.\\n\\n\\n[1] GEM: Gradient episodic memory for continual learning, NIPS17.\\n[2] RWalk: Riemannian walk for incremental learning: Understanding forgetting and intransigence, ECCV2018.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A promising approach to continual learning that combines experience replay with meta-learning\", \"review\": \"The authors frame continual learning as a meta-learning problem that balances catastrophic forgetting against the capacity to learn new tasks. They propose an algorithm (MER) that combines a meta-learner (Reptile) with experience replay for continual learning. MER is evaluated on variants of MNIST (Permutated, Rotations, Many) and Omniglot against GEM and EWC. It is further tested in two reinforcement learning environments, Catcher and FlappyBird. In all cases, MER exhibits significant gains in terms of average retained accuracy.\\n\\nPro's\\n\\nThe paper is well structured and generally well written. The argument is both easy to follow and persuasive. In particular, the proposed framework for trading off catastrophic forgetting against positive transfer is enlightening and should be of interest to the community. \\n\\nWhile the idea of aligning gradients across tasks has been proposed before (Lopez-Paz & Ranzato, 2017), the authors make a non-trivial connection to Reptile that allows them to achieve the same goal in a surprisingly simple algorithm. That the algorithm does not require tasks to be identified makes it widely applicable and reported results are convincing. \\n\\nThe authors have taken considerable care to tease out various effects, such as how MER responds to the degree of non-stationarity in the data, as well as the buffer size. I\\u2019m particularly impressed that MER can achieve such high retention rates using only a buffer size of 200. Given that multiple batches are sampled from the buffer for every input from the current task, I\\u2019m surprised MER doesn\\u2019t suffer from overfitting. How does the train-test accuracy gap change as the buffer size varies?\\n\\nThe paper is further strengthened by empirically verifying that MER indeed does lead to a gradient alignment across tasks, and by an ablation study delineating the contribution from the ER strategy and the contribution from including Reptile. Notably, just using ER outperforms previous methods, and for a sufficient large buffer size, ER is almost equivalent to MER. This is not surprising given that, in practice, the difference between MER and ER is an additional decay rate ( \\\\gamma) applied to gradients from previous batches. \\n\\nCon's\\n\\nI would welcome a more thorough ablation study to measure the difference between ER and MER. In particular, how sensitive is MER is to changes in \\\\gamma? And could ER + an adaptive optimizer (e.g. Adam) emulate the effect of \\\\gamma and perform on par with MER. Similarly, given that DQN already uses ER, it would be valuable to report how a DQN with reservoir sampling performs.\\n\\nI am not entirely convinced though that MER maximizes for forward transfer. It turns continual learning into multi-task learning and if the new task is sufficiently different from previous tasks, MER\\u2019s ability to learn the current task would be impaired. The paper only reports average retained accuracy, so the empirical support for the claim is ambiguous.\\n\\nThe FlappyBird experiment could be improved. As tasks are defined by making the gap between pipes smaller, a good policy for task t is a good policy for task t-1 as well, so the trade-off between backward and forward transfer that motivates MER does not arise. Further, since the baseline DQN never finds a good policy, it is essentially a pseudo-random baseline. 
I suspect the only reason DQN+MER learns to play the game is because it keeps \\\"easy\\\" experiences with a lot of signal in the buffer for a longer period of time. That both the baseline and MER+DQN seems to unlearn from tasks 5 and 6 suggests further calibration might be needed.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good paper, more RL experiments and ablations would improve it substantially\", \"review\": \"The paper considers a number of streaming learning settings with various forms of dataset shift/drift of interest for continual learning research, and proposes a novel regularization-based objective enabled by a replay memory managed using the well known reservoir sampling algorithm.\", \"pros\": \"The new objective is not too surprising, but figuring out how to effectively implement this objective in a streaming setting is the strong point of this paper. \\n\\nTask labels are not used, yet performance seems superior to competing methods, many of which use task labels.\\n\\nResults are good on popular benchmarks, I find the baselines convincing in the supervised case.\", \"cons\": \"Despite somewhat frequent usage, I would like to respectfully point out that Permuted MNIST experiments are not very indicative for a majority of desiderata of interest in continual learning, and i.m.h.o. should be used only as a prototyping tool. To pick one issue, such results can be misleading since the benchmark allows for \\u201ctrivial\\u201d solutions which effectively freeze the upper part of the network and only change first (few) layer(s) which \\u201cundo\\u201d the permutation. This is an artificial type of dataset shift, and is not realistic for the type of continual learning issues which appear even in single task deep reinforcement learning, where policies or value functions represented by the model need to change substantially across learning.\\n\\nI was pleased to see the RL experiments, which I find more convincing because dataset drifts/shifts are more interesting. Also, such applications of continual learning solutions are attempting to solve a \\u2018real problem\\u2019, or at least something which researchers in that field struggle with. That said, I do have a few suggestions. At first glance, it\\u2019s not clear whether anything is learned in the last 3 versions of Catcher, also what the y axis actually means. What is good performance for each game is very specific to your actual settings so I have no reference to compare the scores with. The sequence of games is progressively harder, so it makes sense that scores are lower, but it\\u2019s not clear whether your approach impedes learning of new tasks, i.e. what is the price to pay for not forgetting?\\n\\nThis is particularly important for the points you\\u2019re trying to make because a large number of competing approaches either saturate the available capacity and memory with the first few tasks, or they faithfully model the recent ones. Any improvement there is worth a lot of attention, given proper comparisons. Even if this approach does not strike the \\u2018optimal\\u2019 balance, it is still worth knowing how much training would be required to reach full single-task performance on each game variant, and what kind of forgetting that induces.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Thank you for your comment! Some additional details:\", \"comment\": \"We would like to sincerely thank you for your comments about our work and for your questions. These will help us further improve our empirical discourse. We will definitely make sure that we address all of your questions in the revised version of our paper once the open review tool allows for revisions.\", \"first_question\": \"Thank you for bringing this important detail up. The answer requires some contextualization. In the case of Catcher, there is no predefined (hard coded) maximum score in the library we used. Under some soft assumptions but with a realistic settings, such as the defaults used in the experimental section (default pellet speed, default player speed, etc), the score grows approximately linearly with the number of frames for a perfect player. It can be approximated by 0.12 x n_frames (empirically we found it could be possible to achieve a score of 1 in 10 frames, 12 in 100 frames, 120 in 1k frames, 597 in 5k frames, and so on). In the case of FlappyBird, based on reported videos on popular channels, the hard limit of the original game was set to 999 points. However, for the emulator used in this experiment there is no trace of such a hard limit. Maybe a more interesting question is human performance: the fact that it was a very popular game raised the public question of the overall difficulty of the game for humans (see https://www.theguardian.com/news/2014/mar/03/flappy-bird-what-does-the-data-say). As it is stated in the article referenced above, and even in the Wikipedia article, human performance is on average much lower than this hard limit: in the analysis above it is observed that it typically takes more than 350 attempts (full episodes) to achieve a couple of games with score 12. It makes sense to us then that a 'Platinum level' is achieved with a score of 40. We are compiling more information to reliably compute the distribution of scores in human players and we will update the appendix of the revision with this information.\", \"second_and_fifth_questions\": \"The question of asymptotic scores for the RL experiments is an interesting one. We are running experiments now and think this is a good suggestion that will help provide additional context for the results. As a sneak peek for soft reference, our preliminary experiments with another model (A3C) resulted in 296 as the asymptotic score for Catcher. We have found that learning may proceed quite slowly after the initial period, so we would like to run our models for a very long period to ensure we have truly found the asymptotic performance.\", \"third_question\": \"Thank you for this question as it also improves our discourse to highlight this point, showcasing the significant extent of transfer across tasks that MER achieves during continual lifelong training. We originally provided this information through our figures in the main text, but will make sure to update the format of the figures and provide details in the text to make this much clearer. After 25k steps of training from scratch, a DQN achieves an average score across runs of 143.02 on Catcher and -2.83 on Flappy Bird. In contrast, MER achieves an average score across runs of 187.93 on Catcher and 1.32 on Flappy Bird.\", \"fourth_question\": \"Thank you for suggesting this ablation experiment. It fits nicely in the context of our ablation analysis section. This will help highlight the value add of incorporating meta-learning.\"}",
"{\"title\": \"Nice work! Could you please provide some extra details?\", \"comment\": [\"Thanks for the paper! I'm particularly impressed by the RL experiments, which I find a bit difficult to fully interpret without more information. For example:\", \"What are the maximum scores achievable in these games/versions?\", \"What score does DQN get asymptotically on each version separately, and how much data is required?\", \"How much can be learned in 25K frames from scratch in each game?\", \"How does DQN perform with reservoir sampling without MER? Any ablation experiments and data would be useful.\", \"What is the asymptotic effect of MER on a single task. Does it get to the same level of performance as DQN with enough data? Is this the case for all tasks considered?\"]}"
]
} |
|
HJehSnCcFX | Inference of unobserved event streams with neural Hawkes particle smoothing | [
"Hongyuan Mei",
"Guanghui Qin",
"Jason Eisner"
] | Events that we observe in the world may be caused by other, unobserved events. We consider sequences of discrete events in continuous time. When only some of the events are observed, we propose particle smoothing to infer the missing events. Particle smoothing is an extension of particle filtering in which proposed events are conditioned on the future as well as the past. For our setting, we develop a novel proposal distribution that is a type of continuous-time bidirectional LSTM. We use the sampled particles in an approximate minimum Bayes risk decoder that outputs a single low-risk prediction of the missing events. We experiment in multiple synthetic and real domains, modeling the complete sequences in each domain with a neural Hawkes process (Mei & Eisner, 2017). On held-out incomplete sequences, our method is effective at inferring the ground-truth unobserved events. In particular, particle smoothing consistently improves upon particle filtering, showing the benefit of training a bidirectional proposal distribution. | [
"events",
"inference",
"unobserved event streams",
"neural hawkes",
"unobserved events",
"particle",
"world",
"sequences",
"discrete events",
"continuous time"
] | https://openreview.net/pdf?id=HJehSnCcFX | https://openreview.net/forum?id=HJehSnCcFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1gSGx3xeV",
"HygtSCgqAQ",
"H1lDaiOgAm",
"SJloisdeC7",
"HyxZ9oOeAX",
"BJexOsOl0m",
"ryeByi_e0X",
"r1l6a9ueCm",
"HygEs9de0X",
"HJeOuculCX",
"rJlf45ulRm",
"B1g0JtueRX",
"SyxMi84Z6Q",
"H1x57ne627",
"Hyxi0YA_nQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544761356986,
1543274049304,
1542650815388,
1542650787371,
1542650761204,
1542650727747,
1542650589190,
1542650564619,
1542650524140,
1542650479917,
1542650410451,
1542650085694,
1541650073629,
1541372961774,
1541102035083
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1582/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1582/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1582/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1582/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers agree to reject. While there were many positive points to this work, reviewers believed that it was not yet ready for acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review for neural Hawkes particle smoothing paper\"}",
"{\"title\": \"added short appendix G with new MNAR experiments\", \"comment\": \"We wrote:\\n> We can also add experiments with c_k = 0.5 to cement the expository point.\\n\\nWe have now added these experiments, which constitute new Appendix G in the supplementary material. \\nIn these experiments, events are missing stochastically rather than deterministically.\\nWe find that the method still works and has the same qualitative behavior. \\n\\n(We have renamed the missingness probability c_k to \\\\rho_k. See the start of Appendix E for the notation.)\\n\\nThis setting is definitely MNAR. A naive reader might protest that it appears to be MCAR, because whether an event is missing always has probability 0.5, independent of the type of the event. However, this is the subtle point at issue. It is MNAR because as we noted in the previous comment, \\\"the second factor of (3) ... decreases exponentially in the number of missing events |z|.\\\" (If you're still not convinced, reread our \\u201cpresentation of MAR and MNAR\\u201d response. We will work this into the final version of the paper, of course.)\"}",
"{\"title\": \"Subject: experimental evaluation (1/4)\", \"comment\": \"Thanks for the suggestions:\\n\\n> The figures reported from the paper are comparative graphs with respect to particle filtering, and so the absolute level of performance of the methods is not characterized. \\n\\nNote that we do show absolute performance on our downstream task (namely, imputation of missing events), via the axis labels in Figure 3. Figure 3 also shows the impact of particle smoothing on this downstream task.\\n\\nOur Figure 2 also shows \\u201cabsolute performance\\u201d on the axes, measured as log q(z* | x) where q is the proposal distribution. It\\u2019s true that these numbers are hard to interpret. Ideally we would compare them to log p(z* | x), since that would be the ideal proposal distribution. But unfortunately it is intractable to compute that conditional probability: even for synthetic data we are only able to compute the joint probability p(x, z*). Do you have any ideas?\\n\\n> Reporting of distribution of sample weights and or run-times/complexity would strengthen the paper.\\n\\nRegarding sample weights, we can report the effective sample size in the final version. Our effective sample size is excellent for the synthetic datasets, about 20-25 for M=50 particles. On real datasets, ESS increased roughly as sqrt(M) as we varied M from 20 to 2000 in pilot experiments. Unfortunately it was very low for M=50, the value of M that we used in our final experiments (ESS of 1.5 to 2.2 on average), yet we still got good imputation results on our task. We might be able to raise the ESS by combining our method with multinomial resampling or local search.\\n\\nRegarding theoretical and wall-clock runtime, please see our separate response \\u201cSubject: efficiency of Monte Carlo methods\\u201d. The TL;DR is that the runtime complexity is O(MI) where M is the number of particles and I is the number of observed events. In practice, we generate the particles in parallel, leading to acceptable speeds of 300-400ms per event for the final method. We can add this information to the final version of the paper.\"}",
"{\"title\": \"Subject: clarity of notation (2/4)\", \"comment\": \"We worked really hard on the exposition, including months of tinkering with the writing and getting feedback from colleagues. Perhaps the subject matter is difficult, but we really worked to make it as clear as we could, and we stand by our presentational choices.\\n\\nYour specific objections seem to be a matter of taste -- but please recognize that they were intended to *improve* clarity. We\\u2019re happy to debate the best notation, but please don\\u2019t reject a technical paper on this basis? It\\u2019s not as if our notation was careless or incomplete. The use of \\u201cComp\\u201d, \\u201cObs\\u201d, and \\u201cMiss\\u201d as random variable names was supposed to be more mnemonic than C, O, and M. The notation k@t denotes an event of type k at (\\u201c@\\u201d) time t. This was supposed to be an improvement on Mei & Eisner\\u2019s <k,t> notation because it distinguishes this kind of ordered pair from other kinds of ordered pairs (similar to the use of sigils in programming languages); it was suggested by a colleague. \\n\\n> It's not clear what p (\\\"the data model\\\") and p_miss (\\\"the missingness mechanism\\\") represent, and therefore why in equation 1: p(x,z) = p(xvz)p_miss(z| xvz) where v is the union symbol. \\n\\nThis is spelled out carefully at the start of section 2 and around equation (1). The generative story has two steps. First, the data model p generates a complete event sequence Comp. Then p_miss decides which of these events get revealed to the user. \\n\\nObs = x is the resulting subsequence of revealed (observed) events, and Miss = z is the subsequence of unrevealed (missing) events. In other words, the missingness mechanism partitions Comp into Obs and Miss. p(x,z) is the joint probability of getting a particular complete event sequence x v z *and* partitioning it into x and z. We particularly needed the notation x because x is the sequence that our particle smoother reads from right to left.\"}",
"{\"title\": \"Subject: presentation of MAR and MNAR (3/4)\", \"comment\": \"> In addition, how it's related to MAR and MNAR is unclear. If e.g. following Murphy, one writes MAR as \\u2026\\n\\nThis is indeed a subtle point, one that we are proud to have handled correctly. If you did not find our exposition clear, we will revise the camera-ready to lay out the issues more plainly. Let\\u2019s start in this response.\\n\\nLittle & Rubin\\u2019s MCAR/MAR/MNAR taxonomy was meant for graphical models. (Murphy\\u2019s textbook just recapitulates this standard taxonomy.) A graphical model has a fixed set of random variables, and the missingness mechanisms envisioned by Little & Rubin simply decide which of those variables to reveal.\\n\\nWe could have chosen to formulate our model in these terms, by using uncountably many random variables K_t where t ranges over the set of times. K_t = k if there is an event of type k at time t, and otherwise K_t = 0. Then a missing event corresponds to an unobserved variable K_t with value > 0. Values of 0 are never observed because we are never told that an event did *not* happen at time t. Some values > 0 are observed and some are not. Since the missingness of K_t depends on whether K_t > 0, this setting is ordinarily MNAR. \\n\\nHowever, we prefer to formulate our model in terms of the finite sequences that are generated or read by our LSTMs. This improves the notation later in the paper. \\n\\nFrom that point of view, unfortunately, the complete draws from p are not fixed-length vectors as in a graphical model: different draws from p can have different numbers of events. This is why our notation does not use a simple \\u201cmissingness vector\\u201d of fixed finite length as in the standard notation. A missing event is not a case of a variable whose value is missing (e.g., an event of unknown type): we don\\u2019t even know whether the variable (event) exists in the first place!\\n\\nYet our treatment of MAR is the correct generalization of Little & Rubin\\u2019s: namely, it\\u2019s the case in which the second factor of (3) can be ignored. (The ability to ignore that factor is precisely why anyone cares about the MAR case!) This is discussed around equation (3) and in Appendix F.\"}",
"{\"title\": \"Subject: MNAR data experiments (4/4)\", \"comment\": \"> We know, from the definition of MNAR that we can't use only the observed data to correctly infer the distributions of the missing values, and so while one can probabilistically predict in MNAR setting, their quality remains unknown.\\n\\nSure, working with MNAR data is impossible without additional knowledge. But in our setting, we have that additional knowledge.\\n\\nThe problem with MNAR is that JOINTLY identifying p and p_miss is impossible. If you observe few 50-year-olds on your survey, you can't know (beyond your prior) whether that\\u2019s because there are few 50-year-olds, or because 50-year-olds are very likely to omit their age. \\n\\nBut joint identification is unnecessary if either\\n(1) one has separate knowledge of the missingness distribution p_miss\\n(2) one has separate knowledge of the complete-data distribution p\", \"that_is\": \"If we know at least one of the distributions, then we can still infer the other. Actually, both (1) and (2) hold in our present experiments.\\n\\nThe E step of EM uses the current guess of p and p_miss to infer the posterior distribution of the missing values. That posterior is uncontroversially defined by the simple Bayesian formula (3).\\n\\n(1) If p_miss is known and fixed, this gives a minor variant of ordinary EM. Ordinary EM makes the MAR assumption that the p_miss factor of (3) can be ignored. But we don't need to ignore p_miss if we actually know it! In our experiments, p_miss is MNAR but we do know it: we know that events of some types are always observed and events of other types are never observed. So, no problem!\\n\\n(2) Conversely, if p is known because we estimated it FROM SOME COMPLETE DATA, then we can use incomplete data to learn the MNAR missingness distribution p_miss. This setting even lets us learn a fancy missingness mechanism, e.g., some BiLSTM model that uses the context of an event to determine the probability of censoring it. \\n\\nWe relegated this EM discussion to Appendix F since it is not used in our experiments. Appendix F says: \\u201cIn the more general MNAR scenario, we can extend the E-step to consider the not-at-random missingness mechanism (see equation (7b) below), but then we need both complete and incomplete sequences at training time in order to fit the parameters of the missingness mechanism (unless these parameters are already known) jointly with those of the neural Hawkes process. ... we describe the methods and provide MCEM pseudocode.\\u201d \\n\\n> If none of the experiments touch upon MNAR data, perhaps it is possible to omit this part.\\n\\nAlas (as we mentioned in the \\u201cpresentation of MAR and MNAR\\u201d response), for missing data in event streams, nearly *every* setting is MNAR! That is, the probability that z would be selected for censorship depends on the number and type of events in z. In particular, the second factor of (3) typically decreases exponentially in the number of missing events |z|, so it is not constant in z as required for MAR.\\n\\nIn particular, our experimental setting is MNAR in a sense described at the end of this response. Because it happens to be a special case of MNAR, it would be possible through a notational trick to gloss over the MNAR issue and not call the reader\\u2019s attention to it. However, we thought this would be dangerous, so we would prefer to clarify this aspect of the exposition rather than deleting it. 
(We did relegate part of the discussion to an appendix.)\\n\\nWhy dangerous? We imagine that a reader might try to apply our method to a fairly simple situation where each event of type k has independent probability c_k of being censored. We fear that the reader might carelessly omit the p_miss factor if we don\\u2019t talk about it. However, that factor is necessary to avoid proposing too many missing events of those types that would NOT tend to be censored. \\n\\nE.g., proposing 100 missing events of type k means that p_miss includes a factor of c_k ^ 100. Thus, for c_k < 1 and especially for c_k << 1, the system should prefer -- other things equal -- to posit only 50 missing events. Intuitively, for 50 events to all have gone missing is not as improbable as for 100 events to have gone missing.\\n\\nIt\\u2019s true that our reported experiments happen to have c_k = 1 (that is, events of type k are *deterministically* missing), so this exponential decay does not occur: c_k ^ 100 == c_k ^50. Nonetheless, to ensure that a future reader would handle the general case correctly, we prefer to give it some discussion. We can also add experiments with c_k = 0.5 to cement the expository point.\\n\\nEven our deterministic setting should still be regarded as MNAR, because c_k isn\\u2019t *always* 1 in our experiments. Rather, c_k = 1 or 0 depending on k. Thus, our p_miss factor can be either 1 or 0 (making us MNAR). More precisely, p_miss = 0 if z includes events of a type k that would never go missing. This is the technical reason that our code never proposes such events \\u2014 as explained at the bottom of page 14.\"}",
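To make the exponential-decay point concrete, here is a minimal numerical sketch of the per-type censoring mechanism described above, where each event of type k is independently censored with probability c_k; the function name and dictionary below are illustrative only, not the authors' code:

```python
import math

def log_p_miss(missing_types, c):
    """Log-probability that exactly these proposed events were censored under
    independent per-type censoring: each missing event of type k contributes
    log c[k], so the factor decays exponentially in |z| whenever c[k] < 1."""
    return sum(math.log(c[k]) for k in missing_types)

c = {"k": 0.5}
print(log_p_miss(["k"] * 50, c))   # about -34.66
print(log_p_miss(["k"] * 100, c))  # about -69.31: positing 100 censored events is far less probable
# With deterministic censoring c["k"] = 1.0, both values are 0.0, matching the
# remark above that c_k ^ 100 == c_k ^ 50 in that special case.
```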
"{\"title\": \"Subject: minor misunderstanding (1/5)\", \"comment\": \"> Experiments on synthetic datasets with 10 different initializations and two real datasets\\n\\nTo be precise, it\\u2019s 10 completely different synthetic datasets. Each dataset is drawn from a different distribution with randomly selected parameters. Those distributions are not trained, so it\\u2019s odd to speak of \\u201cinitializations.\\u201d\"}",
"{\"title\": \"Subject: distributions other than NHP (2/5)\", \"comment\": \"> The proposed technique is tightly connected to NHP \\u2026 Can the proposed method also be applied to other processes?\\n\\nThanks for the question. Yes, certainly. The main technique is to use particle filtering or smoothing to sample from the posterior over complete sequences.\\n\\nParticle filtering is applicable to any temporal point process where the number of events is finite with probability 1, and where it is tractable to compute (or estimate) the log-likelihood of a prefix of a complete sequence.\\n\\nTo extend this to particle smoothing, we developed a particular family of proposal distributions that is based on a continuous-time LSTM, as well as a method (Algorithm 1) for sampling proposals from such a distribution in the context of particle filtering.\\n\\nOur experiments use an NHP *model* of the complete sequence, together with a *proposal distribution* whose architecture happens to be almost identical to the NHP architecture (in mirror image, as it reads the future observed events from right to left). However, *** our proposal distribution could also be used with other models ***: thus, we would also recommend it for particle smoothing of temporal point processes beyond just the NHP!\\n\\nThe job of the proposal distribution is to get a good fit to the model\\u2019s complex posterior predictive distribution of the next event (which is defined by an integral over possible completions of the incompletely observed future). A highly parameterized neural proposal distribution family like ours is designed to be flexible enough to do this, at least for non-pathological models.\", \"one_caveat\": \"Our proposal distribution does also take the state of the original point process into account. In our case, that is the state of the NHP (h(t) in equation (9). If you were using a different point process, you would need to replace h(t) with some other sufficient statistic of the history H(t).\"}",
"{\"title\": \"Subject: efficiency of Monte Carlo methods (3/5)\", \"comment\": \"> It turns out that the integral part of Equation 5 does not have an obvious analytical solution\\n> under NHP. Then, we first need a set of samples to approximate the likelihood evaluation.\\n\\nWell, this part of our method simply follows the algorithm given in the NHP paper (Mei & Eisner, NIPS 2017, sections B.2 and C.2), as we mention in our Appendix A (\\u201cintegral computation\\u201d). \\n\\nComputationally it is not a problem. A given run of particle smoothing begins by drawing O(I) time points from Uniform([0,T]), where I is the number of observed events. All particles are evaluated using integrals that are estimated by evaluating the function at these time points. \\n\\n(Using the same time points for all particles gives a paired comparison that reduces the variance of the normalized importance weights. Note also that because we sample time points uniformly, longer intervals between imputed events will tend to contain more points, which is appropriate.)\\n\\n> Later, we also need to sample particles. \\n\\nOur GPU implementation (which we will release) parallelizes the outer loop over particles. We sample 50 particles in parallel in these experiments, but we have tested with 1000 particles in parallel as well. So this is not a real problem with a GPU.\\n\\n> I am not quite convinced the computational efficiency of this approach in real applications of practice.\\n\\nWe reported experiments that we performed to demonstrate the practicality. \\nOn average, drawing an ensemble of 50 particles takes \\n5s per example on the synthetic datasets (average length 15 events)\\n12s per example on the NYC taxi dataset (average length 32 events)\\n100s per example on the elevator dataset (average length 313 events)\\n\\nThat is, 300-400 ms per event. Such speeds are acceptable in many incomplete data applications, compared to the cost of collecting complete data. Consider the applications on page 1 of the paper, all of which involve real-time decision making at a human timescale.\\n\\n> Also, there is no analysis either empirically or analytically about the impact of the \\n> accumulative sampling errors on the inference performance. \\n\\nMei and Eisner (2017, Appendix C.2) found that rather few samples could be used to estimate the integral: even sampling at only I time points gave a standard deviation of log-likelihood that was on the order of 0.1% of absolute (Mei, p.c.).\\n\\nWhat kind of \\u201caccumulative sampling errors\\u201d are you concerned about? Remember that our integral estimate is *unbiased*, and the particle filtering estimate is at least consistent. (Although it is true that the normalized particle weights are distorted both by the finite number of particles and the variance in the integral estimates, the variance of the integral estimation decreases---rapidly---as O(1/n) where n is the # of sampled time points.)\"}",
"{\"title\": \"Subject: training the proposal distribution (4/5)\", \"comment\": \"> Furthermore, to learn the proposed distribution, the paper applies the REINFORCE algorithm\\n> under the proposed distribution q. But REINFORCE is known for large variance issue. \\n\\nThis is a misunderstanding by the reviewer. Following Lin & Eisner (2018), we use an interpolation of exclusive and inclusive KL divergence (equation (12)).\\n\\nREINFORCE corresponds to exclusive KL, which does have a variance issue.\\n\\nBut in practice, our tuned interpolation coefficient placed *all* the weight on inclusive KL, which has no variance issue. (This fact is reported as \\u201cbeta=1\\u201d under equation (12).) So our experiment effectively avoids REINFORCE altogether. (Your comment may be the reason that beta=1 worked best for us, but see an alternative discussed below. Note that Lin & Eisner found that beta < 1 worked best in their setting.)\\n\\n> Given that we already need lots of samples for the likelihood, it is unclear to me how \\n> stable the algorithm could be in practice.\\n\\nWe\\u2019re not sure what you mean here by \\u201cstable.\\u201d Yes, we have a sampling-based method, but so do most people in the field right now! As you know, stochastic gradient methods always make use of \\u201clots of samples.\\u201d Remember that SGD works because the errors average out to 0 over many stochastic gradient steps. (If you don\\u2019t believe that, you should be rejecting all the deep learning papers that use SGD, right??)\\n\\nSGD methods succeed, both theoretically and practically, with even high-variance estimates of the batch gradient (e.g., where each stochastic estimate is derived from a *single* randomly chosen training example). Thus, we should be fine with a noisy sampling-based gradient as long as it is *unbiased*. \\n\\nOur Monte Carlo integral estimates (taken from Mei & Eisner 2017, Appendix B.2) are in fact unbiased. And as a result, our stochastic gradient estimate is also unbiased, as required (assuming that the observed complete data are distributed according to p). Why? Since beta=1, our stochastic gradient is simply (10). No particle filtering or smoothing is used to estimate (10), because we train it using observed complete data, as explained in the last long paragraph of section 3.2.1. The only randomness is the integral over [0,T] (similar to the one in (5)) that is required to estimate the term log q(z | x) in (10) \\u2026 and as just noted, this integral estimate is unbiased.\\n\\n(It is true that if beta were < 1, we would compute the exclusive KL gradient using particle filtering or smoothing with M particles, and this would introduce bias in the gradient. Nonetheless, since the bias vanishes as M goes to infinity, it would be possible to restore a theoretical convergence guarantee by increasing M at an appropriate rate as SGD proceeds -- see Spall (2003), p. 107.)\\n\\nAs for whether the training algorithm could work \\u201cin practice\\u201d -- did you see the beautiful figure 2? Our training method certainly appears to succeed \\u201cin practice.\\u201d The trained proposal distribution is better on *virtually every example* in *12 different datasets*! We as reviewers would be quite inclined to accept a paper with such clear results \\u2026 \\n\\nFinally, recall that the paper has 3 algorithmic contributions: particle filtering, particle smoothing, and consensus decoding (as well as introducing a useful problem setting along with a well-thought-out evaluation metric). 
Your question here about training the proposal distribution pertains only to particle smoothing. Even if there were a problem here, the other contributions would stand. But in fact, we see no problem here: we clearly demonstrate the value of training a proposal distribution.\"}",
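In the beta = 1 case described above, training the proposal reduces to maximizing log q(z | x) on observed complete sequences, with no particles and no REINFORCE term; a schematic of one SGD step, where the `q.log_prob(z, given=x)` interface is hypothetical:

```python
import torch

def inclusive_kl_step(q, optimizer, x, z):
    """One stochastic step on E_{(x, z) ~ p}[ -log q(z | x) ], i.e., the
    inclusive-KL objective at beta = 1. The only Monte Carlo noise is the
    unbiased integral estimate hidden inside q.log_prob, so the gradient
    estimate stays unbiased, as required for SGD convergence."""
    optimizer.zero_grad()
    loss = -q.log_prob(z, given=x)   # hypothetical proposal-distribution API
    loss.backward()
    optimizer.step()
    return loss.item()
```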
"{\"title\": \"Subject: experimental evaluation (5/5)\", \"comment\": \"> it is unfair to only compare the smoothing approach with the filtering baseline. \\n\\nWell, what other baseline do you think we should compare with? There is not a lot of previous work on this problem.\\n\\nWe can see that Metropolis-Hastings would be a possible alternative, where the transition kernel proposes a single-event change (insert, delete, or move). Unfortunately, this would be quite slow for a neural model like ours. The reason is that a proposed change early in the sequence will affect the LSTM state and hence the probability of all subsequent events. Thus, a single move takes O(length of proposed complete sequence) time to evaluate. Furthermore the Markov chain may mix slowly because a move that changes only one event may often lead to an incoherent sequence that will be rejected. The point of particle smoothing is essentially to avoid this kind of rejection by proposing a *coherent sequence of events* from an approximation q to the true posterior. We can ensure that it is coherent because we build it up from left to right (taking the future into account).\\n\\nWe\\u2019d be happy of course to propose Metropolis-Hastings as future work. It could even build on our present work by using a variant of our current proposal distribution as the core of a Metropolis-Hastings kernel -- which would resample the latent events on a given *interval*. However, we would be wary of developing this nontrivial extension within the current paper; it is not an established baseline and would take a few additional pages to develop. The current submission already has too much material -- there are a lot of appendices, and the other reviewers seem to have found the submission to be overwhelming already.\\n\\nAnother good piece of future work would be particle Gibbs or other particle MCMC algorithms, which would also build on our present work.\\n\\n> sequential monte carlo approach often suffers from skewed particle issue where one particle gradually dominates all the other particles with no diversity. \\n\\nThis is indeed a danger in SMC approaches. But surely you don\\u2019t think that all SMC papers should be rejected just because they use SMC? There are several techniques in the SMC community for \\u201crejuvenating\\u201d a skewed ensemble, such as multinomial resampling, other forms of resampling, and the \\u201cparticle cascade.\\u201d Any of these techniques could be combined with ours, and this is orthogonal to the technical contributions of our paper.\\n\\n> It is unclear how the proposed approach is able to handle this. \\n\\nIn fact, our particle smoothing method is also intended to alleviate this issue. As you know, if we could achieve a perfect proposal distribution q that was proportional to p, then the particle weight p/q would be constant across all particles, completely eliminating the skew issue. So our paper shows how to improve the proposal distribution.\\n\\nSpecifically, the reason that an SMC ensemble becomes skewed over time is that some of the proposed particles turn out to be less compatible with the future, and are reweighted to have a weight near 0. Particle smoothing tries to incorporate the future into the proposal distribution so that this will not happen as badly. \\n\\n> what people really care about is how different techniques can behave in real data to impute realistic missing events.\\n\\nWe certainly agree! 
Which is why our section 4 (backed by appendices C-D, including Algorithm 2) gives a sophisticated method for doing exactly that. Results from applying this method to impute missing events on real data are reported in section 5.2, including the carefully designed Figure 3.\\n\\nCould you please reread that material, and raise your score as appropriate to recognize the work that we did there?\\n\\nYou suggest Linderman et al. (2017) and Shelton et al. (2018) as if they would be appropriate baselines for this imputation task. However, those papers only apply to Hawkes processes. Please note that we did discuss them carefully in section 6. \\n\\n(Specifically: Our particle filtering baseline is already the SAME as Linderman et al. (2017), just extended from the Hawkes process to the *neural* Hawkes process. Shelton et al. (2018) use MCMC, but their MCMC algorithm takes advantage of special properties of the Hawkes process. Unfortunately, those special properties no longer hold for the *neural* Hawkes process, which would therefore require a much slower MCMC algorithm, as noted above; we haven\\u2019t tried that.)\\n\\n(You also suggest that Xu et al. (2017) is relevant. We are happy to cite it in the final version, but note that that paper focuses on quite a different kind of missing data -- \\u201cshort reads\\u201d where a long sequence has been broken up and it is not known which pieces go together. The first author of that paper agreed that his paper isn\\u2019t directly comparable to our setting, when we corresponded with him before submission.)\"}",
"{\"title\": \"Subject: response to short late review\", \"comment\": \"> It's difficult to follows.\\n\\nThanks for acknowledging the importance of the problem. We are sorry to hear that you found the paper too difficult to read in the limited time that you had available to review it. \\n\\nWe worked quite hard on the exposition. If you have specific suggestions that could reduce the difficulty, we will be happy to consider them for the camera-ready version.\\n\\n> But it's a good paper and can be turned to a good paper for the next venue.\\n\\nThank you. We agree that \\u201cit\\u2019s a good paper\\u201d already. :) You provide few comments about how it could be \\u201cturned into a good paper for the next venue,\\u201d so we are not sure of your reasons for wanting to delay its publication.\\n\\n> It would have helped if the authors made it clear why each part is chosen and clearly state what is the novelty and contributed of the paper to the field.\\n\\nYes, this is why we wrote section 6, \\u201cDiscussion.\\u201d Could you please reread that section? It begins: \\u201cOur technical contribution is threefold,\\u201d and goes on to clearly describe each contribution and its novelty and importance.\\n\\n> several existing and well developed approach: Neural Hawkes Process + particle smoothing + minimum bayes risk + alignment\\n\\nActually, we are further developing methods that are still in their infancy and are under current investigation. NHP was first published in December 2017 and is being picked up by the community. Neural methods for particle smoothing were first published in June 2018. \\n\\nOur alignment method required developing a new metric and alignment algorithm (section 4 and Appendix C, including Algorithm 2). These are not groundbreaking but they did require some thought.\\n\\nOur MBR method required developing a new approximate search method (section 4 including Theorem 1, and Appendix D including Algorithm 3).\\n\\nWe believe that the novel contributions of this paper are above threshold for publication in ICLR. There is a lot of material in this paper.\\n\\nThe paper also includes strong experimental results that should be of interest to the ML community and that demonstrate the potential of our methods for applied work. We provided extensive pseudocode and will release our implementation.\"}",
"{\"title\": \"This paper proposes an algorithm for missing data problem in continuous time events data (ie, point processes) where both past and future events are helpful.\", \"review\": \"This paper tackles a very important and practical problem in event stream planning. The problem is very interesting and the approach taken is standard.\\n\\nThe presentation of the paper is not clear enough. The notations and definitions and methods are presented in a complicated way. It's difficult to follows.\", \"from_the_contribution_point_of_view_the_paper_looks_like_to_be_a_combination_of_several_existing_and_well_developed_approach\": \"Neural Hawkes Process + particle smoothing + minimum bayes risk + alignment. It's not very surprising to see these elements together. It would have helped if the authors made it clear why each part is chosen and clearly state what is the novelty and contributed of the paper to the field.\\n\\nThe paper in its current format is not ready for publication. But it's a good paper and can be turned to a good paper for the next venue.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting problem with weak experimental evaluation\", \"review\": \"The authors propose a particle smoothing approach with an approximate minimum Bayes risk decoder to impute missing events in the Neural Hawkes Process (NHP). The main goal is to address the missing events problem in continuous-time event analysis, which is an important problem in practice. The core idea is within the framework of particle smoothing.\\n\\nTo formulate the posterior distribution of the missing event, the authors consider both the left-to-right past events and the right-to-left future events. The paper first applies the NHP to capture both the observed and inferred missing events to learn a representation of the past events, and then uses a similar NHP to learn the representation of the observed events from the future. Based on the two representations, it then formulates the intensity function of the missing events and uses the thinning algorithm to sample different particles. Based on the proposed distribution, the paper also considers to decode a single prediction achieving the Minimum Bayes Risk. Experiments on synthetic datasets with 10 different initializations and two real datasets show that the proposed smoothing approach is better than the filtering baseline. \\n\\nIn general, this paper considers an important problem which is under active research in literature recently. However, there are a few weaknesses of the paper that should be addressed. \\n\\n1. The proposed technique is tightly connected to NHP, which could limit the applicability of the approach to other temporal point processes. The essential idea is similar to Bi-LSTM to learn the representation from both ends of a sequence of asynchronous temporal events. There are several different ways to represent the inter-event time to feed into the network other than NHP. Can the proposed method also be applied to other processes?\\n\\n2. Within the particle filtering framework, each particle (hypothesis) is weighted by the likelihood of the sequence of observed events under that hypothesis. It turns out that the integral part of Equation 5 does not have an obvious analytical solution under NHP. Then, we first need a set of samples to approximate the likelihood evaluation. Later, we also need to sample particles. I am not quite convinced the computational efficiency of this approach in real applications of practice. Also, there is no analysis either empirically or analytically about the impact of the accumulative sampling errors on the inference performance. Furthermore, to learn the proposed distribution, the paper applies the REINFORCE algorithm under the proposed distribution q. But REINFORCE is known for large variance issue. Given that we already need lots of samples for the likelihood, it is unclear to me how stable the algorithm could be in practice.\\n\\n3. The experimental evaluation is weak. For particle filtering and smoothing, it is known that the filtering techniques are candidates for solving the smoothing problem but perform poorly when T is large. That's why it is necessary to develop more sophisticated strategies for good smoothing\\nalgorithms. As a result, it is unfair to only compare the smoothing approach with the filtering baseline. \\n\\nActually, what people really care about is how different techniques can behave in real data to impute realistic missing events. From this perspective, I suggest to use the QQ-plot to evaluate the goodness of fitting on the synthetic dataset. 
For example, given a sequence of events generated from an independent temporal point process, we can randomly delete events, and then apply different techniques, including Linderman et al. (2017), Shelton et al.(2018), to impute missing events. Finally, we can compare the imputed sequence of events with the groundtruth. \\n\\nIn addition, sequential monte carlo approach often suffers from skewed particle issue where one particle gradually dominates all the other particles with no diversity. It is unclear how the proposed approach is able to handle this. \\n\\nOne missing related paper is \\\"Learning Hawkes Processes from Short Doubly-Censored Event Sequences\\\"\\n\\nSection 5.2 can be significantly strengthened if comparing with at least one of these approaches.\\n\\n4. The paper is fairly written. I had some trouble reading back and forth for understanding Figure 1 since it has long caption that is not self-contained. The annotation of Section 2 is also too heavy to quickly skim through to memorize.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Re: particle smoothing for neural Hawkes Processes\", \"review\": [\"The paper presents an inference method (implicit distribution particle smoothing) for neural Hawkes processes that accounts for latent sequences of events that influence the observed trajectories.\", \"Quality\", \"The paper combines ideas from multiple areas of machine learning to tackle a challenging task of inference in multivariate continuous-time settings.\", \"The figures reported from the paper are comparative graphs with respect to particle filtering, and so the absolute level of performance of the methods is not characterized. Reporting of distribution of sample weights and or run-times/complexity would strengthen the paper.\", \"Clarity\", \"notation is complex replete with symbols \\\"@\\\" and text in math formulas\", \"It's not clear what p (\\\"the data model\\\") and p_miss (\\\"the missingness mechanism\\\") represent, and therefore why in equation 1: p(x,z) = p(xvz)p_miss(z| xvz) where v is the union symbol. In addition, how it's related to MAR and MNAR is unclear. If e.g. following Murphy, one writes MAR as: p(r|x_u, x_o) = p(r|x_o), r is a missingness vector, x_u is x unobserved, and x_o is x observed, then r corresponds to observation or not, whereas in the manuscript p_miss is on the values themselves, i.e. on the space where z={k_{i,j}@t_{i,j}} resides. We know, from the definition of MNAR that we can't use only the observed data to correctly infer the distributions of the missing values, and so while one can probabilistically predict in MNAR setting, their quality remains unknown. If none of the experiments touch upon MNAR data, perhaps it is possible to omit this part.\", \"Originality\", \"the work is rich, complex, original, and uses leading methods from multiple areas of ML.\", \"Significance\", \"the significance of this work could be high, as it may provide a way to conduct difficult inference in an effective way to produce increasingly flexible modeling of trajectories amidst partial observation.\", \"however the exposition (particularly the experiments) does not fully demonstrate this.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
ryxnHhRqFm | Global-to-local Memory Pointer Networks for Task-Oriented Dialogue | [
"Chien-Sheng Wu",
"Richard Socher",
"Caiming Xiong"
] | End-to-end task-oriented dialogue is challenging since knowledge bases are usually large, dynamic and hard to incorporate into a learning framework. We propose the global-to-local memory pointer (GLMP) networks to address this issue. In our model, a global memory encoder and a local memory decoder are proposed to share external knowledge. The encoder encodes dialogue history, modifies global contextual representation, and generates a global memory pointer. The decoder first generates a sketch response with unfilled slots. Next, it passes the global memory pointer to filter the external knowledge for relevant information, then instantiates the slots via the local memory pointers. We empirically show that our model can improve copy accuracy and mitigate the common out-of-vocabulary problem. As a result, GLMP is able to improve over the previous state-of-the-art models in both simulated bAbI Dialogue dataset and human-human Stanford Multi-domain Dialogue dataset on automatic and human evaluation. | [
"pointer networks",
"memory networks",
"task-oriented dialogue systems",
"natural language processing"
] | https://openreview.net/pdf?id=ryxnHhRqFm | https://openreview.net/forum?id=ryxnHhRqFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJlOcp5YRm",
"SkgRl95FR7",
"BJeSLd9FRQ",
"BJgs2DPSR7",
"ByxLNlPEAQ",
"B1xRlvQppm",
"BJetM8J3aX",
"BklMNKNLaQ",
"S1g_lF4U6X",
"BkeK5uELpm",
"r1gSR9IA2X",
"rJljYi39nX",
"BJl5tkoN2X",
"ryeDCdQ-jQ"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1543249295775,
1543248373980,
1543247948997,
1542973363403,
1542905902459,
1542432501875,
1542350352692,
1541978409604,
1541978352464,
1541978256627,
1541462732575,
1541225346803,
1540824962279,
1539549391023
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1581/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1581/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1581/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1581/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1581/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1581/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1581/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1581/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1581/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1581/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1581/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1581/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Covered\", \"comment\": \"Many thanks for the most detailed reply. It was most enlightening. Yes please, do add that to the discussion. I believe many people in the field would be interested in your point of view. Many thanks again!\"}",
"{\"title\": \"Thank you for raising your concerns\", \"comment\": \"Please let us reply you as below:\\n\\n1) The entity type information we mentioned is the slot type representation (embedding) for each slot. For example, to the best of our knowledge, in the MemNN for bAbi dialogue, they have 7 special words (embeddings) for 7 different slot types that they then added to all the words that are related to them. In this way, for example, the representation of \\u201cParis\\u201d can include the information of \\u201clocation\\u201d. \\n\\nIn our case, our model does not have the explicit information in the \\u201cParis embedding\\u201d that it is a \\u201clocation\\u201d. Please note that when we encode the dialogue history, we used the plain text input, that is, the \\u201cParis\\u201d embedding does not include the \\u201clocation\\u201d type embedding. Even if an OOV word comes into place, our model did not add the type information as the others did, instead, we added the hidden states of the RNN encoder. The sketch response is only used while \\\"decoding\\\", not \\\"encoding\\\". During the encoding stage, all the input are plain texts, not the sketch sentences. Hope this makes it clear about your question one and two. \\n\\n2) We total understood your concern about the KB. Please note that for each \\u201cnode\\u201d, we summed up the embeddings of (Subject, Relation, Object), then we assume that every time the \\u201cnode\\u201d is pointed to, we copy the Object word out (it\\u2019s our own rule we defined). Therefore, there is no constraint that what needs to be a subject and what needs to be an object. The only thing important in our task is we need to be able to copy every entity that may exist in a response. Thus, we need at least a node that we can copy the \\u201cname of entity\\u201d out, for example, the restaurant name. That is, either we decide to copy the Subject for the entity names, or we just simply represent the name of the entity as an Object in one node. This is a matter of design. We do this is because it is easy for us to maintain our code. Therefore, hope this explains your question three. \\n\\nIn addition, as we mentioned in the last post, there are many different ways to represent the KB information, some may use flat KB as we did (ex: mem2seq), some may use the hierarchical one. Although flat KB might not be the most effective one (because the hierarchical one is easier for machine to reason KB, the nodes are assumed to connect by the entity names), we choose this preprocessing strategy and left the ability of connecting the nodes to our system, so does some previous works, is because it is simple and fast. The comparison between these two could be an interesting future work. \\n\\n3) Lastly, we will release our code if our work is published. If you have any further question about the preprocessing or model architecture, etc, we hope that can make you more clear. \\n\\nThank you again for your interests in our work. Very happy to hear that.\"}",
"{\"title\": \"Re: Reviewer3\", \"comment\": \"Yes, we agree with you that it will be interesting to have a comparison of the end-to-end systems with the modularized systems. However, please let us show some difficulties to design a system like that using pydial in the SMD and bAbI datasets we used in our paper:\\n\\nTo the best of our knowledge, in the pydial framework, it requires to have the dialogue act\\u2019s labels for the NLU module and the belief states\\u2019 labels for the belief tracker module. The biggest challenge here is we do not have such labels in the SMD and bAbI datasets we used. Moreover, the semi tracker in pydial is rule-based (ex: self.slot_vocab[\\\"pricerange\\\"] = \\\"(price|cost)(\\\\ ?range)*\\\"), which need to re-write rules whenever it encounters a new domain or new datasets. Even its dialogue management module could be a learning solution like policy networks, the input of the policy network is still the hand-crafted state features and labels. Therefore, without the rules and labels predefined in the NLU and belief tracker modules, pydial couldn\\u2019t learn a good policy network. \\n\\nLastly, for now, based on the data we have (not very big size) and the current SOTA machine learning algorithms and models, we believe that a well and carefully constructed task-oriented dialogue system (like pydial) in a known domain using human rules (in NLU and Belief Tracker) with policy networks may outperform the end-to-end systems. However, in this paper, without additional human labels and human rules, we want to explore the potential and the advantage of end-to-end systems. Besides easy to train, for multi-domain cases, or even zero-shot domain cases, we believe end-to-end approaches will have better adaptability compared to any rule-based systems. We will include this discussion in our paper. \\n \\nThank you again for your feedback and we really appreciate it.\"}",
"{\"comment\": \"Thank you for your reply, but some of my concerns are still unclear.\", \"entity_type_information\": \"Thank you for clearly stating that GLMP uses entity type information to create sketch responses. You have mentioned GLMP did not add the \\u201centity type\\u201d information into the word representation, but neither did the existing approaches that use match feature (QRN, MemNN and Gated MemNN).\\n\\nAt a high level, there exists one set of approaches that use entity type information (in the model), and GLMP also uses the same information (in a different way). However, you have only compared against the models that do not use this information at all. This does not feel like a fair comparison.\", \"dataset_preparation\": \"Thank you for explaining the preprocessing in the SMD dataset, we felt it was not consistent with the explanation in section (2.1) which states \\\"each element bi \\u2208 B is represented in the triplet format as (Subject, Relation, Object) structure\\\". Any comments about the preprocessing in the bAbI dataset? The reason for posing the question (Q3) in the initial comment is that we strongly feel that removing the preprocessing would considerably reduce the accuracy as well as task completion rate for T3-OOV and T5-OOV. \\n\\nWe would appreciate if you can answer the specific questions we raised. Your silence, suggests that the answers are likely, \\n(Q1) very close performance\\n(Q2) not very good\\n(Q3) would not work very well\", \"title\": \"a few concerns\"}",
"{\"title\": \"Thanks - just one point\", \"comment\": \"For 9. I would still be interested (if possible and straight-forward) to see how you compare to pydial (that is not an encoder-decoder approach), since pydial policy manager is also NN (and not rule-based) as the Eric et al. 2017.\"}",
"{\"title\": \"Thank you for your feedback and your clear summary of our contribution\", \"comment\": \"Please let me reply to you below:\\n\\nFirst, in our experiment, we did not add the \\u201centity type\\u201d information into the word representation, which is same as the previous works such as Mem2Seq, MemNN, QRN, etc. Therefore, the comparison is fair. The step we did related to entity type was the sketch response preprocessing, based on the provided entity table (or the NER if the table is not provided), we can obtain our gold sketch responses for training. The local memory pointer is then learned to copy words to replace the generated sketch tags. Note that all the word-level representations in the external knowledge are not included the \\u201ctype embedding\\u201d.\\n\\nSecond, yes we followed the same preprocessing as in the Mem2Seq paper to represent our KB tuples. If you look into the original KB in the SMD dataset, it is not represented as the triplet format. Therefore, there are many different ways to represent the KB information, some may use flat KB as we did, some may use the hierarchical one, or even the input the table-like KB. Although it might not be the most effective one, we choose this preprocessing strategy, so does the previous works, is because it is simple and fast. There are some related works have tried different ways to represent KB information, but it may need additional attention calculation for entity copying. The comparison between these KB structures are interesting and could be our future works.\\n\\nThank you again for your interest in our work.\"}",
"{\"comment\": \"This paper builds upon Mem2Seq (Madotto et al 2018) to incorporate large external KBs for task-oriented dialogs. To the best of my understanding, the innovations over Mem2Seq are: (1) use of context RNN (instead of just last utterance) as the query in MN encoder, (2) addition of the hidden state of context RNN to the dialog memory (equation 3), (3) the use of global memory pointer 4) additional loss components (Loss_g and Loss_l in equation 11) and (5) two step decoding using sketch tags\\n\\nTo me the biggest pro of the paper is its impressive result on SMD corpus. I appreciate the authors performing a human evaluation of the generated response for SMD. However, I am quite concerned about two main issues. The first issue is in experimental rigor, and second is in dataset preparation, which is also related to experimental rigor but also exposes certain model weaknesses. I elaborate below.\\n\\nExperimental Rigor\\nThis paper uses \\\"entity type\\\" information in Local Memory Decoder (Section 2.3), but compares against all previous work that does NOT use this information. This makes the comparison not sound. In fact, the original Memory Net paper (Bordes & Weston 2017) and many extensions performed two sets of experiments, one vanilla and one with a \\\"match\\\" feature, which had access to the entity type information. This paper compares against all previous papers in the settings that do NOT use this \\\"match\\\" feature. So, it is not clear, whether the improvement is coming from the specific changes in the model, or just by using additional information at training time. For example, QRN paper (Seo et al 2017) with Match feature reports an average OOV error (across 5 bAbI tasks) of 2.3%, which is fairly close the reported results in Table 2 in this paper. In my opinion, this careful experimental comparison is essential before having a clear assessment of this paper.\\n\\nDataset Preparation\\nThis is a comment on this paper and also the Mem2Seq paper. Both these seem to have CHANGED the original training/test datasets to suit their model. In particular, all KB tuples in bAbI follow the format (restaurant_name, relation, value of relation), e.g., (olive_garden, rating, 5), however, Mem2Seq had reversed just the rating-relation tuples (5, rating, olive_garden), because its model allowed it to copy only from the object location, and it needed to copy restaurant name in the dialog. A similar example can be seen in this paper Figure 3 (row 5) where an ARTIFICIAL tuple (chinese_restaurant no_traffic 6_miles, poi, tai_pan) has been added to the SMD KB. Notice that values for different relations (namely poi_type, traffic_info, and distance) for the entity tai_pan have been concatenated in this artificial tuple at subject location and the entity name appears as object. This is just so that it can be copied by the model. Such tuples don't usually exist in normal KBs.\\n\\nI believe that this changing of datasets makes its comparisons with other models unfair. Even if other models were re-trained with this new modified dataset, it is still a severe limitation, because in practice, such tuples may not be found in the KB, and in such situations this model (and Mem2Seq) will not perform well. \\n\\n\\nQuestions to authors\\n1) how does your model compare with \\\"+match\\\" extension of previous models? \\n2) Say we are in a more reasonable setting where we are not given entity type information, but say we are given whether it is a KB entity or not. 
How well will your model perform then?\\n3) Suppose we remove the reverse relations in bAbI and the concatenated poi relations in SMD. How well will your model perform then?\", \"title\": \"Interesting Work, Need Clarity on Experiements\"}",
"{\"title\": \"Re: Reviewer2\", \"comment\": \"Thank you for your review and feedback. The question which you mentioned, the replies are as followed :\\n\\n1. You describe the auxiliary loss on the global pointer, and mention an ablation study that show that this improves performance. Maybe I am overlooking something, but I cannot find this ablation in the paper or appendix. It would be nice to see how large the effect is.\", \"reply\": \"In our evaluation setting, we combine the correctness and the appropriateness, as the criteria we mentioned in the appendix A.3.\"}",
"{\"title\": \"Re: Reviewer3\", \"comment\": \"Thank you for your review and feedback. The question which you mentioned, the replies are as followed :\\n\\n1. In Section 2.1 I am not sure all the symbols are clearly defined.\", \"reply\": \"We mainly followed previous works to compare end-to-end models without human feature engineer efforts. In Table 3, results of the rule-based system from the Eric et al., 2017 are reported, we can observe the improvement over the traditional pipeline solution on the SMD human-human dialogue dataset.\", \"our_model_has_three_loss_functions\": \"Loss_g for global memory pointer, Loss_v for sketch response generation and Loss_l for local memory pointer. During training, they are summed and optimized simultaneously.\\n\\n3. I am missing one more figure. From Fig 2 it's not so straightforward to see how the encoder/decoder along with the shared KB work at the same time (i.e. not independently)\"}",
"{\"title\": \"Re: Reviewer1\", \"comment\": \"Thank you for your review and feedback. The question which you mentioned, the replies are as followed :\\n\\n1. In global memory pointer, the users employ non-normalized probability (non-softmax). What is the difference in performance if one uses softmax?\", \"reply\": \"Sorry that we did not make it clear. The \\u201cSMD\\u201d dataset in our experiment is exactly the same as the \\u201cIn-Car Assistant\\u201d dataset in the Mem2Seq paper (different naming), both came from the paper Eric et al, 2017. Therefore, the results are comparable. Second, We did not include the DSTC2 in our paper is because it is a \\u201chuman-machine\\u201d dataset which is originally designed as a DST task, not a response generation task. That dataset has many noisy system responses as well. We take one of the dialogues as an example, one can observe that the system responses are not properly collected.\\n...\", \"user\": \"anything else\", \"system\": \"The post code of the_lucky_star is the_lucky_star_post_code\\n...\"}",
"{\"metareview\": \"Interesting paper applying memory networks that encode external knowledge (represented in the form of triples) and conversation context for task oriented dialogues. Experiments demonstrate improvements over the state of the art on two public datasets.\\nNotation and presentation in the first version of the paper were not very clear, hence many question and answers were exchanged during the reviews.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"novel architecture for task oriented dialogue systems\"}",
"{\"title\": \"nicely motivated architecture and thorough evaluation, aimed at an interesting and difficult task\", \"review\": [\"The paper presents a new model for reading and writing memory in the context of task-oriented dialogue. The model contains three main components: an encoder, a decoder, and an external KB. The external KB is in the format of an SVO triple store. The encoder encodes the dialogue history and, in doing so, writes its hidden states to memory and generates a \\\"global memory pointer\\\" as its last hidden state. The decoder takes as input the global memory pointer, the encoded dialogue state history, and the external KB and then generates a response using a two-step process in which it 1) generates a template response using tags to designate slots that need filling and 2) looks up the correct filler for each slot using the template+global memory pointer as a query. The authors evaluate the model on a simulated dialogue dataset (bAbI) and on a human-human dataset (Stanford Multi-domain Dialogue or SMD) as well as in a human eval. They show substantial improvements over existing models on SMD (the more interesting of the datasets) in terms of entity F1--i.e. the number of correctly-generated entities in the response. They also show improvement on bAbI specifically on cases involving OOVs. On the human evaluation, they show improvements in terms of both \\\"appropriateness\\\" and \\\"human-likeness\\\".\", \"Overall, I think this is a nice and well-motivated model. I very much appreciate the thoroughness of the evaluation (two different datasets, plus a human evaluation). The level of analysis of the model was also good, although there (inevitably) could have been more. Since it is such a complex model, I would have liked to see more thorough ablations or at least better descriptions of the baselines, in order to better understand which specific pieces of the model yield which types of gains. A few particular questions below:\", \"You describe the auxiliary loss on the global pointer, and mention an ablation study that show that this improves performance. Maybe I am overlooking something, but I cannot find this ablation in the paper or appendix. It would be nice to see how large the effect is.\", \"Following on the above, why no similar auxiliary losses on additional components, e.g. the template generation? Were these tried and deemed unnecessary or vice-versa (i.e. the default was no auxiliary loss and they were only added when needed)? Either way, it would be nice to better communicate the experiments/intuitions that motivated the particular architecture you arrived at.\", \"I really appreciate that you run a human eval. But why not have humans evaluate objective \\\"correctness\\\" as well? It seems trivial to ask people to say whether or not the answer is correct/communicates the same information as the gold.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"End-to-end task oriented system: An encoder-decoder approach with a shared external knowledge base\", \"review\": \"This is, in general, a well-written paper with extensive experimentation.\\n\\nThe authors tried to describe their architecture both with equations as well as graphically. However, I would like to mention the following: \\n\\nIn Section 2.1 I am not sure all the symbols are clearly defined. For example, I could not locate the definitions of n, l etc. Even if they are easy to assume, I am fond of appropriate definitions. Also, I suspect that some symbols, like n, are not used consistently across the manuscript.\\n\\nI am also confused about the loss function. Which loss function is used when?\\n\\nI am missing one more figure. From Fig 2 it's not so straightforward to see how the encoder/decoder along with the shared KB work at the same time (i.e. not independently)\\n\\nIn Section 2.3, it's not clear to me how the expected output word will be picked up from the local memory pointer. Same goes for the entity table.\\n\\nHow can you guarantee that that position n+l+1 is a null token?\\n\\nWhat was the initial query vector and how did you initialise that? Did different initialisations had any effect on performance?\\n\\nIf you can please provide an example of a memory position.\\n\\nAlso, i would like to see a description of how the OOV tasks are handled.\\n\\nFinally, your method is a NN end-to-end one and I was wondering how do you compare not with other end-to-end approaches, but with a traditional approach, such as pydial?\", \"and_some_minor_suggestions\": \"Not all the abbreviations are defined. For example QRN, GMN, KVR. It would also be nice to have the references of the respective methods included in the Tables or their captions.\\n\\nParts of Figs. 1&2 are pixelised. It would be nice to have everything vectorised.\\n\\n I would prefer to see the training details (in fact, I would even be favorable of having more of those) in the main body of the manuscript, rather than in the appendix.\\n\\nThere are some minor typos, such as \\\"our approach that utilizing the recurrent\\\" or \\\"in each datasets\\\"\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Expect more experiments\", \"review\": \"This paper puts forward a new global+local memory pointer network to tackle task-oriented dialogue problem.\\n\\nThe idea of introducing global memory is novel and experimental results show its effectiveness to encode external knowledge in most cases.\\n\\nHere're some comments:\\n1. In global memory pointer, the users employ non-normalized probability (non-softmax). What is the difference in performance if one uses softmax?\\n\\n2. In (11), there's no linear weights. Will higher weights in global/local help?\\n\\n3. As pointed out in ablation study, it's weird that in task5 global memory pointer does not help.\\n\\n4. The main competitor of this algorithm is mem2seq. While mem2seq includes DSTC2 and In-car Assistant, and especially in-car assistant provides the first example dialogue, why does the paper not include expeirments on these two datasets?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
rJlnB3C5Ym | Rethinking the Value of Network Pruning | [
"Zhuang Liu",
"Mingjie Sun",
"Tinghui Zhou",
"Gao Huang",
"Trevor Darrell"
] | Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned ``important'' weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited ``important'' weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization. | [
"network pruning",
"network compression",
"architecture search",
"train from scratch"
] | https://openreview.net/pdf?id=rJlnB3C5Ym | https://openreview.net/forum?id=rJlnB3C5Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1gfhzdi8N",
"ryxuXSjNlN",
"HJxVsKesy4",
"BJggHPloJV",
"Hkgd8UEUyE",
"BkeOueaS1E",
"B1esWvSXkV",
"B1xSR8r71N",
"BygbtISQkV",
"SkeqNuj5R7",
"ryxwEIq9AQ",
"rJg7RrcqAX",
"H1lQcS95R7",
"H1eWSaXYAm",
"SkgWHPrWRX",
"HklvbPrbRX",
"rylNnUrb0Q",
"SylBTVnyAm",
"HJegC-hyCX",
"H1gtxtsJAQ",
"Bke9Aj5yAX",
"rygzEQG1AQ",
"BJlc3l9R6X",
"rylw3GLK6m",
"H1gQ_fLYT7",
"HketQGIKTm",
"BJlsAZIKpQ",
"r1g1UapQ6Q",
"S1xYz1vMaX",
"B1g8x7IGT7",
"Bklxok8M6m",
"BJl8y34z6Q",
"S1glbt1GaQ",
"Sklj-jtWpX",
"HklijvuWaX",
"HylleWeb6X",
"HyeOhcLeaQ",
"SyezQFUlTm",
"HJlmfuLlTX",
"H1xQbvIxam",
"rJgoFVI52m",
"r1g6WR792m",
"HJlT5l3Kh7",
"S1lkK99O2Q",
"SyxGOzHu2Q",
"SJg4IR4ehX",
"Sye8s2mghX",
"H1e_Yl3AoQ",
"Bkx_Yxl0sQ",
"SyxhwPyAim",
"ByeR0LJ0sQ",
"rkxX1pzsoQ",
"S1gZyN0iqX",
"SJx5xF8i5Q",
"rkl1z9EoqX",
"B1eF3Ib5qm",
"H1x0Zzy9cX",
"SkeZDURKcQ",
"BJxaJW0ucm",
"rkg1zlP_5m",
"SyxcUvPQ57",
"r1x6cG_McX",
"Bylhl2NecQ",
"B1eXOLEe9m",
"S1l5qwzl9m"
],
"note_type": [
"official_comment",
"meta_review",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_review",
"official_comment",
"official_review",
"comment",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1551757993651,
1545020703969,
1544386971892,
1544386359637,
1544074832008,
1544044656108,
1543882498756,
1543882444879,
1543882360718,
1543317554161,
1543312943370,
1543312842525,
1543312779389,
1543220536935,
1542702905501,
1542702846848,
1542702763557,
1542599868612,
1542599112005,
1542596848520,
1542593489585,
1542558505847,
1542525105868,
1542181551301,
1542181483472,
1542181409105,
1542181331480,
1541819719169,
1541725968819,
1541722861978,
1541722008214,
1541716958101,
1541695735679,
1541671682928,
1541666723451,
1541632232107,
1541593776025,
1541593370057,
1541593098785,
1541592826824,
1541198979494,
1541189125248,
1541157012675,
1541085814977,
1541063274105,
1540537931840,
1540533406308,
1540436095952,
1540386943667,
1540384612042,
1540384469658,
1540201691052,
1539199961163,
1539168498147,
1539160583348,
1539081905384,
1539072518387,
1539069528889,
1539002596788,
1538973703093,
1538647889885,
1538585236732,
1538440179698,
1538438763356,
1538430865548
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Area_Chair1"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1580/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1580/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Yihui_He1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Yihui_He1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Yihui_He1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"(anonymous)"
],
[
"~Yihui_He1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Yihui_He1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Yihui_He1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Yihui_He1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/AnonReviewer3"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1580/AnonReviewer3"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1580/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/AnonReviewer3"
],
[
"~Brendan_Duke1"
],
[
"ICLR.cc/2019/Conference/Paper1580/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Yang_He2"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Ting-Wu_Chin1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Yang_He2"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Jian-Hao_Luo1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Jian-Hao_Luo1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Jian-Hao_Luo1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Tijmen_Blankevoort1"
],
[
"~Yihui_He1"
],
[
"ICLR.cc/2019/Conference/Paper1580/Authors"
],
[
"~Yihui_He1"
]
],
"structured_content_str": [
"{\"title\": \"Camera-ready version\", \"comment\": \"We uploaded a camera-ready version of the paper. In response to AC's comment, we added more results comparing with the Lottery Ticket Hypothesis in Appendix A, and changed the terminology of \\\"standard\\\" and \\\"non-standard\\\". We would like to thank the AC for the valuable suggestion.\"}",
"{\"metareview\": \"The paper presents a lot of empirical evidence that fine tuning pruned networks is inferior to training them from scratch. These results seem unsurprising in retrospect, but hindsight is 20-20. The reviewers raised a wide range of issues, some of which were addressed and some which were not. I recommend to the authors that they make sure that any claims they draw from their experiments are sufficiently prescribed. E.g., the lottery ticket experiments done by Anonymous in response to this paper show that the random initialization does poorer than restarting with the initial weights (other than in resnet, though this seems possibly due to the learning rate). There is something different in their setting, and so your claims should be properly circumscribed. I don't think the \\\"standard\\\" versus \\\"nonstandard\\\" terminology is appropriate until the actual boundary between these two behaviors is identified. I would recommend the authors make guarded claims here.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Empirical paper casting shade on pruning\"}",
"{\"comment\": \"Dear reviewer,\\n I think the discussion above answers your #5 https://openreview.net/forum?id=rJlnB3C5Ym¬eId=H1gtxtsJAQ¬eId=HklijvuWaX\\n In short, no matter how long the model is fine-tuned, the comparison is unfair, since the original ImageNet model is not converged. If the original model is converged, pruning methods can achieve better performance.\", \"the_fair_setting_should_be\": \"1. train the original model long enough until convergence.\\n 2. prune the converged model and fine-tune until convergence.\\n 3. train the pruned model from scratch until convergence.\", \"title\": \"#5 \\u201cfine-tuning with enough epochs\\u201d.\"}",
"{\"comment\": \"Dear reviewer, plz read the discussion above: https://openreview.net/forum?id=rJlnB3C5Ym¬eId=rJlnB3C5Ym¬eId=HklijvuWaX\", \"title\": \"The scratch-B results are probably not valid\"}",
"{\"title\": \"More convinced about results\", \"comment\": \"Thanks to the authors for answering to all my questions and doubts. Now I am more convinced about the validity of the obtained results.\\nRegarding the comparison with [1], I checked the last version of the paper. In Fig.8 they show that with their initialization and a special initialization they are able to obtain good performance even with very strong pruning (>80%), while the random initialization obtain lower results. However, also in this case, the reason can be a not sufficient training of the random initialization.\\nGlobally, I consider the obtained results very interesting and I think that the paper deserves publication.\"}",
"{\"title\": \"Thanks for the thorough rebuttal and improved draft\", \"comment\": \"I'm satisfied that the author's have addressed many of my primary complaints and improved the exposition of the paper. I have increased my score accordingly.\"}",
"{\"title\": \"Response to AnonReviewer1 [1/3]\", \"comment\": \"Thanks for your detailed feedback! We have followed your suggestions in the first review and uploaded a revision, and a summary can be found here (https://openreview.net/forum?id=rJlnB3C5Ym¬eId=SkeqNuj5R7 ). Now we give our response to your new reply and present some new results as follows:\\n\\n#1#2. ## Fine-tuning saves training time ## In the newest revision, we have emphasized the fast speed of fine-tuning in both introduction and conclusion. In the 3rd paragraph of intro where we present our main finding, we added \\\"However, in both cases, if a pretrained large model is already available, pruning and fine-tuning from it can still greatly save the training time to obtain the efficient model\\\"; in the conclusion, we make this point more visible by making bullet points on when pruning and fine-tuning is faster; we also mentioned \\\"sometimes there are pretrained models available\\\" in the first paragraph of intro.\\t\\n\\n#5. ## Fine-tuning with enough epochs ##\\n1) We agree that this experiment shows fine-tuning is faster, and we've emphasized this in the revision. Indeed 320 epochs are longer than the Scratch-B setting in the paper, and here it is for demonstration purpose (showing fine-tuning for more epochs does not bring much improvement compared with scratch-training) and we didn't use the results here in the paper. Also, if we count both large model training and fine-tuning epochs, scratch-training with 160/320 epochs is still at a disadvantage compared with fine-tuning with 160/320 epochs, since the latter benefits from the epochs for large model training.\\n\\n2) (Q1) In scratch-training we use the learning rate for large model training (initial learning rate 0.1 multiplied by 0.1 at \\u00bd and \\u00be schedule), and for fine-tuning we use a constant low learning rate (0.001). We use this type of learning rate is to follow prior works, and using low learning rate in fine-tuning is also part of the prior belief that \\\"the inherited weights should be preserved\\\". However, we do agree that exploring the learning rate choices of fine-tuning can be useful. \\n\\nHere, we investigate using the same learning rate for fine-tuning as the large model training (initial learning rate 0.1 multiplied by 0.1 at \\u00bd and \\u00be schedule) and fine-tuning for 160 epochs. We call this \\\"fine-tuning with learning rate restart\\\", since if we include the large model training the learning rate first drops from 0.1 to 0.001 and then after pruning it \\\"restarts\\\" at 0.1 and then again drops to 0.001. 
We found this can be better than the fine-tuning or scratch-training results in the original paper in the table below (result tables are also available in this pdf link https://drive.google.com/open?id=1dnMDj_kAYblUjHPm9CAGi3bztkyH5dsj ):\\n\\n------------------------------------------------------------------------------------------------------------------------------\\n Dataset Pruned Model Fine-tune Scratch-E Scratch-B Fine-tune-restart\\n-------------------------------------------------------------------------------------------------------------------------------\\nCIFAR-10 VGG-16-A 93.41(\\u00b10.12) 93.62(\\u00b10.11) 93.78(\\u00b10.15) 93.80(\\u00b10.07)\\nCIFAR-10 ResNet-56-A 92.97(\\u00b10.17) 92.96(\\u00b10.26) 93.09(\\u00b10.14) 93.46(\\u00b10.21)\\nCIFAR-10 ResNet-56-B 92.67(\\u00b10.14) 92.54(\\u00b10.19) 93.05(\\u00b10.18) 93.29(\\u00b10.19)\\nCIFAR-10 ResNet-110-A 93.14(\\u00b10.16) 93.25(\\u00b10.29) 93.22(\\u00b10.22) 93.55(\\u00b10.17)\\nCIFAR-10 ResNet-110-B 92.69(\\u00b10.09) 92.89(\\u00b10.43) 93.60(\\u00b10.25) 93.51(\\u00b10.15)\\n-------------------------------------------------------------------------------------------------------------------------------\"}",
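For concreteness, the schedules discussed in this reply can be written down in a few lines. This is only an illustrative sketch of the description above (the function names are ours, not from the authors' code):

```python
def step_lr(epoch, total_epochs, base_lr=0.1, gamma=0.1):
    """Standard step schedule: decay by `gamma` at 1/2 and 3/4 of training."""
    lr = base_lr
    if epoch >= total_epochs // 2:
        lr *= gamma
    if epoch >= (3 * total_epochs) // 4:
        lr *= gamma
    return lr

def finetune_lr(epoch, total_epochs, restart=False):
    """Fine-tuning schedule for the pruned model.

    Without restart: the conventional constant low rate (0.001).
    With restart: reuse the full large-model schedule, so the rate jumps
    back to 0.1 right after pruning and decays again during fine-tuning.
    """
    return step_lr(epoch, total_epochs) if restart else 0.001

# 160-epoch fine-tuning with restart: 0.1 -> 0.01 (epoch 80) -> 0.001 (epoch 120)
print([round(finetune_lr(e, 160, restart=True), 4) for e in (0, 80, 120, 159)])
```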
"{\"title\": \"Response to AnonReviewer1 [2/3]\", \"comment\": \"(Continuing) Since in the table above fine-tuning takes 160 epochs which is much more than the original 40 epochs, now we consider the epochs for both large model training and fine-tuning when determining scratch-training epochs (our focus is when there are no pretrained large models). We also use the same learning rate restart schedule and compare this with \\\"finetune-restart\\\" in the table below (the learning rate restart is shown to be beneficial for performance in [1]). It can be seen that Scratch-E/B with restart can still perform comparably with this better fine-tuning learning rate schedule. \\n\\n------------------------------------------------------------------------------------------------------------------------\\n Dataset Pruned Model Fine-tune-restart Scratch-E-restart Scratch-B-restart\\n------------------------------------------------------------------------------------------------------------------------\\nCIFAR-10 VGG-16-A 93.80(\\u00b10.07) 93.75(\\u00b10.21) 93.84(\\u00b10.18)\\nCIFAR-10 ResNet-56-A 93.46(\\u00b10.21) 93.44(\\u00b10.08) 93.27(\\u00b10.12)\\nCIFAR-10 ResNet-56-B 93.29(\\u00b10.19) 93.11(\\u00b10.10) 93.36(\\u00b10.29)\\nCIFAR-10 ResNet-110-A 93.55(\\u00b10.17) 93.84(\\u00b10.11) 93.56(\\u00b10.32)\\nCIFAR-10 ResNet-110-B 93.51(\\u00b10.15) 93.54(\\u00b10.26) 93.58(\\u00b10.51)\\n------------------------------------------------------------------------------------------------------------------------\\n\\nThis set of experiments demonstrates that a better choice of learning rate schedule for fine-tuning can boost the accuracy (possibly due to more epochs and the lr \\\"restart\\\" effect [1]), but we still can train the pruned model from scratch and achieve comparable performance. This is in line with our conclusion that training a large model is not absolutely necessary.\\n\\n3) (Q2) This slight result difference is because the two sets of experiments in reply#8 and reply#5 are done independently. \\n\\n4) (Q3) We have plotted some convergence curves (including significantly pruned models, with or without learning rate restart), in Section 2 of this pdf link https://drive.google.com/open?id=1dnMDj_kAYblUjHPm9CAGi3bztkyH5dsj . We will include some of the convergence curves in the next revision. \\n\\n#6-1 ## Failure case on ImageNet ## Thanks for the suggestion. 
In the newest revision, we have added this result and some discussions in Section 4.2 (Table 6) and Appendix G.\\n\\n#6-2 ## Significantly pruned models## \\n1) #Fine-tuning learning rate# We present the results for significantly pruned models (the models we evaluated in our last response) when considering the better restart learning rate schedule for fine-tuning:\\n-------------------------------------------------------------------------------------------------------------------------------\\n Dataset Model Ratio Fine-tune Scratch-E Scratch-B Fine-tune-restart\\n-------------------------------------------------------------------------------------------------------------------------------\\nCIFAR-10 DenseNet-40 80% 92.64(\\u00b10.12) 93.07(\\u00b10.08) 93.61(\\u00b10.12) 93.19(\\u00b10.17)\\nCIFAR-100 DenseNet-40 80% 69.60(\\u00b10.22) 71.04(\\u00b10.36) 71.45(\\u00b10.30) 72.01(\\u00b10.31)\\n========================================================================\\nCIFAR-10 PreResNet-164 80% 91.76(\\u00b10.38) 93.21(\\u00b10.17) 93.49(\\u00b10.20) 92.14(\\u00b10.16)\\nCIFAR-10 PreResNet-164 90% 82.06(\\u00b10.92) 87.55(\\u00b10.68) 88.44(\\u00b10.19) 85.59(\\u00b10.80)\\n-------------------------------------------------------------------------------------------------------------------------------\\n\\n---------------------------------------------------------------------------------------------------------------------------\\n Dataset Model Ratio Fine-tune-restart Scratch-E-restart Scratch-B-restart\\n---------------------------------------------------------------------------------------------------------------------------\\nCIFAR-10 DenseNet-40 80% 93.19(\\u00b10.17) 93.46(\\u00b10.15) 93.23(\\u00b10.34)\\nCIFAR-100 DenseNet-40 80% 72.01(\\u00b10.31) 71.71(\\u00b10.52) 72.29(\\u00b10.41)\\n=====================================================================\\nCIFAR-10 PreResNet-164 80% 92.14(\\u00b10.16) 93.52(\\u00b10.15) 93.15(\\u00b10.43)\\nCIFAR-10 PreResNet-164 90% 85.59(\\u00b10.80) 88.07(\\u00b10.66) 88.26(\\u00b10.45)\\n---------------------------------------------------------------------------------------------------------------------------\\nIt can be seen that under this setting, Scratch-E/B with restart outperforms fine-tuning with restart in all cases.\"}",
"{\"title\": \"Response to AnonReviewer1 [3/3]\", \"comment\": \"2) #ResNet-56 Results# We have added results for large prune ratios for Network Slimming on PreResNet-56 and L1-norm filter pruning on ResNet-56. For PreResNet-56, the prune ratio is 80%. For ResNet-56, we use uniform pruning ratio 90% for each layer. As before, we present results for both the original fine-tuning schedule (denoted as \\u2018Fine-tune\\u2019) and scratch-training/fine-tuning with restart (denoted as \\u2018Fine-tune-restart\\u2019).\", \"network_slimming\": \"------------------------------------------------------------------------------------------------------------------------------\\n Dataset Model Ratio Fine-tune Scratch-E Scratch-B Fine-tune-restart \\n------------------------------------------------------------------------------------------------------------------------------\\nCIFAR-10 PreResNet-56 80% 74.66(\\u00b10.96) 88.25(\\u00b10.38) 88.65(\\u00b10.32) 86.71(\\u00b11.23) \\n------------------------------------------------------------------------------------------------------------------------------\\n\\n------------------------------------------------------------------------------------------------------------------------------\\n Dataset Model Ratio Fine-tune-restart Scratch-E-restart Scratch-B-restart\\n------------------------------------------------------------------------------------------------------------------------------\\nCIFAR-10 PreResNet-56 80% 86.71(\\u00b11.23) 88.61(\\u00b10.62) 88.64(\\u00b10.28) \\n------------------------------------------------------------------------------------------------------------------------------\", \"l1_norm_filter_pruning\": \"------------------------------------------------------------------------------------------------------------------------------\\n Dataset Model Ratio Fine-tune Scratch-E Scratch-B Fine-tune-restart\\n------------------------------------------------------------------------------------------------------------------------------\\nCIFAR-10 ResNet-56 90% 89.17(0.08) 91.02(\\u00b10.12) 91.93(\\u00b10.26) 90.29(\\u00b10.26)\\n------------------------------------------------------------------------------------------------------------------------------\\n\\n------------------------------------------------------------------------------------------------------------------------------\\n Dataset Model Ratio Fine-tune-restart Scratch-E-restart Scratch-B-restart\\n------------------------------------------------------------------------------------------------------------------------------\\nCIFAR-10 ResNet-56 90% 90.29(\\u00b10.26) 91.57(\\u00b10.10) 91.40(\\u00b10.34)\\n------------------------------------------------------------------------------------------------------------------------------\\nIt can be seen that for both pruning methods, training significantly pruned models (even without learning rate restart) from scratch can still outperform fine-tuned models. This supports that \\\"the preserved weights is more essential for fast fine-tuning but less useful for significant pruning ratios\\\".\\n\\nThanks again for your questions and any further discussions and suggestions are welcome.\\n\\n[1] SGDR: Stochastic Gradient Descent with Warm Restarts. Loshchilov et al., ICLR 2017.\"}",
"{\"title\": \"Summary of Revision\", \"comment\": \"Thanks for all the detailed reviews! Following reviewers' suggestions, we have updated the paper and uploaded a revision (Nov 26), and here we give a summary of the major changes.\", \"in_response_to_reviewer_1\": \"1. We include the fine-tuning details in Section 3 and the results and analysis on fine-tuning for more epochs in Appendix D.\\n2. We add the results for significantly pruned models in Appendix C.\\n3. We add more results for non-structured weight pruning on ImageNet in Table 6, with analysis on why in some cases training from scratch cannot match fine-tuning in Section 4.2 and Appendix G.\\n4. We emphasize the fast speed of fine-tuning (intro and conclusion) and prior works' discussion on pruning and architecture learning (first paragraph of Section 5).\", \"in_response_to_reviewer_2\": \"1. We show our observations also hold on soft filter pruning [1] in Appendix A.\\n2. We present some experiments and analysis on the lottery ticket hypothesis [2] in Appendix B.\", \"in_response_to_reviewer_3\": \"1. We add more references when introducing the common belief in the introduction.\\n2. We include more experiments and analysis on (transferring) pruned sparsity patterns in Section 5 and Appendix F. We also raise this point to a major focus of the paper (e.g., in abstract, intro and conclusion).\\n3. We emphasize the fast speed of fine-tuning (intro and conclusion) and prior works' discussion on pruning and architecture learning (first paragraph of Section 5).\", \"others\": \"1. We add experiments on extending the training epochs in Appendix E.\\n2. We visualize the weight distribution in Appendix G.\\n3. We discuss pruning and conventional architecture search methods in the last two paragraphs of Section 5.\\n4. We update the result table of ThiNet (Table 2) following the suggestion from the original authors.\\n5. We add related references and discussions as suggested by other commenters.\\n\\n[1] Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks, Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang, IJCAI 2018.\\n[2] The Lottery Ticket Hypothesis: Finding Small, Trainable Neural Networks. Anonymous, Submitted to ICLR, https://openreview.net/forum?id=rJl-b3RcF7 .\"}",
"{\"title\": \"Response to AnonReviewer3 [1/2]\", \"comment\": \"Thank you for your review! Following your suggestions, we've updated the paper and we're happy to address your concerns. In summary, we would like to explain why we think our finding is surprising, and in the revision we also present more results on the use of sparsity patterns, and raise this point to a major one. \\n\\n1. ## References and Quotes about Common Beliefs ##\\n\\nOur claimed beliefs have references to back up, and we will introduce them in detail here. In [1, 2, 3, 10], pruning and fine-tuning a model is reported to be superior to training the pruned model from scratch. More concretely: \\n1) In Section 4.1.4 and Table 4 of [1], the authors conducted scratch-training experiments and reported that \\u201cShown in Table 4, we observed that it\\u2019s difficult for from scratch counterparts to reach competitive accuracy. Our model outperforms from scratch one.\\u201d \\n2) In Section 4.2 and Table 1 of [2], the authors compared scratch-training with pruning and reported that \\u201cHowever, if we train this model from scratch, the top-1/top-5 accuracy are only 67.00%/87.45% respectively, which is much worse than our pruned network. \\u201d.\\n3) In Section 4 and Table 1 of [3], the authors showed that \\u201cTraining a pruned model from scratch performs worse than retraining a pruned model, which may indicate the difficulty of training a network with a small capacity.\\u201d.\\n4) In Section 1/4.2 and Figure 3, 4, 6 of [10], the authors compare with scratch-training and reported that \\u201cOur experiments show that CNNs pruned by our approach outperform those with the same structures but which are either trained from scratch or randomly pruned.\\u201d.\\n\\nThe reason why previous works obtain \\u201ccontradictory\\u201d results with us might be that this baseline is not carefully or properly evaluated. For example, for [1, 2] we found in authors' code that the accuracy gap could be due to that a simpler-than-standard data augmentation scheme is used in scratch-training. Moreover, previous works didn't compare with Scratch-B.\", \"here_we_provide_more_evidence_of_the_mentioned_common_beliefs\": \"1) In section 2 of [1], the authors stated that \\\"Many researchers have found that deep models suffer from heavy over-parameterization.... However, this redundancy seems necessary during model training, since the highly non-convex optimization is hard to be solved with current techniques.\\\"\\n2) In section 2 of [9], the authors stated that \\\"Pruning is a classic method to reduce network complexity. Compared with training the same structure from scratch, pruning from a pretrained redundant model achieves much better accuracy [2, 3]. This is mainly because of the highly non-convex optimization nature in model training. And, a certain level of redundancy is necessary to guarantee enough capacity during training. Hence, there is a great need to remove such redundancy after training.\\\"\\n3) In the first paragraph of [11], the authors stated that \\u201cPruning and compression are possible because these large nets are hugely overparameterized, and empirical evidence suggests it is easier to train a large net and compress it than to train a smaller net from start.\\u201d\\n\\nThank you for your suggestion, we have added references to the second paragraph of introduction where we introduce previous beliefs in the revision.\\n\\n2. 
## Why we think our finding is surprising ##\", \"we_would_also_like_to_explain_why_our_finding_is_surprising_in_four_aspects\": \"1). #Prior Results# As we mentioned above, previous works have reported that scratch-training cannot match the accuracy of pruning and fine-tuning. In contrast, we revealed that same-level accuracy can be achieved if we ensure a proper and fair scratch-training baseline, thus there's no particular difficulty in training a small model from scratch.\\n\\n2). #Necessity of Fine-tuning# In Han et al. 15 [4], the purpose of retaining weights is described as achieving a good final solution instead of saving retraining time: \\\"During retraining, it is better to retain the weights from the initial training phase for the connections that survived pruning than it is to re-initialize the pruned layers. CNNs contain fragile co-adapted features: gradient descent is able to find a good solution when the network is initially trained, but not after re-initializing some layers and retraining them. So when we retrain the pruned layers, we should keep the surviving parameters instead of re-initializing them.\\\" Despite the fact that saving time is an important benefit of fine-tuning, it was not brought up as the major one. We have added this reference in our introduction of \\\"common beliefs\\\". \\n\\n3). #Benefit of Fine-tuning# We agree that an important benefit of pruning is to save the training time when given a trained model (as was mentioned in the conclusion section).\"}",
"{\"title\": \"Response to AnonReviewer3 [2/2]\", \"comment\": \"(continuing on point (3)) In the revision, we have made this point more visible in a bullet point, and in the introduction we've also included \\\"However, in both cases, if a pretrained large model is already available, pruning and fine-tuning from it can still greatly save the training time to obtain the efficient model\\\".\\n\\nHowever, in the efficient deep learning literature, researchers usually emphasize the *inference time* and *model size* more than *training time* (e.g., see introductions of [4,7]). The training can be done in high-end clusters while the inference sometimes must be done in low-end mobile devices. Moreover, in practice, pretrained models are not always available like ImageNet models, and in some pruning methods, the pre-training of the large model needs to be customized (e.g., use special sparsity regularization [5,6]) so a normally trained model cannot help. In iterative pruning as in [4], the training time-saving is particularly useful, but later works [1, 2, 3, 6, 9] mostly use one-shot pruning. Combining with point 2, we think the major benefit of keeping the weights was believed to be achieving a good final model. \\n\\n4). #Understate Architecture Learning# The prior belief was mentioned as \\\"both the pruned architecture and its associated weights are believed to be essential for obtaining the final efficient model\\\", so we agree that the architecture was believed to be important in previous works too. In the revision, we've emphasized that some prior works (e.g., Han et al. 15 [4]) have made connections between architecture learning and pruning, in the first paragraph of section 5.\\n\\nHowever, There are still many works that use predefined pruned architectures [1,2,3], and they didn't mention the learning of architecture in pruning. Also, the popularity of predefined methods supports that, pruning methods are not mostly treated as learning architectures. Indeed, our work is among the first to draw a distinction between predefined methods and automatic methods. Our results also suggest that for predefined pruning methods, one could train the target model directly, which is not previously shown. Even for automatic pruning methods [4,5,6], the emphasis was not mainly on learning architecture: the comparison with uniform pruning or other architecture search methods are not conducted, and the connection with architecture learning is mostly mentioned only in related work.\", \"others\": \"Thank you for pointing out, we have removed the assertion that \\\"each of the three stages is considered as indispensable\\\", as we consider this statement not accurate in describing the previous belief.\\n\\n3. ## Experiments and Emphasis on (Transferring) Pruned Sparsity Patterns ##\\n\\nThanks for your suggestions on this set of experiments. We have investigated this point on more pruning methods, datasets and architectures. We give a brief summary below, and complete results with analysis can be found in the revision (Section 5 and Appendix F).\\n\\nSimilar to non-structured weight level pruning, we performed the experiments on the channel-level Network Slimming [5]. 
We introduce two design principles: 1) \\\"Guided Pruning\\\": we use the average number of channels in each layer stage (layers with the same feature map size) from pruned architectures to construct a new set of architectures; 2) \\\"Transferred Guided Pruning\\\": we distill the patterns of pruned architectures to design models on different datasets and architectures. This is similar to \\\"Guided Sparsifying\\\" and \\\"Transferred Guided Sparsifying\\\" for non-structured weight pruning in the original submission.\\n\\nWe present the results of Network Slimming for VGG-19 on CIFAR-100, DenseNet-40 on CIFAR-100 and VGG-19 on ImageNet. In these three cases, we transfer the average pruned sparsity patterns from VGG-16 on CIFAR-10, DenseNet-76 on CIFAR-100 and VGG-11 on ImageNet respectively. We find that architectures obtained by transferred pruned patterns are better than uniformly pruned models, and are close to the pruned architectures, in terms of parameter efficiency. We observe that in these cases, when the prune ratio increases, the later stages are more likely to be pruned. This suggests that there is more redundancy in the later stages and pruning can help us identify it.\\n\\nIn Appendix F, we show that there exist cases when the pruned architectures are not much better than uniformly pruned ones, for both non-structured weight pruning and Network Slimming. In these cases, we find the sparsity patterns in pruned architectures are close to uniform. This might be due to that for those architectures the redundancy is spread more evenly across layers.\\n\\nOther than presenting more results and analysis in Section 5 and Appendix F, we have also raised this point to a major focal point (e.g., emphasize it more in abstract, introduction, background, etc.) in the revision. \\n\\nThank you for your review again! Any further questions or suggestions are welcome.\"}",
"{\"title\": \"References\", \"comment\": \"[1] Channel Pruning for Accelerating Very Deep Neural Networks. He et al., ICCV 2017.\\n[2] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. Luo et al., ICCV 2017.\\n[3] Pruning Filters for Efficient ConvNets. Li et al., ICLR 2017.\\n[4] Learning both Weights and Connections for Efficient Neural Networks. Han et al., NIPS 2015.\\n[5] Learning Efficient Convolutional Networks through Network Slimming. Liu et al., ICCV 2017.\\n[6] Data-Driven Sparse Structure Selection for Deep Neural Networks. Huang et al., ECCV 2018.\\n[7] Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. Han et al., ICLR 2016.\\n[8] Pruning Convolutional Neural Networks for Resource Efficient Inference. Molchanov et al., ICLR 2017.\\n[9] AutoPruner: An End-to-End Trainable Filter Pruning Method for Efficient Deep Model Inference. Luo et al. arXiv, 2018.\\n[10] NISP: Pruning Networks using Neuron Importance Score Propagation. Yu et al., CVPR 2018.\\n[11] \\u201cLearning-Compression\\u201d Algorithms for Neural Net Pruning. Carreira-Perpinan et al., CVPR 2018.\"}",
"{\"title\": \"Review after rebuttal\", \"comment\": \"Thanks for the detailed reply and additional experiments. Just had more comments and questions\\n\\n#1#2 It would be great if the authors can make it clear that training is not the always the first step and the value of pruning in introduction rather than mentioning in conclusion. Saving training time is still an important factor when training from scratch is expensive. \\n\\n#5 \\u201cfine-tuning with enough epochs\\u201d. \\nI understand that the authors are mainly questioning about whether training from scratch is necessarily bad than pruning and fine-tuning. The author do find that \\u201ctraining from scratch is better when the number of epochs is large enough\\u201d. But we see that fine-tuning ResNet-56 A/B with 20 epochs does outperform (or is equivalent to) scratch training for the first 160 epochs, which validates \\u201cfine-tuning is faster to converge\\u201d. However, training 320 epochs (16x more comparing to 20 epochs fine-tuning and 2x comparing with normal training from scratch) is not quite coherent with the setting of \\u201cscratch B\\u201d, as ResNet-56 B just reduce 27% FLOPs. \\n\\nThe other part of the question is still unclear, i.e., the author claimed that the accuracy of an architecture is determined by the architecture itself, but not the initialization, then both fine-tuning and scratch training should reach equivalent solution if they are well trained enough, regardless of the initialization or pruning method. The learning rate for scratch training is already well known (learning rate drop brings boost the accuracy). However, learning rate schedule for fine-tuning (especially for significantly pruned model as for reply#6) is not well explored. I wonder whether that a carefully tuned learning rate/hyperparameters for fine-tuning may get the same or better performance as scratch training.\", \"questions\": \"- Are both methods using the same learning rate schedule between epoch 160 and epoch 320?\\n- The ResNets-56 A/B results in the reply#8 does not match the reported performance in reply#5. e.g., it shows 92.67(0.09) for ResNet-56-B with 40-epochs fine-tuning in reply5, but it turns out to be 92.68(\\u00b10.19) in reply#8.\\n- It would be great if the authors can add convergence curves for fine-tuning and scratch training for easier comparison.\\n\\n\\n#6 The failure case for sparse pruning on ImageNet is interesting and it would be great to have the imageNet result reported and discussed. \\n\\nThe authors find that \\u201cwhen the pruned ratio is large enough, training from scratch is better by a even larger margin than fine-tuning\\u201d. This could be due to following reasons: \\n 1. When the pruning ratio is large, the pruned model with preserved weights is significantly different from the original model, and fine-tuning with small learning rate and limited number of epochs is not enough to recover the accuracy. As mentioned earlier, tuning the hyperparameters for fine-tuning based on pruning ratio might improve the performance of fine-tuning. \\n 2. Though the pruning ratio is large, the model used in this experiment may still have large capacity to reach good performance. How about pruning ResNet-56 with significant pruning ratios? \\n\\nFinally, based on above observations, it seems to me that the preserved weights is more essential for fast fine-tuning but less useful for significant pruning ratios.\"}",
"{\"title\": \"Response to AnonReviewer2 [1/2]\", \"comment\": \"Thank you for your detailed review! Your questions are valuable, and we are happy to address your concerns. In summary, we explain why our work does *not* contradict with existing works and we show that our observation also holds on the soft filter pruning method [2]. Details are given below:\\n\\n1. ##Contradiction with prior works## \\nFirst, we would like to clarify that our results are not contradictory to [3] (Zhu & Gupta). In [3], the authors demonstrate that large sparse models outperform small dense models with the same memory footprint. Here, \\u201clarge sparse models\\u201d refers to the models pruned with non-structured weight level pruning, while \\u201csmall dense models\\u201d refers to models with the same memory footprint to \\u201clarge sparse models\\u201d, but they are of different architectures (one is sparse another is dense). However, in our paper, we compare the same model architectures, and the only difference is that one is obtained by pruning and fine-tuning while the other is trained from scratch. Our results demonstrate that the architecture is more important than inherited weights, while [3] demonstrate a large-sparse architecture is better than a small-dense architecture. \\n\\nSecond, our results are not contradictory to previous works on pruning methods. Previous works either didn't evaluate the scratch-training baseline [2, 6, 9, 10] or didn't choose a strong enough baseline for scratch-training. In [4, 5], while the authors showed that pruned models trained from scratch cannot match the accuracy of pruning and fine-tuning, we found that 1) they used a simpler-than-standard data augmentation scheme for training from scratch in the released code, and 2) they didn\\u2019t evaluate under the scratch-B setup.\\n\\n2. ## Contradiction with [1] ##\\nFirst, we would like to clarify that our results are not contradictory to results in [1]. The main experiments in [1] are done using non-standard hyperparameters (e.g., very small learning rates) with very small networks, while we use standard hyperparameters with the same modern network architectures as in each pruning method's original paper. In their Appendix D (in the OpenReview version linked below), they show that their hypothesis does not hold when they use standard learning rate on ResNet-18: \\\"We find that, at the learning rate used in the paper that introduced ResNet-18 (He et al., 2016), iterative pruning does not find winning tickets\\\". In their control experiments, that fact that using the \\\"correct initialization\\\" is better could be due to the learning rate being too small, and as a result, the weights are not changed much from the initialization during training. However, the small learning rate they used cannot lead to state-of-the-art performance, which means their hypothesis is more restricted. Moreover, the experiments in [1] are only on non-structured weight pruning, so it is unclear whether the hypothesis holds on other levels of pruning (e.g., channel/filter pruning).\\n\\nHowever, we agree that investigating whether using the \\\"correct initialization\\\" could bring benefit in standard training hyperparameters is very useful. We have done the control experiments as in [1] to verify this point. 
Our results are as follows:\", \"non_structured_weight_level_pruning\": \"----------------------------------------------------------------------------------------------\\n Dataset Model Ratio \\\"Correct Init\\\" Random Init\\n----------------------------------------------------------------------------------------------\\nCIFAR-10 VGG-19 30% 93.69(\\u00b10.13) 93.63(\\u00b10.16)\\nCIFAR-10 VGG-19 80% 93.58(\\u00b10.15) 93.65(\\u00b10.19)\\nCIFAR-10 PreResNet-110 30% 94.89(\\u00b10.14) 94.97(\\u00b10.10)\\nCIFAR-10 PreResNet-110 80% 93.87(\\u00b10.15) 93.79(\\u00b10.17)\\nCIFAR-100 VGG-19 30% 72.57(\\u00b10.58) 72.57(\\u00b10.23)\\nCIFAR-100 VGG-19 50% 72.75(\\u00b10.22) 72.31(\\u00b10.19)\\nCIFAR-100 PreResNet-110 30% 76.41(\\u00b10.15) 76.60(\\u00b10.10)\\nCIFAR-100 PreResNet-110 50% 75.61(\\u00b10.12) 75.48(\\u00b10.17)\\n-----------------------------------------------------------------------------------------------\", \"l1_norm_filter_pruning\": \"--------------------------------------------------------------------------\\nPruned Model \\\"Correct Init\\\" Random Init\\n-------------------------------------------------------------------------\\nVGG-16-A 93.62(0.09) 93.60(0.15)\\nResNet-56-A 92.72(0.10) 92.75(0.26)\\nResNet-56-B 92.78(0.23) 92.90(0.27)\\nResNet-110-A 93.21(0.09) 93.21(0.21)\\nResNet-110-B 93.15(0.12) 93.37(0.29)\\n--------------------------------------------------------------------------\\nWe observe that when we use standard learning rate and other hyperparameters, using \\\"correct initialization\\\" does not provide an advantage over random initialization.\"}",
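The control experiment described above ("correct" vs. random initialization) can be sketched roughly as follows. This is a simplified rendering under our own assumptions (per-layer magnitude pruning, a placeholder train_fn, and an ad-hoc scale for the random re-init); it is not the code of [1] or of this paper, and a faithful run would also re-apply the masks at every optimizer step during retraining:

```python
import copy
import torch
import torch.nn as nn

def lottery_control(model, train_fn, prune_ratio, winning=True):
    """Prune by weight magnitude, then reset surviving weights either to
    their original initialization ("winning ticket") or to random values."""
    init_state = copy.deepcopy(model.state_dict())  # snapshot at initialization
    train_fn(model)                                 # train the unpruned model

    masks = {}
    for name, p in model.named_parameters():
        if p.dim() > 1:                             # prune weight matrices only
            k = max(1, int(p.numel() * prune_ratio))
            thresh = p.abs().flatten().kthvalue(k).values
            masks[name] = (p.abs() > thresh).float()

    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                src = init_state[name] if winning else torch.randn_like(p) * p.std()
                p.copy_(src * masks[name])          # sparse re-initialized weights
    return model, masks

# Toy usage with a no-op "training" function:
net = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Linear(10, 2))
net, masks = lottery_control(net, train_fn=lambda m: None, prune_ratio=0.8, winning=False)
print({k: int(v.sum()) for k, v in masks.items()})  # surviving weights per layer
```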
"{\"title\": \"Response to AnonReviewer2 [2/2]\", \"comment\": \"3. ## Whether our observation holds on soft filter pruning [2] (SFP) ##\\n\\nFirst, please note that the two models you mentioned are for two different datasets, i.e., the ResNet-56 is for CIFAR-10, while the ResNet-50 is for ImageNet. These two cases are not directly comparable, since pruning a model for the small CIFAR dataset (50K images, 10 classes) is far easier than compressing a model for the large scale ImageNet task (1.2M images, 1000 classes). Second, in Table 1 of [2], ResNet-56-30% obtaining a gain of 0.19% is a result of running SFP on a *pretrained model*, while our result is not. In Table 1 of SFP [2], for ResNet-56-30%, the result without using pretrained model is 0.49% accuracy drop.\\n\\nWe agree it is meaningful to see whether our observation holds on SFP [2]. We have done experiments to verify our observation on SFP using authors' code [8]. The results are as follows: \\n\\nCIFAR-10 (not using pretrained models):\\n--------------------------------------------------------------------------------------------------------------\\n Model Ratio Pruned(Paper) Pruned(Rerun) Scratch-E Scratch-B\\n--------------------------------------------------------------------------------------------------------------\\nResNet-20 10% 92.24(\\u00b10.33) 92.00(\\u00b10.32) 92.22(\\u00b10.15) 92.13(\\u00b10.10)\\nResNet-20 20% 91.20(\\u00b10.30) 91.50(\\u00b10.30) 91.62(\\u00b10.12) 91.67(\\u00b10.15)\\nResNet-20 30% 90.83(\\u00b10.31) 90.78(\\u00b10.15) 90.93(\\u00b10.10) 91.07(\\u00b10.23)\\n===================================================================\\nResNet-32 10% 93.22(\\u00b10.09) 93.28(\\u00b10.05) 93.42(\\u00b10.40) 93.08(\\u00b10.13)\\nResNet-32 20% 92.63(\\u00b10.37) 92.50(\\u00b10.17) 92.68(\\u00b10.20) 92.96(\\u00b10.11)\\nResNet-32 30% 92.08(\\u00b10.08) 92.02(\\u00b10.11) 92.37(\\u00b10.12) 92.56(\\u00b10.06)\\n===================================================================\\nResNet-56 10% 93.89(\\u00b10.19) 93.77(\\u00b10.07) 93.42(\\u00b10.40) 93.98(\\u00b10.21)\\nResNet-56 20% 93.47(\\u00b10.24) 93.14(\\u00b10.42) 93.44(\\u00b10.05) 93.71(\\u00b10.14)\\nResNet-56 30% 93.10(\\u00b10.20) 93.01(\\u00b10.09) 93.19(\\u00b10.20) 93.57(\\u00b10.12)\\nResNet-56 40% 92.26(\\u00b10.31) 92.59(\\u00b10.14) 92.80(\\u00b10.25) 93.07(\\u00b10.25)\\n===================================================================\\nResNet-110 10% 93.83(\\u00b10.19) 93.60(\\u00b10.50) 94.21(\\u00b10.39) 94.13(\\u00b10.37)\\nResNet-110 20% 93.93(\\u00b10.41) 93.63(\\u00b10.44) 93.52(\\u00b10.18) 94.29(\\u00b10.18)\\nResNet-110 30% 93.38(\\u00b10.30) 93.26(\\u00b10.37) 93.70(\\u00b10.16) 93.92(\\u00b10.13)\\n---------------------------------------------------------------------------------------------------------------\\n\\nCIFAR-10 (using pretrained models):\\n--------------------------------------------------------------------------------------------------------------\\n Model Ratio Pruned(Paper) Pruned(rerun) Scratch-E Scratch-B\\n--------------------------------------------------------------------------------------------------------------\\nResNet-56 30% 93.78(\\u00b10.22) 93.51(\\u00b10.26) 94.45(\\u00b10.30) 93.77(\\u00b10.25)\\nResNet-56 40% 93.35(\\u00b10.31) 93.10(\\u00b10.34) 93.84(\\u00b10.16) 93.41(\\u00b10.08)\\nResNet-110 30% 93.86(\\u00b10.21) 93.46(\\u00b10.19) 93.89(\\u00b10.17) 
94.37(\\u00b10.24)\\n--------------------------------------------------------------------------------------------------------------\\n\\nImageNet (not using pretrained models):\\n-------------------------------------------------------------------------------------------\\n Model Ratio Pruned Scratch-E Scratch-B\\n-------------------------------------------------------------------------------------------\\nResNet-34 30% 71.83 71.67 72.97\\nResNet-50 30% 74.61 74.98 75.56 \\n-------------------------------------------------------------------------------------------\\nIt can be seen that Scratch-E outperforms pruned models for most of the time and scratch-B outperforms SFP in nearly all cases. Therefore, our observation also holds on SFP.\", \"additional_comments\": \"Thank you for your suggestion. In our experiments, we follow the standard training settings for CIFAR and ImageNet, where early stopping is not directly applicable due to the step-wise learning rate schedule.\\n\\nThank you for your review again! If you have any further questions, we are very happy to answer.\"}",
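Soft filter pruning [2], as we understand it, differs from hard pruning in that the zeroed filters stay in the model and keep receiving gradient updates, so they can recover in later epochs. A minimal sketch of the per-epoch soft-pruning step (our simplification, not the authors' released code [8]):

```python
import torch

def soft_filter_prune_epoch(conv_weights, prune_ratio):
    """After each training epoch, zero the lowest-norm filters in every
    conv layer, but leave them trainable (they may recover later)."""
    for w in conv_weights:                        # each w: (out, in, kH, kW)
        n = w.size(0)
        n_prune = int(n * prune_ratio)
        if n_prune == 0:
            continue
        norms = w.view(n, -1).norm(p=2, dim=1)    # per-filter L2 norm
        drop = torch.topk(norms, n_prune, largest=False).indices
        with torch.no_grad():
            w[drop] = 0.0                         # soft-pruned, not removed

# Toy usage: prune 30% of filters in one layer, as in the 30% rows above.
layer = torch.nn.Parameter(torch.randn(16, 8, 3, 3))
soft_filter_prune_epoch([layer], prune_ratio=0.3)
print((layer.view(16, -1).abs().sum(dim=1) == 0).sum().item())  # 4 zeroed filters
```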
"{\"title\": \"Reference\", \"comment\": \"[1] The Lottery Ticket Hypothesis: Finding Small, Trainable Neural Networks, Jonathan Frankle, Michael Carbin, arXiv 2018. https://openreview.net/forum?id=rJl-b3RcF7\\n[2] Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks, Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang, arXiv 2018.\\n[3] To prune, or not to prune: exploring the efficacy of pruning for model compression. Zhu et al., NIPS workshop 2017.\\n[4] Channel Pruning for Accelerating Very Deep Neural Networks. He et al., ICCV 2017.\\n[5] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. Luo et al., ICCV 2017.\\n[6] Learning both Weights and Connections for Efficient Neural Networks. Han et al., NIPS 2015.\\n[7] Pruning Filters for Efficient ConvNets. Li et al., ICLR 2017.\\n[8] https://github.com/he-y/soft-filter-pruning \\n[9] Data-Driven Sparse Structure Selection for Deep Neural Networks. Huang et al., ECCV 2018.\\n[10] Learning Efficient Convolutional Networks through Network Slimming. Liu et al., ICCV 2017.\"}",
"{\"comment\": \"1) Commonly convergence is measured by epochs. Squeezenet is hundreds times smaller than vgg. Why not train it with 1 epoch?\\nMore importantly, people care about inference time of compact models but not training time. It requires weeks to train vgg several years ago, but now it only requires days. \\n\\n2) Comparing under the same training time budget is not reasonable. If training time budget is the concern, why not use pruning-finetuning?\\nPruning-finetuning only requires roughly 9 epochs, since it starts from the pre-trained models. Then according to the definition of scratch-B(udget), scratch-B should only be trained for 18 (9x2) epochs for fair comparison.\\n\\n3) If the main point of your paper is ImageNet models are not converged, I would totally agree.\", \"for_example\": \"(1) Suppose the original model is trained for 10 iterations and gets 1% acc. \\n(2) Then the model is pruned and finetuned for 1 iteration. Suppose it recovers the acc to 1%. \\n(3) Scratch-B is trained 20 for iterations and gets 5% acc. \\nCan the conclusion \\\"scratch-trained models are better than the fine-tuned models\\\" be drawn from this experiment?\", \"title\": \"comparison under the same computations does not make sense\"}",
"{\"title\": \"Reply\", \"comment\": \"We understand your point about convergence. But if the \\\"convergence\\\" you mentioned takes an inaffordable budget/unreasonable epochs to achieve, it is not meaningful to consider in practice. In practice, one does not try to achieve every marginal benefit by training the model for unreasonably long. Also, convergence are not only measurable in epochs; if we consider computations, scratch-B and large model are trained to the same degree of convergence.\\n\\n\\\"It is a fact that L1-norm pruning is the worst among the channel pruning methods listed in your experiments.\\\" This statement needs to be backed up by a direct comparison on the same model/pruned with other methods. And even if it is slightly worse than other methods, it does not make the results \\\"not convincing\\\".\"}",
"{\"comment\": \"The point is that the performance is gained from convergence.\\nIf VGG with 360 epochs converges but not VGG-5x 360 epochs, then you train VGG-5x for 720 epochs and get better results, that experiment will be convincing but not this one. Maybe you can try smaller models like MobileNets. They take less time to train.\\n\\nIt is a fact that L1-norm pruning is the worst among the channel pruning methods listed in your experiments.\", \"title\": \"reply\"}",
"{\"title\": \"Reply\", \"comment\": \"In practice, it is not possible to train the model for infinitely long. We've already extended the large model training epochs to 2x standard, and the Scratch-B for VGG-5x actually uses 2.5x less training budget so the fine-tuning result is already at a significant advantage. We think this experiment is enough to support our point. Also, we don't think being the first channel pruning method makes L1-norm pruning the \\\"worst\\\" or its results \\\"not convincing\\\".\"}",
"{\"comment\": \"(1) Since VGG-5x with 360 epochs is better than VGG-5x with 180 epochs, you should check the convergence of VGG with 360 epochs.\\n\\n(2) I don't think experiments with L1-norm filter pruning are convincing, because L1-norm filter pruning is the first and the worst channel pruning method.\", \"title\": \"The first result simply means VGG-16 still need more epochs to converge.\"}",
"{\"title\": \"Sanity check results\", \"comment\": \"As we mentioned before, when the large model VGG is trained for 180 epochs, the pruned model VGG-5x should be trained for 360 epochs for scratch-B (actually it should be 900 epochs since the model saves 5x Flops, but we use 360 here). Now we have the result for this case:\\n----------------------------------------------------------------\\n unpruned VGG-16-5x\\n----------------------------------------------------------------\\n Original paper 71.03 \\u22122.67 (fine-tuned)\\n Ours 74.78 \\u22122.55 (scratch-B)\\n----------------------------------------------------------------\\nWe can observe that the accuracy drop is smaller than fine-tuned method. Therefore our observation still holds.\\n\\nFor L1-norm filter pruning, we have also done experiments to extend the large model training schedule from 160 to 300 epochs. The results are as follows:\\n--------------------------------------------------------------------------------------------------------\\nPruned Model Baseline Fine-tuned Scratch-E Scratch-B\\n--------------------------------------------------------------------------------------------------------\\nResNet-110-A 93.82(\\u00b10.32) 93.75(\\u00b10.24) 93.80(\\u00b10.15) 94.10(\\u00b10.12)\\nResNet-110-B 93.82(\\u00b10.32) 93.36(\\u00b10.28) 93.75(\\u00b10.16) 93.90(\\u00b10.17)\\n--------------------------------------------------------------------------------------------------------\\nIt can be seen that scratch trained models still consistently outperforms fine-tuned models.\\n\\nWe are considering including these results in Appendix.\"}",
"{\"title\": \"Response to AnonReviewer1 [1/4]\", \"comment\": \"Thank you for your review and detailed questions! We are happy to address your concerns:\\n\\n1. ## Two common beliefs ## The two \\u201ccommon beliefs\\u201d indeed can be combined into one statement, but we would like to keep them separate to emphasize two slightly different perspectives: 1) Optimization. Given that a large model presumably provides stronger optimization power, it is believed that training a large model first is necessary for finding optimal \\\"important\\\" weights to condense the model; 2) Initialization. Given \\\"important\\\" weights pruned from a large model, the common belief is that it is necessary to inherit them to achieve a final efficient model, even if we have enough training resources.\\n\\n2. ## Training as first step ## We agree that training the large model is not always the first step. In fact, when there exist pretrained models, it is a lot faster to prune and fine-tune than training model from scratch. This point is mentioned at the conclusion part of the paper (second last paragraph). We will further emphasize this point by making it more visible (as one of the benefits of pruning overtraining from scratch, in bullet points at the conclusion) and also state this in our introduction.\\n\\nHowever, in many practical applications, pretrained models may not be available, and one has to train specialized large model by him/herself. In addition, some pruning methods (e.g., [1, 2]) impose additional sparsity constraints during the large model training process, in which case one has to train the large model with customized settings and it is not possible to directly get a pretrained model from others. \\n\\nIn the efficient deep learning literature, what seems more important is the *inference speed and model size*, rather than *training time*, because inference/storage sometimes must be on low-end mobile devices while training can be done in high-end GPUs. Most previous works' emphasis is on pursuing a final efficient model for inference on low-resource settings (e.g., see introductions of [5, 6]), rather than optimizing the training time. Further, it was believed that pruning and fine-tuning is not only for fast training speed, but it also gives an efficiency that is not reachable by naive training from scratch (see [1, 2], where training from scratch is reported to be worse than pruning and fine-tuning. We found this is due to a simpler-than-standard data augmentation scheme is used for training from scratch in authors' code and also they didn\\u2019t evaluate scratch-B as a baseline). Our work shows that when one is not constrained by training resource and only cares about inference efficiency, pruning from a large model does not give an efficiency (accuracy/resource tradeoff) that is not reachable by direct training from scratch.\\n\\n3. ## Time/complexity for pruning/fine-tuning ## Thanks for your suggestion. Here we provide some details about this.\", \"time\": \"For most pruning methods examined in this paper, pruning takes a negligible amount of time (several seconds). For reconstruction-based methods (regression-based pruning [1] & ThiNet [2]) where pruning is formulated as an optimization problem, it can take longer time (several minutes), but the time is still short compared to training. For fine-tuning, in our experiments, fine-tuning takes at most one-fourth of the standard training schedule. For CIFAR, scratch-training/fine-tuning takes 160/40 epochs. 
For ImageNet, scratch-training/fine-tuning takes 90/20 epochs. The time for training/fine-tuning is in proportion with the number of epochs for the same model, so the comparison on time is straightforward. We will include this information in the paper.\", \"complexity\": \"In our experience, for the pruning process, weight-norm based methods are easier to implement, while reconstruction-based methods are not as straightforward. In addition, in training the large model, some special optimization techniques [3, 7] can be required for sparsity regularization, to facilitate the later pruning, which also requires some effort to implement. We have mentioned the engineering effort required as a bullet point in our conclusion, but implementation complexity is a more subjective thing that is hard to precisely measure.\\n\\n4. ## Second value of pruning ## Thank you for pointing out. This is a point we are trying to make through some experiments in Section 5 (figure 3 middle and right). In the revision, we will include more results on this point (e.g., more results on network slimming), raise it to a major focus of the paper and mention it in the abstract.\"}",
"{\"title\": \"Response to AnonReviewer1 [2/4]\", \"comment\": \"5. ## Fine-tuning with enough epochs ## Thank you for asking, this is an important point which was not explained in detail in the paper. Fine-tuning usually uses a small learning rate [1, 2, 4, 5] (usually the final learning rate during large model training) to preserve the inherited weights, which are believed to be helpful for the small model. Using small learning rate for fine-tuning is a part of the previous belief on the necessity of preserving the \\\"important\\\" weights. However, it might cause the model to be stuck in a local minimum. But training pruned model from scratch uses the same learning rate (decays from large to small) as large model training, which is the reason why it can sometimes outperform fine-tuning if enough epochs are trained. We will add this explanation in the revision.\\n \\nWe have done experiments to illustrate the effects of the number of epochs on both fine-tuning and training from scratch. We choose l1-norm pruning on ResNet-56. The results are as follows: \\n--------------------------------------------------------------------------------------------------------\\nResNet-56-A 20 40 80 160 320\\n--------------------------------------------------------------------------------------------------------\\nScratch 86.64(0.41) 90.12(0.26) 91.71(0.13) 92.96(0.26) 93.60(0.21)\\nFine-tune 93.00(0.18) 92.94(0.05) 92.95(0.17) 92.95(0.17) 93.02(0.20)\\n--------------------------------------------------------------------------------------------------------\\n\\n--------------------------------------------------------------------------------------------------------\\nResNet-56-B 20 40 80 160 320\\n--------------------------------------------------------------------------------------------------------\\nScratch 86.87(0.41) 89.71(0.36) 91.56(0.10) 92.54(0.19) 93.41(0.21)\\nFine-tune 92.66(0.10) 92.67(0.09) 92.68(0.10) 92.64(0.10) 92.70(0.16)\\n--------------------------------------------------------------------------------------------------------\\nIt can be seen that despite fine-tuning is faster to converge, training from scratch is better when the number of epochs is large enough.\\n\\n6. ## Significantly pruned models ## The ThiNet model \\\"VGG-Tiny\\\" evaluated in our paper is a significantly pruned model (FLOPs reduced by 15x), and the same observation still holds. We have done more experiments on significantly pruned models using Network Slimming [3]. The pruning ratio for those models is at most 60% in the original paper but here we prune the models by 80% and 90%. Here are the results:\\n-------------------------------------------------------------------------------------------------------\\n Dataset Model Ratio Fine-tuned Scratch-E Scratch-B\\n-------------------------------------------------------------------------------------------------------\\nCIFAR-10 DenseNet-40 80% 92.64(\\u00b10.12) 93.07(\\u00b10.08) 93.61(\\u00b10.12)\\nCIFAR-100 DenseNet-40 80% 69.60(\\u00b10.22) 71.04(\\u00b10.36) 71.45(\\u00b10.30)\\n===========================================================\\nCIFAR-10 PreResNet-164 80% 91.76(\\u00b10.38) 93.21(\\u00b10.17) 93.49(\\u00b10.20)\\nCIFAR-10 PreResNet-164 90% 82.06(\\u00b10.92) 87.55(\\u00b10.68) 88.44(\\u00b10.19)\\n-------------------------------------------------------------------------------------------------------\\nWe observe that when the pruned ratio is large enough, training from scratch is better by a even larger margin than fine-tuning. 
\\n\\nAfter the submission, we have run more experiments on non-structured pruning [5] with ImageNet, the results are shown below. We found in some cases training from scratch cannot match the accuracy of fine-tuning.\\n\\n-----------------------------------------------------------------------------------------------------\\n Dataset Model Ratio Fine-tuned Scratch-E Scratch-B\\n-----------------------------------------------------------------------------------------------------\\nImageNet VGG-16 30% 73.68 72.75 74.02\\nImageNet VGG-16 60% 73.63 71.50 73.42\\nImageNet ResNet-50 30% 76.06 74.77 75.70\\nImageNet ResNet-50 60% 76.09 73.69 74.91\\n--------------------------------------------------------------------------------------------------\\n\\nThis is possibly due to the fine pruning granularity of non-structured pruning and the task complexity of ImageNet. We further explored the change of weight distribution with non-structured pruning, which could be a reason too. (Refer to Section 1 in this anonymous link for more details, https://drive.google.com/open?id=1BjGJQASV-CuGoq-nVErIRihHMdVwCZxl ) We will include this result and discussion in the revision.\"}",
"{\"title\": \"Response to AnonReviewer1 [3/4]\", \"comment\": \"7. #Fine-tuning is faster# It is true that fine-tuning is faster than training from scratch, and our further explanation is similar to that for point 2.\\n\\n## Smaller variance for fine-tuning ##\\nThe observation about variance is interesting. It seems that this point is most obvious for ResNet-110 in Table 1 (Section 4.1, L1-norm filter pruning), while not so apparent for other methods and models. To investigate whether this is a coincidence or it implies something deeper, we have rerun the experiments for L1-norm pruning on ResNet-110. Here are the results:\\n-------------------------------------------------------------------------------------------------\\nPruned Model Baseline Fine-tuned Scratch-E Scratch-B\\n-------------------------------------------------------------------------------------------------\\nResNet-110-A 93.56(\\u00b10.19) 93.41(\\u00b10.20) 93.06(\\u00b10.20) 93.34(\\u00b10.21)\\nResNet-110-B 93.56(\\u00b10.19) 93.03(\\u00b10.21) 92.72(\\u00b10.18) 93.64(\\u00b10.22)\\n-------------------------------------------------------------------------------------------------\\nIt can be seen that the variance of fine-tuned models\\u2019 accuracy is now at the same level with scratch trained models. Thus we think the result of different levels of variance for ResNet-110 in Table 1 might be a coincidence. \\n\\nThe variance result (5 instances) on ImageNet is very expensive to run (can take up to 2 weeks on an 8-GPU machine for one model, previous image classification works [8, 9] also rarely report variance on ImageNet). We will let you know when we have results.\\n\\nIn our experiments, for fine-tuning, the 5 accuracies which we report mean and std, are from 5 different large models, instead of fine-tuning 5 times from the same large model. However, if we fine-tune the *same* pruned model for 5 times, the variance of the final models\\u2019 accuracy is indeed smaller (results shown in the table below). The standard deviations are all less than 0.1, in contrast to ~0.2 for those fine-tuned from different large models. This is intuitive given that the same pruned weights are used as initialization and relatively small learning rate is used. \\n-----------------------------------------------------------------------------------------------------------------\\nPruned Model Model-1 Model-2 Model-3 Model-4 Model-5\\n------------------------------------------------------------------------------------------------------------------\\nResNet-110-A 93.10(\\u00b10.06) 93.04(\\u00b10.03) 92.95(\\u00b10.07) 93.48(\\u00b10.04) 93.13(\\u00b10.07)\\nResNet-110-B 92.85(\\u00b10.08) 92.60(\\u00b10.05) 92.76(\\u00b10.09) 92.98(\\u00b10.10) 92.64(\\u00b10.07)\\n------------------------------------------------------------------------------------------------------------------\\n\\n8. ## Hyperparameters for training/fine-tuning ## We list the hyperparameters used in our experiments below.\\n\\nOn CIFAR, the initial learning rate is 0.1, weight decay is 0.0001 and batch size is 64. We train for 160 epochs and the learning rate is dropped by 0.1 at 80 and 120 epochs. We use SGD with momentum 0.9. Fine-tuning is 40 epochs and using learning rate 0.001, with other settings the same as training.\\n\\nOn ImageNet, the initial learning rate is 0.1, weight decay is 0.0001 and batch size is 256. We train for 90 epochs and the learning rate is dropped by 0.1 at 30 and 60 epochs. We use SGD with momentum 0.9. 
Fine-tuning uses 20 epochs with learning rate 0.001, with other settings the same as training.\\n\\nBoth settings are very close to the original paper of L1-norm filter pruning, except that we use 160 instead of 164 epochs for training, and use batch size 64 instead of 128. However, the hyperparameter settings (including batch size) are consistent for training the large model, fine-tuning and Scratch-E/B. If smaller batch size leads to better results, it will benefit the large model training and fine-tuning as well, so we believe the comparison is fair. We also have run the experiments using batch size 128 for all training, and the results are below:\\n----------------------------------------------------------------------------------------------------\\nPruned Model Baseline Fine-tuned Scratch-E Scratch-B\\n----------------------------------------------------------------------------------------------------\\nResNet-56-A 92.26(\\u00b10.23) 92.18(\\u00b10.34) 92.65(\\u00b10.24) 92.63(\\u00b10.26)\\nResNet-56-B 92.26(\\u00b10.23) 91.82(\\u00b10.21) 91.85(\\u00b10.22) 92.70(\\u00b10.29)\\n----------------------------------------------------------------------------------------------------\\nWe observe that when the batch size is 128, training from scratch is still at least on par with fine-tuning.\"}",
"{\"title\": \"Response to AnonReviewer1 [4/4]\", \"comment\": \"For fine-tuning epochs, we have run experiments to show that more epochs don\\u2019t improve fine-tuning noticeably. This is because very small learning rate is used to preserve the inherited weights, as also mentioned in point 5. The results are as follows:\\n------------------------------------------------------------------------------------------\\nPruned Model Fine-tune-40 Fine-tune-80 Fine-tune-160\\n------------------------------------------------------------------------------------------\\nVGG-16 93.40(\\u00b10.12) 93.45(\\u00b10.06) 93.45(\\u00b10.08)\\nResNet-56-A 92.97(\\u00b10.17) 92.92(\\u00b10.15) 92.94(\\u00b10.16)\\nResNet-56-B 92.68(\\u00b10.19) 92.67(\\u00b10.14) 92.76(\\u00b10.16)\\nResNet-110-A 93.14(\\u00b10.16) 93.12(\\u00b10.19) 93.04(\\u00b10.22)\\nResNet-110-B 92.69(\\u00b10.09) 92.75(\\u00b10.15) 92.76(\\u00b10.16)\\n------------------------------------------------------------------------------------------\\nIt can be seen that fine-tuning for more epochs gives negligible accuracy increase and sometimes small decrease.\\n\\n9. ## Conclusion of Section 5 ## Yes, we agree with your points in 3&4, and our experiments in Section 4 and 5 are to verify these points. We also agree that the conclusion of Section 5 may seem straightforward to some audience, but we think pruning is not very widely recognized as architecture search. Conventional network pruning and architecture search works still use totally different techniques, with the former focus on selecting important weights from a larger network and the later typically uses reinforcement learning or evolutionary algorithms to search an architecture through iterations. Pruning is usually mentioned as a model compression technique, in a resource-saving context, instead of being treated as an architecture search method. \\n\\nTo our knowledge, our work is one of the first to draw a distinction between predefined and automatic pruning methods, and also one of the first to compare automatically pruned architecture with uniform pruning. Previous works compare pruned and fine-tuned model with the original large model, this is not sufficient to for \\\"pruning can be seen as architecture search\\\": 1. The benefit could be from the inherited weights, not the architecture. 2. Comparison with uniform pruning is missing. In Section 4, we show that the performance of training from scratch can be on par with pruning, however, comparison with uniform pruning is still needed. In Section 5, we break the tie between inherited weights and the resulting architecture, training the pruned architecture from scratch and comparing with uniform pruning. This provides further evidence that the value lies in searching efficient architecture.\\n\\nAlso, in Section 5, we have shown that we can transfer the sparsity pattern in the pruned model to a different architecture on a different dataset. This implies that in these cases, we don\\u2019t need to train a large model on the target dataset to find the efficient model and transferred design patterns can help us design an efficient model from scratch. This experiment was also not investigated in prior works and the conclusion is not obvious. We will include more results on this point in the revision.\\n\\nWe will also add some discussions about the differences and similarities between pruning as architecture and conventional architecture search. 
We will mention that some previous works have made connections between pruning and architecture search as well.\\n\\nThe main reason why we didn\\u2019t include figures with FLOPs as x-axis is mainly the space limit. We have included figures with FLOPs as x-axis in Section 2 of this anonymous pdf link ( https://drive.google.com/open?id=1BjGJQASV-CuGoq-nVErIRihHMdVwCZxl ).\", \"minor\": \"Thank you for reminding, we will change the name to \\\"filter pruning\\\" as suggested.\\n\\nWe will upload a revision after we address other reviewers' concerns, and please advise on which part of this response you would like to see to be reflected in the revision (other than the content we already plan to include). Again, thank you for your detailed review. If you have any further questions, we are happy to answer.\\n\\n\\n[1] Channel Pruning for Accelerating Very Deep Neural Networks. He et al., ICCV 2017.\\n[2] ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression. Luo et al., ICCV 2017.\\n[3] Learning Efficient Convolutional Networks through Network Slimming. Liu et al., ICCV 2017.\\n[4] Pruning Filters for Efficient ConvNets. Li et al., ICLR 2017.\\n[5] Learning both Weights and Connections for Efficient Neural Networks. Han et al., NIPS 2015.\\n[6] Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. Han et al., ICLR 2016.\\n[7] Data-Driven Sparse Structure Selection for Deep Neural Networks. Huang et al., ECCV 2018.\\n[8] Deep Residual Learning for Image Recognition. He et al., CVPR 2016.\\n[9] Densely Connected Convolutional Networks. Liu et al., CVPR 2017.\"}",
"{\"comment\": \"I also support your opinion that a main problem is that \\\"some models are not sufficiently trained\\\", as a result it is not sufficient to reach the conclusion.\", \"title\": \"support your argument\"}",
"{\"comment\": \"1. If those sanity check results are not recorded, is it still safe to draw the conclusions from the existing correct experiments? https://openreview.net/forum?id=rJlnB3C5Ym¬eId=Bklxok8M6m¬eId=HklijvuWaX\\n2. \\\"Some ImageNet models are not sufficiently trained\\\" is a common sense. I don't think the argument: \\\"those papers should be corrected instead of ours\\\" is solid.\\n3. I believe this is a new topic. The previous topic is about your incorrect results of channel pruning.\", \"title\": \"It would be better to add those sanity check results before drawing conclusions\"}",
"{\"title\": \"Disagree; will not further reply to avoid distraction\", \"comment\": \"We have stated that if we extend both epochs of large model training and Scratch-B to be more than the standard number of epochs, the same observations still hold. Further, the same number of epochs in image classification literature [1, 2] and your own paper [3] are used in our experiment, for training the large model. If the number of epochs is not enough for \\\"converging\\\", those papers should be corrected instead of ours.\", \"the_commenter_also_seems_to_misuse_the_comment_function_of_openreview\": \"the topic is from a discussion in a previous thread (https://openreview.net/forum?id=rJlnB3C5Ym¬eId=Bylhl2NecQ¬eId=Bylhl2NecQ , mentioned at his start), so a new thread should not be opened to avoid distracting other readers from official reviews. To avoid further distraction, we will not reply to this thread anymore.\\n\\n[1] Deep Residual Learning for Image Recognition. He et al., CVPR 2016.\\n[2] Densely Connected Convolutional Networks. Huang et al., CVPR 2017.\\n[3] Channel Pruning for Accelerating Very Deep Neural Networks. He et al., ICCV 2017.\"}",
"{\"comment\": \"\\\"the fact that pruned models take less budget to train for each epoch.\\\" is understood.\\nBut you still ignore the important fact that Scratch-B is better because of better convergence. \\nHow can it be a valid baseline when other models you compared didn't even converge?\", \"title\": \"Scratch-B is not a valid baseline\"}",
"{\"title\": \"Scratch-B is a valid baseline\", \"comment\": \"Thanks for your explanation. More epochs of Scratch-B come from the fact that pruned models take less budget to train for each epoch as we mentioned in the paper. If a 2x schedule large model is used, scratch-B should be further extended to ensure a fair comparison. In our opinion, Scratch-B is a valid baseline, especially for predefined pruning methods.\\n\\nThe experiment mentioned in the last reply is a sanity check and we didn't record the full results in precise number, so it needs rerunning. Currently, in the rebuttal period, we're giving higher priority to experiments which address official reviewers' concerns, and we will let you know the results when we have the resource to run that experiment.\"}",
"{\"comment\": \"Thanks for the reply.\\n The point is that some ImageNet models are not sufficiently trained. That\\u2019s why scratch-B with longer training schedule performs better. \\n Furthermore, pruning methods try to approximate the original model instead of optimizing the final accuracy. It\\u2019s unfair for pruning methods to approximate a worse model when compared with scratch-B. So 2x model should be used for pruning for fair comparison with scratch-B. \\n Therefore it is hard to believe that scratched models are better with experiments of scratch-E. \\n\\nFor the second paragraph, please show your detailed experiments.\", \"title\": \"My point is not about computational budget\"}",
"{\"title\": \"Unfair comparison between large model 2x schedule and current Scratch-B results\", \"comment\": \"Thank for your experiment results. However, \\\"Scratch-B\\\" means training the pruned model using the same computation budget as training the large unpruned model. If you extend the epochs for training the large unpruned model, the epochs for Scratch-B should also be extended, for it to still be \\\"Scratch-B(udget)\\\". Otherwise, the budgets are not equal any longer. Moreover, for a fair comparison, the fine-tuning result also needs to be based on this new unpruned model which is trained longer. Thus the results are not directly comparable here.\\n\\nFor some pruning methods, we've tried to extend the epochs for both training the large unpruned model and Scratch-E/B, and we found our observations still hold (Scratch-B can match the accuracy of fine-tuning from a large model). But to keep our results comparable with existing literature, we use the standard training epochs for the large model, based on which we determine the epochs for Scratch-E/B, for the experiment results presented in our paper.\"}",
"{\"comment\": \"I conducted an experiment on VGG-16 ImageNet to verify my point https://openreview.net/forum?id=rJlnB3C5Ym¬eId=Bylhl2NecQ¬eId=Bylhl2NecQ\\nLonger training schedule matters, but not training from scratch. \\n\\nAn unpruned VGG-16 was trained from scratch with 2x schedule using the authors' code. So far it only finished 132 epoches, but the accuracy already reached 74.5% (v.s. 71.5% 1x schedule).\\n(The 2x schedule follows Scratch-B, namely learning rate decay happens at 60, 120 epoches. 1x schedule is the normal 90 epoches training schedule.)\\n\\nLet's put the correct accuracy in Table 3:\\n schedule unpruned VGG-16-5x\\nFine-tuned 1x 71.03 \\u22122.67\\nScratch-E 1x 71.51 \\u22123.46\\nScratch-B 2x 74.5 \\u22123.51\", \"we_can_observed_that\": \"1. scratch-B mainly benefits from longer training schedule.\\n2. the accuracy drop is larger than Fine-tuned method. \\nTherefore \\\"scratch-trained models are better than the fine-tuned models\\\" does not hold for channel pruning.\\nSimilarly in Table 2, Scratch-B might be worse than ThiNet.\\n\\nI also planned to conduct 2x schedule experiments on ResNet, unfortunately I don't have the computational resources for that.\", \"title\": \"conclusions drawn from Scratch-B need rethinking\"}",
"{\"title\": \"Different aspects of over-parameterization\", \"comment\": \"Thank you for your question, and we appreciate Reviewer 3's effort for his prompt and correct explanations. As Reviewer 3 pointed out, the assumption in [1] is that the neural network only consists of linear layers. More importantly, further explaining his/her second point, [1]'s focus is on over-parameterization's effect on accelerating convergence, while our focus is on its necessity for obtaining a final efficient model (provided that we already know the architecture of the final model), i.e., whether we need to train an over-parameterized model first to obtain a final efficient model. They are from different aspects, so the conclusions are not contradictory.\\n\\nAs another point, we conducted extensive experiments (multiple pruning methods, datasets, models, pruning ratios, tasks) using standard hyperparameters (same with the standards in the image classification literature [2, 3] and the hyperparameters used in original papers of these pruning methods), so it might be unfair to say that the conclusions are from \\\"several small experiments\\\", and the conclusions are inappropriate \\\"because these results may largely depend on the hyper-parameters you set.\\\"\\n\\n[1] On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization. ICML 2018.\\n[2] Deep Residual Learning for Image Recognition. CVPR 2016.\\n[3] Densely Connected Convolutional Networks. CVPR 2017.\"}",
"{\"title\": \"Better\", \"comment\": \"The edited comment is more suitable. Now that we can put issues of propriety aside, it's not clear to me that the Arora paper is as contradictory as you claim. 1) Their theoretical analysis is based on a large number of assumptions that do not hold here, including *linear* neural networks. 2) They are not analyzing sparse networks with learned connectivity patterns as these authors are.\\n\\nCorrect me if you're wrong, but I believe that the perceived contradiction here owes to a shallow reading of both papers.\"}",
"{\"comment\": \"Hi,\\nI have revised the comments and I think my question is fair and reasonable enough now.\", \"title\": \"Response\"}",
"{\"title\": \"Inappropriate\", \"comment\": \"This comment is inappropriate and an abuse of the anonymous commenting system. First, do not try to strengthen your point with 4 exclamation marks. This is a peer reviewing institution, not Reddit. Second, \\\"such a sloppy claim\\\" is a not a professional way to speak to your colleagues.\\n\\nThird, ~\\\"there is a theoretical paper at ICML that claims something which superficially seems to disagree with you\\\" does not provide clear evidence as to who if anyone is wrong or in what way.\\n\\nIf you would like to edit your comment to provide a more thoughtful analysis of both claims and how you would like to see the paper revised, please do. Else I will ask the Area Chair to purge this comment.\"}",
"{\"comment\": \"Dear authors,\\n\\nThere is a theoretical paper named \\\"On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization\\\" in ICML2018 have proved that over-parameterization can accelerate the convergence of a deep neural network, which is just in contrast to your main claim.\\n\\nI think it is not appropriate to give such a claim from several small experiments because these results may largely depend on the hyper-parameters you set.\", \"title\": \"Doubt on the main argument \\\"training an over-parameterized model is not necessary to obtain an efficient final model\\\"\"}",
"{\"title\": \"Interesting results, but not sure they generalize to any pruning approach\", \"review\": \"This paper shows through a set of experiments that the common belief that a large neural network trained, then pruned and fine-tuned performs better than another network that has the same size of the pruned one, but trained from scratch, is actually false. That is, a pruned network does not perform better than a network with the same dimensions but trained from scratch. Also, the authors consider that what is important for good performance is to know how many weights/filters are needed at each layer, while the actual values of the weights do not matter. Then, what happens in a standard large neural network training can be seen as an architecture search, in which the algorithm learns what is the right amount of weights for each layer.\", \"pros\": [\"If these results are generally true, then, most of the pruning techniques are not really needed. This is an important result.\", \"If these results hold, there is no need for training larger models and prune them. Best results can be obtained by training from scratch the right architecture.\", \"the intuition that the neural network pruning is actually performing architecture search is quite interesting.\"], \"cons\": [\"It is still difficult to believe that most of the previous work and previous experiments (as in Zhu & Gupta 2018) are faulty.\", \"Another paper with opposing results is [1]. There the authors have an explicit control experiment in which they evaluate the training of a pruned network with random initialization and obtain worse performance than when pruned and pruned and retrained with the correct initialization.\", \"Soft pruning techniques as [2] obtain even better results than the original network. These approaches are not considered in the analysis. For instance, in their tab. 1, ResNet-56 pruned 30% obtained a gain of 0.19% while your ResNet-50 pruned 30% obtains a loss of 4.56 from tab. 2. This is a significant difference in performance.\"], \"global_evaluation\": \"In general, the paper is well written and give good insides about pruning techniques. However, considering the vast literature that contradicts this paper results, it is not easy to understand which results to believe. It would be useful to see if the authors can obtain good results without pruning also on the control experiment in [1]. Finally, it seems that the proposed method is worse than soft pruning. In soft pruning, we do not gain in training speed, but if the main objective is performance, it is a very relevant result and makes the claims of the paper weaker.\", \"additional_comments\": \"- top pag.4: \\\"in practice, we found that increasing the training epochs within a reasonable range is rarely harmful\\\". If you use early stopping results should not be affected by the number of training epochs (if trained until convergence).\\n\\n[1] The Lottery Ticket Hypothesis: Finding Small, Trainable Neural Networks, Jonathan Frankle, Michael Carbin, arXiv2018\\n[2] Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks, Yang He, Guoliang Kang, Xuanyi Dong, Yanwei Fu, Yi Yang, arXiv 2018\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thanks\", \"comment\": \"Hi Brendan,\\n\\nThanks for your comment! The \\\"sharing/inheriting weights\\\", and \\\"investigate whether training from scratch would sometimes yield better results\\\" we mentioned, are for the training during the search process (for accelerating convergence), not for training the final discovered model. Thanks for bringing this into our attention, we will try to make this more clear in the revision.\"}",
"{\"title\": \"The primary claim is not surprising, but an exciting result is buried at the end\", \"review\": \"This paper proposes to investigate recent popular approaches to pruning networks, which have roots in works by Lecun \\u201890, and are mostly rooted in a recent series of papers by Song Han (2015-2016). The methods proposed in these papers consist of the following pipeline: (i) train a neural network, (ii) then prune the weights, typically by trimming the those connections corresponding to weights with lowest magnitude, (iii) fine tune the resulting sparsely-connected neural network.\\n\\nThe authors of the present work assert that traditionally, \\u201ceach of the three stages is considered as indispensable\\u201d. The authors go on to investigate the contribution of each step to the overall pipeline. Among their findings, they report that fine-tuning appears no better than training the resulting pruned network from scratch. The assertion then is that the important aspect of pruning is not that it identifies the \\u201cimportant weights\\u201d but rather that it identifies a useful sparse architecture.\\n\\nOne problem here is that the authors may overstate the extent to which previous papers emphasize the fine-tuning, and they may understate the extent to which previous papers emphasize the learning of the architecture. Re-reading Han 2015, it seems clear enough that the key point is \\u201clearning the connections\\u201d (it\\u2019s right there in the title) and that the \\u201cimportant weights\\u201d are a means to achieve this end. Moreover the authors may miss the actual point of fine-tuning. The chief benefit of fine-tuning is that it is faster than training from scratch at each round of retraining, so that even if it achieves the same performance as training from scratch, that\\u2019s still a key benefit.\\n\\nIn general, when making claims about other people\\u2019s beliefs, the authors need to provide citations. References are not just about credit attribution but also about providing evidence and here that evidence is missing. I\\u2019d like to see sweeping statements like \\u201cThis is\\nusually reported to be superior to directly training a smaller network from scratch\\u201d supported by precise references, perhaps even a quote, to spare the reader some time. \\n\\nTo this reader, the most interesting finding in the paper by far is surprisingly understated in the abstract and introduction, buried at the end of the paper. Here, the authors investigate what are the properties of the resulting sparse architectures that make them useful. They find that by looking at convolutional kernels from pruned architectures, they can obtain for each connection, a probability that a connection is \\u201ckept\\u201d. Using these probabilities, they can create new sparse architectures that match the sparsity pattern of the pruned architectures, a technique that they call \\u201cguided sparsification\\u201d. The method yields similar benefits to pruning. Note that while obtaining the sparsity patterns does require running a pruning algorithm in the first place, ***the learned sparsity patterns generalize well across architectures and datasets***. This result is interesting and useful, and to my knowledge novel. I think the authors should go deeper here, investigating the idea on yet more datasets and architectures (ImageNet would be nice). I also think that this result should be given greater emphasis and raised to the level of a major focal point of the paper. 
With convincing results and some hard-work to reshape the narrative to support this more important finding, I will consider revising my score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"comment\": \"Hi, thanks for the interesting investigation, and releasing the code.\\n\\nI believe there might be a slight mistake in the last paragraph of Section 5, where it says \\\"it would be interesting to investigate whether training from scratch would sometimes yield better results\\\". Actually, (Pham et al., 2018) and (Liu et al., 2018b) do retrain their discovered architectures from scratch, and both papers say this is important for them to achieve their published results.\", \"edit\": \"Sorry, I realized that this paragraph seems to appear only in the arxiv version of your paper.\", \"title\": \"Section 5 W.r.t. Comparison with ENAS, DARTS\"}",
"{\"title\": \"interesting paper, more in-depth analysis to support their findings would be better.\", \"review\": \"This paper reinvestigate several recent works on network pruning and find that the common belief about the necessity to train a large network before pruning may not hold. The authors find that training the pruned model from scratch can achieve similar, if not better, performance given enough time of training. Based on these observations, the author conclude that training a larger model followed by pruning is not necessary for obtaining an efficient model with similar performance. In other words, the pruned architecture is more important than the weights inherited from the large model. It reminds researchers to perform stronger baselines before showing complex pruning methods.\\n\\nThe paper is well organized and written. It re-evaluate the recent progresses made on this topic. Instead of comparing approaches by simply using the numbers from previous paper, the authors perform extensive experiments to verify whether training the pruned network from scratch would work. The results are very interesting, it suggests the researchers to tune the baseline \\u201chardly\\u201d and stick to simple approach. However, here are some places that I have concerns with:\\n\\n1. The two \\u201ccommon beliefs\\u201d actually state one thing, that is the weights of a pre-trained larger model can potentially help optimization for a smaller model. \\n\\n2. I don\\u2019t quite agree with that \\u201ctraining\\u201d is the first step of a pruning pipeline as illustrated in Figure 1. Actually the motivation or the common assumption for pruning is that there are already existing trained models (training is already finished) with good performance. If a trained model does not even exist, then one can certainly train various thin/smaller model from scratch as before, this is still a trial and error process. \\n\\n3. \\u201cThe value of pruning\\u201d. The goal of pruning is to explore a \\u201cthin\\u201d or \\u201cshallower\\u201d version of it with similar accuracy while avoiding the exhaustive architecture search with heavy training processes. Thus the first value of pruning is to explore efficient architecture while avoiding heavy training. Therefore, it should be fast and efficient, ideally with no retraining or little fine-tuning. When the pruning method is too complex to implement or requires much more time than training from scratch, it could be an overkill and adds little value, especially when the performance is not better enough. Therefore, it is more informative if the authors would report the time/complexities for pruning/fine-tuning .\\n\\n4. The second value of pruning lies at understand the redundancy of the model and providing insights for more efficient architecture designs. \\n\\n5. Comparing to random initialization, pruning simply provide an initialization point inherited from the larger network. The essential question the author asked is whether a subset of pre-trained weights can outperform random initialization. This seems to be a common belief in transfer learning, knowledge distillation and the studies on initialization. The authors conclude that the accuracy of an architecture is determined by the architecture itself, but not the initialization. If this is true, training from scratch should have similar (but not better) result as fine-tuning a pruned model. As the inherited weights can also be viewed as a \\u201crandom\\u201d initialization. 
Both methods should reach equivalent good solution if they are trained with enough number of epochs. Can this be verified with experiments?\\n\\n6. The experiments might not be enough to reject the common belief. The experiments only spoke that the pruned architectures can still be easily trained and encounter no difficulties during the optimization. One conjecture is that the pruned models in the previous work still have enough capacity for keeping good accuracy. What if the models are significantly pruned (say more than 70% of channels got pruned), is training from scratch still working well? It would add much value if the author can identify when training from scratch fails to match the performance obtained by pruning and fine-tuning.\\n\\n7. In Section 4.1, \\u201cscratch-trained models achieve at least the same level of accuracy as fine-tuned models\\u201d. First, the ResNet-34-pruned A/B for this comparison does not have significant FLOPs reduction (10% and 24% FLOPs reduction). Fine-tuning still has advantage as it only takes \\u00bc of training time compare to scratch-E. Second, it is interesting that fine-tuning has generally smaller variance than stratch-E (except VGG-19). Would this imply that fine-tuning a pruned model produce more stable result? It would be more complete if there is variance analysis for the imagenet result. \\n\\n8. What is the training/fine-tuning hyperparameters used in section 4.1? Note that in the experiment of Li et al, 2017, scratch-E takes 164 epochs to train from scratch, while fine-tuning takes only 40 epochs. Like suggested above, if we fine-tune it with more epochs, would it achieve equivalent performance? Also, what is the hyperparameter used in scratch-E? Note that the original paper use batch size 128. If the authors adopts a smaller batch-size for scratch-E, then it has in more iterations and could certainly result in better performance according to recent belief that small batch-size generates better.\\n\\n9. The conclusion of section 5 is not quite clear or novel. Using uniform pruning ratio for pruning is expected to perform worse than automatic pruning methods as it does not consider the importance difference of each layer and. This comes back to my point 3 & 4 about the value of pruning, that is the value of pruning lies at the analysis of the redundancy of the network. There are a number of works worked on analyzing the importance of different layers of filters. So I think the \\u201chypothesis\\u201d of \\u201cthe value of automatic pruning methods actually lies in the resulting architecture rather than the inherited weight\\u201d is kind of straightforward. Also, why not use FLOPs as x-axis in Figure 3?\", \"minor\": \"It might be more accurate to use \\u201cL1-norm based Filter Pruning (Li et al., 2017)\\u201d as literally \\u201cchannels\\u201d usually refers to feature maps, which are by-products of the model but not the model itself.\\n\\nI will revise my score if authors can address above concerns.\\n\\n\\n--------- review after rebuttal----------\\n#1#2 It would be great if the authors can make it clear that training is not the always the first step and the value of pruning in introduction rather than mentioning in conclusion. Saving training time is still an important factor when training from scratch is expensive. \\n\\n#5 \\u201cfine-tuning with enough epochs\\u201d. \\nI understand that the authors are mainly questioning about whether training from scratch is necessarily bad than pruning and fine-tuning. 
The author do find that \\u201ctraining from scratch is better when the number of epochs is large enough\\u201d. But we see that fine-tuning ResNet-56 A/B with 20 epochs does outperform (or is equivalent to) scratch training for the first 160 epochs, which validates \\u201cfine-tuning is faster to converge\\u201d. However, training 320 epochs (16x more comparing to 20 epochs fine-tuning and 2x comparing with normal training from scratch) is not quite coherent with the setting of \\u201cscratch B\\u201d, as ResNet-56 B just reduce 27% FLOPs. \\n\\nThe other part of the question is still unclear, i.e., the author claimed that the accuracy of an architecture is determined by the architecture itself, but not the initialization, then both fine-tuning and scratch training should reach equivalent solution if they are well trained enough, regardless of the initialization or pruning method. The learning rate for scratch training is already well known (learning rate drop brings boost the accuracy). However, learning rate schedule for fine-tuning (especially for significantly pruned model as for reply#6) is not well explored. I wonder whether that a carefully tuned learning rate/hyperparameters for fine-tuning may get the same or better performance as scratch training.\", \"questions\": \"- Are both methods using the same learning rate schedule between epoch 160 and epoch 320?\\n- The ResNets-56 A/B results in the reply#8 does not match the reported performance in reply#5. e.g., it shows 92.67(0.09) for ResNet-56-B with 40-epochs fine-tuning in reply5, but it turns out to be 92.68(\\u00b10.19) in reply#8.\\n- It would be great if the authors can add convergence curves for fine-tuning and scratch training for easier comparison.\\n\\n\\n#6 The failure case for sparse pruning on ImageNet is interesting and it would be great to have the imageNet result reported and discussed. \\n\\nThe authors find that \\u201cwhen the pruned ratio is large enough, training from scratch is better by a even larger margin than fine-tuning\\u201d. This could be due to following reasons: \\n 1. When the pruning ratio is large, the pruned model with preserved weights is significantly different from the original model, and fine-tuning with small learning rate and limited number of epochs is not enough to recover the accuracy. As mentioned earlier, tuning the hyperparameters for fine-tuning based on pruning ratio might improve the performance of fine-tuning. \\n 2. Though the pruning ratio is large, the model used in this experiment may still have large capacity to reach good performance. How about pruning ResNet-56 with significant pruning ratios? \\n\\nFinally, based on above observations, it seems to me that the preserved weights is more essential for fast fine-tuning but less useful for significant pruning ratios.\\n\\n-------- update ----------------\\n\\nThe authors addressed most of my concerns. Some questions are still remaining in my comment \\u201cReview after rebuttal\\u201d, specifically, fine-tuning a pruned network may still get good performance if the hyperparameters are carefully tuned based on the pruning ratios, or in other words, the preserved weights is more essential for fast fine-tuning but less useful for significant pruning ratios. The authors may need to carefully made the conclusion from the observations. 
I would hope the authors can address these concerns in the future version.\\n\\nHowever, I think the paper is overall well-written and existing content is inspiring enough for readers to further explore the trainability of the pruned network. Therefore I raised my score to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Authors' Reply\", \"comment\": \"Thank you for your reply, and we are happy to explain further.\\n\\n1. \\\"If the paper just focuses on a special type of pruning algorithm (the typical PIPE1), the conclusion makes sense.\\\"\\n\\nThe phrase \\\"special type\\\" is misleading: this pipeline is of dominant popularity in the network pruning literature, and is the most general one, as we have already mentioned in our last response.\\n\\n2. \\\"To avoid misunderstanding, it is better to add a proper constraint to the conclusion of the paper. For example: 1. In the Abstract Section, 'For pruning algorithms which assume a predefined architecture of the target pruned network, one can completely get rid of the pipeline and directly train the target network from scratch'. PIPE2 may not be included in the range of the \\u201cpruning algorithm\\u201d in this sentence.\\\"\\n\\nWe don't think the mentioned sentence in the abstract will cause misunderstanding. Here the \\\"pipeline\\\" clearly refers to the typical pipeline described in the second sentence of our abstract. SFP does not fall into this, since in SFP pruning happens with training. \\n\\n3. \\\"In the Background Section, 'Fine-tuning the pruned model with inherited weights is no better than training it from scratch.' Because for PIPE2, utilizing inherited weights is still better than training from scratch.\\\"\\n\\na) We stated in our last response that our \\\"training from scratch\\\" is different from the \\\"training from scratch\\\" in your SFP paper: ours means \\\"training pruned model from scratch\\\", yours means \\\"training large model with SFP from scratch\\\". b) In your paper, utilizing the pretrained weights can be helpful, but at the same time it consumes more computation budget (pretraining + fine-tuning), thus the \\\"training from scratch\\\" baseline in your paper should be trained for more epochs for a fair comparison. The advantage of \\\"utilizing pretrained weights\\\" over \\\"training from scratch\\\" (in your paper) could be merely due to more epochs are trained or more computation budget is used. c) This statement is for the typical pipeline, not for SFP, as well as all other statements/conclusions in our paper. Considering a), b) and c), we think your result is not against our statement.\\n\\n4. We will include references to the mentioned variations of pruning, and we agree that investigating whether similar conclusions hold on SFP is interesting future work. Thanks for your suggestion.\"}",
"{\"comment\": \"Thanks for your feedback.\\nIf the paper just focuses on a special type of pruning algorithm (the typical PIPE1), the conclusion makes sense.\\n\\nTo avoid misunderstanding, it is better to add a proper constraint to the conclusion of the paper. For example:\\n1. In the Abstract Section, \\\"For pruning algorithms which assume a predefined architecture of the target pruned network, one can completely get rid of the pipeline and directly train the target network from scratch\\\". PIPE2 may not be included in the range of the \\u201cpruning algorithm\\u201d in this sentence. \\n2. In the Background Section, \\u201cFine-tuning the pruned model with inherited weights is no better than training it from scratch.\\u201d Because for PIPE2, utilizing inherited weights is still better than training from scratch.\\n\\nFor Background, adding some discussions of the variants of the *typical* three-stage pipeline [1,3,8,10,11] is great, we are looking forward to seeing the revision.\\n\\nWe agree with the authors that \\u201cwe cannot exhaustively experiment on all variations of pruning methods\\u201d, the authors do not need to cut the first two stage of every variation of pruning methods and run the experiment. We think the result comparison is enough and also necessary for two reasons.\\n\\n1. In fact, TSFS, PIPE2 [1,3], and others [10,11] all reconsider PIPE1, but on different aspects. The proposed TSFS consider the initialization of PIPE1 (or the necessity of first two stage of the PIPE1), while PIPE2 is focused on the scheme of training and pruning of PIPE1 (or the necessity of maintaining the network connection of the PIPE1). Therefore, we recommend comparing with those variants to show the importance of the components of the PIPE1, which we believe is important for the community.\\n2. [1] achieves a better result than [4,5,6] (maybe state-of-the-art), it would be more convincing if the comparison is revealed in the revision. PIPE2 [1] was adopted much less often than PIPE1 [4,5,6] because PIPE2 [1] is a rather new method.\", \"title\": \"the second round reply from reviewers\"}",
"{\"title\": \"The DenseNets we evaluated are more compact than MobileNets\", \"comment\": \"Dear Ting-Wu,\\n\\nThanks for your comment and question! It is really a important point and we will explain below.\\n\\nThe DenseNet-40, and DenseNet-BC-100 we evaluated in our experiments, are actually more compact than MobileNet. It can be seen from the table below. The results are all trained by 160 epochs using standard hyperparameters.\\n\\n--------------------------------------------------------------------\\n Model Accuracy (CIFAR-10) Parameters\\n--------------------------------------------------------------------\\nDenseNet-40 94.10\\u00b10.12 1.0M\\nDenseNet-BC-100 95.24\\u00b10.17 0.8M\\nMobileNet_v2 93.67\\u00b10.10 1.1M\\n--------------------------------------------------------------------\\n(MoibleNet_v2 is adopted from [1])\\nOur observations on these DenseNets are consistent with other bigger networks, so it can be argued that our observation hold on relatively compact models. We agree that it would be helpful to include results on MobileNet, and we will let you know when we have results.\\n\\nThe reason why you got worse results on pruned MobileNet when training from scratch, may be that you use Scratch-E rather than Scratch-B. In our experiments, we found using Scratch-B is rather important for extremely small or aggressively pruned models, since it costs significantly less computation than training the large model for the same epochs. This can be seen from our discussion on the VGG-Tiny model in the paper:\\n\\\"The only exception is Scratch-E for VGG-Tiny, where the model is pruned very aggressively from VGG-16 (FLOPs reduced by 15\\u00d7), and as a result, drastically reducing the training budget for Scratch-E. The training budget of Scratch-B for this model is also 7 times smaller than the original large model, yet it can achieve the same level of accuracy as the fine-tuned model.\\\" \\n\\nAlso, we suggest not to compare the scratch epochs to fine-tuning epochs directly (200 vs 60 in your comment), as fine-tuning is based on a pretrained large model which is trained possibly using more computation budget. \\n\\nThanks for providing your reference, we will include it into our discussion in the revision.\\n\\n[1] https://github.com/kuangliu/pytorch-cifar/blob/master/models/mobilenetv2.py\"}",
"{\"comment\": \"Dear authors,\\n\\nThis work is really interesting. I'm wondering if the same conclusion holds for a much more compact network such as MobileNet or SqueezeNet. I'm thinking along this line since I have done some experiments previously of MobileNetV2 on CIFAR-10 and I found that training from scratch is worse than training from the pruned model by more than 1% with the same architecture and longer time (200 epochs vs 60 epochs). I find this interesting since I do not observe this for ResNet-56 on CIFAR-10 (training from scratch == training from the pruned model).\\nSpecifically, for MobileNetV2 trained from scratch for 200 epochs (same hyperparam with training from scratch), I get accuracy 91.77 with std 0.13. On the other hand, fine-tuning from the pruned model for 60 epochs, I get accuracy 93.07 with std 0.17.\\n\\nAlso, I would like to provide another reference, LcP [1] (my recent work), which can be also recognized as pruning as an architecture search.\\n\\n[1] Chin, Ting-Wu, Cha Zhang, and Diana Marculescu. \\\"Layer-compensated Pruning for Resource-constrained Convolutional Neural Networks.\\\" arXiv preprint arXiv:1810.00518 (2018).\\n\\nThanks,\\nTing-Wu\", \"title\": \"Interesting results but does this hold for compact model?\"}",
"{\"title\": \"References for our last response, due to space limit\", \"comment\": \"[9] AutoPruner: An End-to-End Trainable Filter Pruning Method for Efficient Deep Model Inference. Luo et al. arXiv, 2018.\\n[10] Runtime Neural Pruning. Lin et al. NIPS, 2017.\\n[11] SkipNet: Learning Dynamic Routing in Convolutional Networks. Wang et al. ECCV, 2018.\\n[12] Optimal Brain Damage. Lecun et al. NIPS, 1990.\\n[13] Pruning Convolutional Neural Networks for Resource Efficient Inference. Molchanov et al. ICLR, 2017.\\n[14] Network trimming: A Data-driven Neuron Pruning Approach towards Efficient Deep Architectures. Hu et al. arXiv, 2016.\\n[15] Rethinking the Smaller-norm-less-informative Assumption in Channel Pruning of Convolution Layers. Ye et al. ICLR, 2018.\\n[16] Less is More: Towards Compact CNN. Zhou et al. ECCV, 2016.\\n[17] Data-driven Sparse Structure Selection for Deep Neural Networks. Huang et al. ECCV, 2018.\\n[18] Learning Efficient Convolutional Networks through Network Slimming. Liu et al. ICCV, 2017.\\n[19] Learning Structured Sparsity in Deep Neural Networks. Wen et al. NIPS, 2016.\\n[20] Principal filter analysis for guided network compression. Suau et al. arXiv, 2018.\"}",
"{\"title\": \"Thanks and our response\", \"comment\": \"Dear Yang,\\n\\nThanks for your comment! We give our response as follows:\\n\\n1. We cannot agree \\\"the core idea is about the initialization of the small (pruned) network\\\". Instead, our core idea is to validate whether the common beliefs about network pruning are true. We agree that our experiments are comparing using random and inherited weights to initialize the pruned model, but through this, we are really questioning the existing common beliefs about pruning, like \\\"inheriting weights is useful\\\" or \\\"training a large model first is necessary for obtaining a efficient model\\\", as we have extensively discussed in the paper. Surprisingly our results do not support those beliefs and give us some new understandings, which we think is more important than the choice of initialization for the pruned model in engineering practice.\\n\\n2. First, we did not claim that the three-stage pipeline is not necessary for every pruning method. Our results only suggest that for a typical pruning algorithm that fits in our pipeline (PIPE1 in your comment) with predefined architectures, one can skip the pipeline.\\n\\nThe pipeline of our evaluated methods (PIPE1) is the dominantly popular pipeline in the network pruning literature [2,4,5,6,12,13,14,15,16,17,18,19,20], and the PIPE2 procedure you mentioned is adopted much less often [1,3]. From our understanding, the soft filter pruning (SFP) procedure proposed in [1], makes a significant modification to the conventional training process, by dropping out certain channels in every epoch, and this can provide additional regularization ability (as mentioned in [1]). Despite SFP could possibly get better results than PIPE1 methods, it could be due to the regularization effect. What we are interested in this paper, is instead the effect/necessity of over-parameterization. A fair scratch-baseline would be training the pruned model with SFP regularization, instead of training the pruned model normally as in our paper. We had a similar discussion for the AutoPruner method [9] in response to Jian-Hao Luo's comment. \\n\\n\\\"For example, ResNet-56-A on CIFAR-10 in Table 1, PIPE1 [4] achieves -0.17%(\\u201c-\\u201d means accuracy drop) and TSFS achieves -0.18% when pruning 10.4% FLOPs. While in SFP [1], PIPE2 even achieves +0.30% (\\u201c+\\u201d means accuracy improvement) when pruning 14.7% FLOPs.\\\" \\nNote that the ResNet-56-A model that saves 10.4% FLOPs, and the model you mentioned that saves 14.7% FLOPs, are of *different* architectures, so the TSFS (scratch) result of the first model is *not* comparable to the pruned result of the second model. It is possible that the second pruned model just has a better architecture, and training from scratch can give the same or better performance on the second model than pruning. In our paper, we are only interested in whether training the *same* pruned architecture from scratch can be on par with fine-tuning it, without considering models of similar FLOPs but *different* architectures. \\n\\nThat being said, we agree it is possible that similar conclusions do not hold for methods using PIPE2, as well as other variations of network pruning, e.g., dynamic pruning methods [10][11] (pruning based on current input). For better generality, our experiments are on the most general and widely-used prototype of network pruning (PIPE1), since we cannot exhaustively experiment on all variations of pruning methods. 
We could add discussions on this point in the revision.\\n\\n3. \\\"Because the optimization spaces provided by TSFS and PIPE1 are similar, it is not surprising that sometimes \\u201crandom initialization\\u201d by TSFS is better (Tables 2,3,5 in the paper) and occasionally \\u201cguided initialization\\u201d by PIPE1 is better (Tables 1,4,6 in the paper).\\\"\\n\\nIt had been a belief that PIPE1 has stronger optimization power than TSFS (see the first belief mentioned in the 2nd paragraph of the paper); that is the reason why people use PIPE1 instead of TSFS even when the target architecture is predefined [4,5,6]. Our experimental results, for the first time, suggest the optimization power is not as different as people used to believe, and we cannot agree that this observation is \\\"not surprising\\\".\\n\\n4. \\\"In [1,8], the pre-trained knowledge is beneficial for the final performance, which is contradictory to the proposed conclusion. For example, in [1,8], when pruning 40.8% FLOPs of ResNet-110 on CIFAR10, pruning a pre-trained model achieves +0.18% accuracy, while pruning a scratch model achieves -0.30% accuracy (worse than pruning the pre-trained model).\\\"\\nFrom our understanding, the \\\"scratch\\\" you referred to in your paper means training a large model from scratch with SFP, then pruning, and is *different* from the \\\"scratch\\\" we used in our paper, which means training the pruned model from scratch. This result demonstrates SFP is more suitable for fine-tuning a pretrained model, but it is neither directly related nor contradictory to our conclusion.\\n\\n5. Thanks again for your positive feedback on our Section 5, as well as your other feedback!\"}",
"{\"comment\": \"Dear authors,\\nI am Yang He, the first-author of SFP [1].\\n\\nIn my opinion, the core idea of your paper is about the initialization of the small (pruned) network. This paper claims that \\\"random initialization\\\" of the small model is comparable to, or even better than, the \\\"guided initialization\\\", which is provided by the \\u201cthree-stage pipeline\\u201d, the knowledge of pre-trained network and some selection criterions.\\n\\nHowever, the above phenomenon might not be enough to conclude that the \\u201cthree-stage pipeline is not necessary\\u201d, because the compared algorithm [2,4,5,6] in the paper might not reflect the true power of the \\u201cthree-stage pipeline\\u201d. \\n\\n------------------------------------------------------------------------------------------------------------------\\n\\nFor clarity, we use some notations here:\\nTSFS - Training small model from scratch (the proposed method).\\nPIPE1 - \\u201cthree-stage pipeline\\u201d of [2,4,5,6] (not allow the pruned connection to recover when training/fine-tuning).\\nPIPE2 - \\u201cthree-stage pipeline\\u201d of [1,3] (allow the pruned connection to recover when training/fine-tuning).\\n\\nGenerally, PIPE2 could achieve better performance than PIPE1 because PIPE2 allows the incorrectly pruned connections have a chance to come back [3] and provides a larger optimization space than PIPE1 [1]. \\n\\nHowever, the proposed conclusion, \\u201cthree-stage pipeline is not necessary,\\u201d is just based on the comparison between TSFE and PIPE1 ([2] for weight pruning and [4,5,6] for filter pruning). \\nIf we only consider PIPE1 (instead of PIPE2), we agree with Jian-Hao Luo that \\u201cpruning is not always better than training from scratch\\u201d[7]. Because the optimization space provided by TSFS and PIPE1 are similar, it is not surprising that sometimes \\u201crandom initiation\\u201d by TSFS is better (the Table 2,3,5 in the paper) and occasionally \\u201cguided initiation\\u201d by PIPE1 is better (the Table 1,4,6 in the paper).\\n\\n------------------------------------------------------------------------------------------------------------------\\n\\nTo draw the proposed conclusion that \\u201cthree-stage pipeline is not necessary\\u201d and to achieve a fairer comparison, we recommend comparing with [3] (for weight pruning) and [1] (for filter pruning) for two reasons.\\n\\n1. PIPE2 could get a better result than PIPE1. \\nFor example, ResNet-56-A on CIFAR-10 in Table 1, PIPE1[4] achieves -0.17%(\\u201c-\\u201d means accuracy drop) and TSFS achieves -0.18% when pruning 10.4% FLOPs. While in SFP [1], PIPE2 even achieves +0.30% (\\u201c+\\u201d means accuracy improvement) when pruning 14.7% FLOPs.\\n\\n2. In [1,8], the pre-trained knowledge is beneficial for the final performance, which is contradictory to the proposed conclusion. 
\\nFor example, in [1,8], when pruning 40.8% FLOPs of ResNet-110 on CIFAR10, pruning a pre-trained model achieves +0.18% accuracy, while pruning a scratch model achieves -0.30% accuracy (worse than pruning the pre-trained model).\\nWe believe it is one of the benefits of PIPE2 that it takes full advantage of the pre-trained knowledge.\\n\\n\\nBy the way, Section 5 is fascinating.\\n\\n-------------------\", \"reference\": \"[1] Soft filter pruning for accelerating deep convolutional neural networks, IJCAI\\u20192018, \\u201chttps://www.ijcai.org/proceedings/2018/0309\\u201d, \\u201chttps://github.com/he-y/soft-filter-pruning\\u201d\\n[2] Learning both Weights and Connections for Efficient Neural Networks, NIPS\\u20192015\\n[3] Dynamic Network Surgery for Efficient DNNs, NIPS\\u20192016\\n[4] Pruning filters for efficient convnets. ICLR\\u20192017\\n[5] Thinet: A filter level pruning method for deep neural network compression. ICCV\\u20192017\\n[6] Channel pruning for accelerating very deep neural networks. ICCV\\u20192017\\n[7] https://openreview.net/forum?id=rJlnB3C5Ym&noteId=rkg1zlP_5m\\n[8] Extended version of [1], https://arxiv.org/abs/1808.07471v2\", \"title\": \"Interesting, but unclear for some points.\"}",
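To make the PIPE1/PIPE2 distinction above concrete, here is a minimal PyTorch-style sketch of the two training loops. It is an illustration under stated assumptions, not the exact procedure of any cited paper: the selection criterion (smallest L2-norm filters), the pruning ratio, and the `train_one_epoch` helper are all hypothetical.

```python
import torch

def keep_mask(weight, ratio):
    # Rank the output filters of a conv weight by L2 norm; drop the smallest ones.
    norms = weight.view(weight.size(0), -1).norm(p=2, dim=1)
    n_prune = int(ratio * weight.size(0))
    mask = torch.ones(weight.size(0), dtype=torch.bool, device=weight.device)
    mask[norms.argsort()[:n_prune]] = False
    return mask  # True = keep this filter

def pipe1_hard_prune(conv, ratio, epochs):
    # PIPE1-style: prune once, then fine-tune; pruned filters never recover.
    mask = keep_mask(conv.weight.data, ratio)
    conv.weight.data[~mask] = 0.0
    for _ in range(epochs):
        train_one_epoch()                 # hypothetical training helper
        conv.weight.data[~mask] = 0.0     # re-apply the same fixed mask

def pipe2_soft_prune(conv, ratio, epochs):
    # PIPE2/SFP-style: re-select and zero filters every epoch, but keep
    # updating them, so incorrectly pruned filters have a chance to come back.
    for _ in range(epochs):
        train_one_epoch()
        mask = keep_mask(conv.weight.data, ratio)
        conv.weight.data[~mask] = 0.0
```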
"{\"title\": \"Code Release\", \"comment\": \"We have released the code and document to reproduce the results in this anonymous link (https://drive.google.com/open?id=1HB_1FphsWtbuMAdgHbODSLdAMnj3B6TM ). Links to trained ImageNet models are also included in the document.\"}",
"{\"title\": \"Data augmentation influence\", \"comment\": \"Thank you for the reply. We've conducted experiments to verify the influence of data augmentation (DA) during training and fine-tuning, using the ResNet-34-A/B model from Li et al. Here are the results:\", \"scratch_e_trained\": \"-----------------------------------------------------------\\n Model standard DA simpler DA\\n-----------------------------------------------------------\\nResNet34-A 72.77 70.96\\nResNet34-B 72.55 70.89\\n-----------------------------------------------------------\", \"fine_tuned\": \"-----------------------------------------------------------\\n Model standard DA simpler DA\\n-----------------------------------------------------------\\nResNet34-A 72.56 72.68\\nResNet34-B 72.29 71.89\\n-----------------------------------------------------------\\nIt can be seen that the DA scheme indeed has a much more significant impact on scratch-trained accuracy than fine-tuned accuracy.\\n\\nYes, training for more epochs is another reason for the difference in Scratch-B, which we omitted in our reply since our discussion was on the Scratch-E results.\"}",
"{\"title\": \"Updated Result Table for ThiNet\", \"comment\": \"Thanks for your suggestion, we've put the updated result table for ThiNet in the anonymous link here (https://drive.google.com/file/d/1oYuVLkACu4tDBi-wuZDOi0_H6XHfK1NC/view?usp=sharing ), and the table in the paper will be updated in the revised version. Now instead of using the same preprocessing scheme for all models, each model is evaluated using the scheme that it is trained/fine-tuned on.\\n\\nWe would like to let other readers know that results for other methods do not have this issue, and the update of ThiNet's results will not affect our main conclusions. This issue is due to that different image preprocessing schemes are used for training and fine-tuning in the original ThiNet paper, and in the original submission, we evaluated all models using the most commonly used scheme.\"}",
"{\"comment\": \"Of course, the unpruned models should be 71.03%. This is a wrong setting in my ThiNet paper. I just cite the 68.36% accuracy reported in previous work. After ICCV, I noticed this bug. Hence, the accuracy of unpruned model is changed to 71.50% in the journal version.\", \"title\": \"Reply\"}",
"{\"title\": \"Reply to \\\"image preprocessing scheme\\\"\", \"comment\": \"Yes, we agree that testing preprocessing scheme should be the same as the model's training preprocessing scheme. But the original unpruned VGG in Caffe was trained using scheme 1, however, in the ThiNet paper it is evaluated using scheme 2, thus it is significantly worse (68.36% vs 71.03%), so if we compare relative accuracy drop it is unfair for us.\\n\\nIs it ok if we use VGG-Conv/GAP/Tiny evaluated on scheme 2 (69.80% for VGG-Conv), and unpruned models evaluated using scheme 1 (71.03% instead of 68.36% in ThiNet paper)? In that case, every model is tested using the same scheme as its training. We think this is a fair setting. If we reach a consensus, we will make the change for both VGG and ResNet series in the revised version.\"}",
"{\"comment\": \"Ok, my suggestion is that if you train a model using scheme 2, you should test it using scheme 2 rather than other preprocessing schemes. Or you can fine-tune it using scheme 1 in several epochs, then the network will recover its accuracy. That is to say, I train a VGG-Conv model using scheme 2, and test its accuracy using scheme 2, then I get 69.80%. But, if I test its accuracy using scheme 1, it may be only 67.80% (just a guess). However, if I fine-tune it using scheme 1 with 1 epoch, I will get 69.80% accuracy again. Hence, I think 69.80% is a more convincing result.\", \"title\": \"image preprocessing scheme\"}",
"{\"title\": \"Thanks and our answers\", \"comment\": \"Hi Jian-Hao,\\nThanks for your comment! We give our answers to your questions below:\\n1)\", \"differences_in_results\": \"The differences between our results and yours are due to the difference in image preprocessing scheme at test time, as we mentioned in the methodology section \\\"For testing on the ImageNet, the image is first resized so that the shorter edge has length 256, and then center-cropped to be of 224\\u00d7224\\\" (scheme 1). In your paper, the preprocessing is \\\"images are resized to 256 \\u00d7 256\\\" and then \\\"center crop to 224 \\u00d7 224\\\" (scheme 2), except for the \\\"VGG-Tiny\\\" model which uses scheme 1. We believe scheme 1 is a more commonly used one. During our experiments, we evaluated all models in both schemes, and we show the complete results in the anonymous link here ( https://drive.google.com/open?id=1_nQmJlLGqfDDG7MFyyF3Km0eohdQxQPJ ). The results for scheme 2 should match the results in the original ThiNet paper. The reason why we chose to present results for scheme 1 in the paper is also explained in the linked file. If needed, we could include the results for both schemes in the revised paper or Appendix.\", \"frameworks\": \"In fact, we've reproduced the scratch-trained result in your paper (67%) in Caffe, using the data augmentation and image preprocessing scheme (scheme 2 mentioned above) from your Github, which, however, are different from the training setting of the original VGG trained in Caffe Model Zoo (the model you used as the unpruned model). If we train the original unpruned VGG using this setting in Caffe, we cannot achieve the accuracy of the unpruned VGG reported in your paper either. \\nAs we mentioned in the paper, the contradiction between our results and yours may be due to a simpler-than-standard data augmentation scheme during training from scratch. We believe the cause is more than the differences in frameworks, since we already compare relative performance drop from unpruned models in each framework.\", \"scratch_b\": \"We only train the ImageNet models for at most 180 epochs (2x90 epochs), as mentioned in the footnote of page 4.\\nWhen a large pretrained model is given, we agree pruning and fine-tuning can be faster, as we mentioned in the last section of the paper. But in most practical cases, we need to train the large model by ourselves (as popular pretrained models are only on certain datasets like ImageNet), thus we think Scratch-B is a fair setting. \\n\\n2)\\na) The reason is the same as that for VGG.\\n\\nb) The difference between Pytorch and Caffe should not matter that much since we compare relative accuracy drop. \\n\\nAs for AutoPruner, we noticed that there are a pooling layer and a fully connected layer for selecting channels in each convolutional layer, and their parameters can be removed after training. From our understanding, this channel selection module is somewhat similar to the \\\"squeeze-and-excitation\\\" module in the SENet paper [1]. Other than enabling pruning, these modules themselves can give the network stronger representation power and boost the accuracy (as shown in [1]), thus a fair comparison would be training the pruned networks with those modules from scratch, and remove them afterward. However, in our evaluation for training from scratch, ResNet-50% and ResNet-30% are not trained with those modules, since ThiNet does not have those modules. 
Therefore, we think the results are not directly comparable, even though they may have exactly the same architectures after training. Thus, our results here might not support your claim that \\\"pruning can outperform training from scratch\\\".\\n\\n3) Thanks for your positive feedback!\\n\\n[1] Squeeze-and-excitation Networks. Hu et al. CVPR 2018.\"}",
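As a sketch, the two test-time schemes discussed in this thread differ only in the resize step; with torchvision semantics (an int resizes the shorter edge, a tuple forces an exact size) they would look like:

```python
from torchvision import transforms

# Scheme 1: resize so the shorter edge is 256 (aspect ratio preserved),
# then take a 224x224 center crop.
scheme1 = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
])

# Scheme 2: resize to exactly 256x256 (aspect ratio distorted),
# then take a 224x224 center crop.
scheme2 = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.CenterCrop(224),
])
```

As the numbers quoted in this thread suggest, evaluating a model under a scheme it was not trained with can cost a couple of accuracy points, which is why the discussion converges on testing each model with its own training scheme.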
"{\"comment\": \"Hi,\\n\\nI am Jian-Hao Luo, the first-author of ThiNet (ICCV'17, TPAMI'18). In fact, I also noticed that pruning is not always better than training from scratch. Sometimes, it may be even worse than training from scratch. But, in general, I think pruning can outperform training from scratch.\\n\\nI have read your paper, then I will talk about the results shown in Table 2.\\n\\n1) First, Let us focus on VGG16. In my ICCV paper, the top-1 accuracy of VGG-GAP is 67.34%, 3.69% lower than your baseline accuracy (71.03%). But in Table 2, the accuracy drop is 4.93%. Is there anything wrong? Then, let us translate this table into a more intuitive one:\\n-------------------------------------------------------------\\nStrategy VGG-Conv VGG-GAP VGG-Tiny\\n-------------------------------------------------------------\\nThiNet 69.80% 67.34% 58.34%\\nScratch-E 68.76% 66.85% 57.15%\\nScratch-B 71.71% 68.66% 59.93%\\n-------------------------------------------------------------\\nObviously, ThiNet is better than Scratch-E, i.e., train from scratch in the same epochs. In fact, I also compared with training from scratch in ICCV' paper. Its accuracy is 67% (the same architecture as VGG-Conv, in 80 epochs). The gap between 67% and yours 68.76% may be because of deep learning framework (caffe vs. PyTorch) and image pre-processing. \\n\\nThe next question is about Scratch-B? In my opinion, it is due to more training epoch. VGG-Conv and VGG-GAP save more than 3x FLOPs than VGG16 model. Then, Scratch-B will train 3x epochs than Scratch-E. If you use the official settings of PyTorch, it needs 90 epochs to train VGG16 from scratch. Then I guess you cost 270 epochs to train VGG-Conv and VGG-GAP. But, in my ThiNet pipeline, I only conduct 26 epochs to finish pruning and fine-tuning (see the code in Github[1]). Hence, I think Scratch-B is unfair.\\n\\n2) Next, let us talk about ResNet-50:\\na) The same problem also exists. The reported accuracy of ResNet50-30% is 68.42%, 6.73% lower than your baseline (75.15%). Or do you compare with 76.13%?\\nb) In my ICCV implementation, the hyper-parameter settings of pruning ResNet50 are not good. Hence, the accuracy drop is obvious. We have improved their performance in the journal version [2]. But they are still lower than your Scratch-E and Scratch-B. Then, I think: (1). ThiNet is not good enough when pruning ResNet models. (2). PyTorch is better than caffe when training ResNet.\\n\\nHence, I agree that ThiNet is not good at pruning ResNet. But I do not agree that training from scratch is better than all pruning methods in ResNet. For example, our recent arxiv paper AutoPruner [3] is better than the reported training from scratch results:\\n----------------------------------------------------------------\\nStrategy ResNet50-30% ResNet50-50% \\n----------------------------------------------------------------\\nAutoPruner 72.53% 74.22%\\nScratch-E 70.92% 73.31%\\nScratch-B 71.57% 73.90%\\n----------------------------------------------------------------\\nMy AutoPruner has exactly the same architecture as ThiNet. And it is also conducted within PyTorch. My code is based on the PyTorch official training code [4], and use the same image preprocessing pipelines. Hence, the comparison is fair.\\n\\n3) Anyway, this is an interesting paper. The observation mentioned in this paper is important for the community. Maybe we should think more about pruning and training from scratch.\\n\\n-------------------\", \"reference\": \"[1]. 
ThiNet code: https://github.com/Roll920/ThiNet_Code\\n[2]. ThiNet: Pruning CNN Filters for a Thinner Net. TPAMI, 2018.\\n[3]. AutoPruner: An End-to-End Trainable Filter Pruning Method for Efficient Deep Model Inference. arXiv, 2018.\\n[4]. PyTorch ImageNet code: https://github.com/pytorch/examples/blob/master/imagenet/main.py\", \"title\": \"Interesting paper!\"}",
"{\"title\": \"Reply to \\\"Tensor decompositions\\\"\", \"comment\": \"Thanks for your comment! Indeed, tensor decomposition is a very important family of compression techniques. Here we mainly focus on understanding and verifying the assumptions behind network pruning. Tensor decomposition approaches share some similar operations with network pruning, but differ in some important aspects, e.g., the methods you mentioned (Zhang et al., Jaderberg et al.) do not use fine-tuning, and some works already adopt the strategy of directly training the low-rank decomposed network from scratch (e.g., [1][2]). For tensor decomposition, we think the assumption could be more appropriately described as \\\"low-rank tensor is a more efficient parameterization for convolution weights\\\", rather than \\\"starting with a large model is necessary\\\" or \\\"inheriting important weights is helpful\\\", as we studied in the network pruning methods.\\n\\nThat being said, investigating whether tensor decomposition and other compression methods exhibit similar properties would be an interesting future work.\\n\\n[1] Training CNNs with Low-rank Filters for Efficient Image Classification. Ioannou et al., ICLR 2016.\\n[2] Convolutional Neural Networks with Low-rank Regularization. Tai et al., ICLR 2016.\"}",
"{\"comment\": \"Interesting results!\\nWould it be possible to add tensor decomposed architectures like Zhang et. al. or Jaderberg et. al. (Asym3D, Spatial SVD) to the paper? That's a whole set of neural network compression methods left out of the equation here :)\\n\\nCheers,\\nTijmen\", \"title\": \"Tensor decompositions\"}",
"{\"comment\": \"Maybe fine-tuning doesn't rely that much on data augmentation. The training time of the from scratch counterpart in our experiment is not as long as your Scratch-B, which may also cause the difference.\", \"title\": \"That's interesting\"}",
"{\"title\": \"Sorry for the typo, \\\"VGG-2x\\\" should be \\\"VGG-5x\\\"\", \"comment\": \"Dear Yihui,\\nThanks for your detailed comments! \\n1. We double checked our result tables. In our Table 3, we found the \\\"VGG-2x\\\" is actually a typo, it should be \\\"VGG-5x\\\", which is the model available on your GitHub repo and the model we actually used. We are sorry for the confusion and we will correct the typo in the next version. After this change, our results for fine-tuning match the results listed above.\\n\\nFor scratch-trained results, we hypothesize the difference could be explained by less carefully chosen hyper-parameter setting and data augmentation scheme, as we mentioned in Section 1. For example, fine-tuning may not require heavy data augmentation for a good performance (if it is already used in large model training), but training from scratch does require. From your GitHub repo, we observe that you use models pre-trained with heavy data augmentation, and during fine-tuning a simpler data augmentation scheme is used. If the same setting as fine-tuning is used for evaluating the scratch-baseline, the difference could be explained. In our experiment, the scratch-baseline setting is the same as large model training. We plan to release our code soon.\\n\\n2. The focus of our submission is the property of network pruning. From our understanding, the \\\"VGG_3C_4x_FT\\\" model you referred to is obtained through a combination of 3 techniques, namely spatial factorization, channel factorization, and channel pruning. If for this model, the scratch results cannot match the fine-tuned results, it could be due to the former two techniques. Thus, to isolate the effects, we choose to evaluate network obtained by only channel pruning.\\n\\n3. Yes, thanks for pointing out. The AMC method is indeed very related to our discussion. We will add citations to AMC and other examples of using pruning-related techniques to guide architecture search (e.g., [1]) in the revised version.\\n[1] MorphNet: Fast & Simple Resource-Constrained Structure Learning of Deep Networks. CVPR 2018.\"}",
"{\"comment\": \"Dear authors,\\n I'm Yihui He, the first author of channel pruning and AMC [1] (channel pruning with RL).\\n First, please correct our VGG-16 results:\\n\\n(the baseline top-1/top-5 Err: 29.5%/10.1%)\\nModel\\t Top1/Top5 Inc Err(%)\\n(Table 1)\\nVGG_2x_FT\\t\\t0.0/0.0\\nVGG_4x_FT\\t\\t2.2/1.0\\nVGG_5x_FT\\t\\t2.9/1.7\\n(Table 2)\\nVGG_3C_4x_FT\\t-0.3/0.0\\nVGG_3C_5x_FT\\t0.0/0.3\\n(Table 4)\\nFrom scratch\\t2.4/1.8\\nFrom scratch uni\\t3.4/2.4\\nOurs (FT)\\t\\t2.2/1.0\\n\\nSo actually VGG_2x_FT doesn't hurt accuracy at all. Your argument in section 4.1 \\\"scratch-trained models are better than the fine-tuned models\\\" does not hold for channel pruning. I do agree that channel pruning is not good at ResNet, which is because of the multi-branch characteristic. \\n\\nSecond, I'm more curious to see the comparison of your Scratch-B and Scratch-E with our strong result VGG_3C_4x_FT which is also released on Github. VGG_3C_4x_FT reduced computation 4x and increased top-1 accuracy by 0.3%, which was the state-of-the-art result at the time our paper published. \\n\\nThird, AMC [1] is a perfect example of network pruning as architecture search (section 5).\\n\\n[1] AMC: AutoML for Model Compression and Acceleration on Mobile Devices, ECCV'18, http://openaccess.thecvf.com/content_ECCV_2018/html/Yihui_He_AMC_Automated_Model_ECCV_2018_paper.html\", \"title\": \"please use the correct results of channel pruning\"}"
]
} |
|
H1x3SnAcYQ | A Better Baseline for Second Order Gradient Estimation in Stochastic Computation Graphs | [
"Jingkai Mao",
"Jakob Foerster",
"Tim Rocktäschel",
"Gregory Farquhar",
"Maruan Al-Shedivat",
"Shimon Whiteson"
] | Motivated by the need for higher order gradients in multi-agent reinforcement learning and meta-learning, this paper studies the construction of baselines for second order Monte Carlo gradient estimators in order to reduce the sample variance. Following the construction of a stochastic computation graph (SCG), the Infinitely Differentiable Monte-Carlo Estimator (DiCE) can generate correct estimates of arbitrary order gradients through differentiation. However, a baseline term that serves as a control variate for reducing variance is currently provided only for first order gradient estimation, limiting the utility of higher-order gradient estimates. To improve the sample efficiency of DiCE, we propose a new baseline term for higher order gradient estimation. This term may be easily included in the objective, and produces unbiased variance-reduced estimators under (automatic) differentiation, without affecting the estimate of the objective itself or of the first order gradient. We provide theoretical analysis and numerical evaluations of our baseline term, which demonstrate that it can dramatically reduce the variance of second order gradient estimators produced by DiCE. This computational tool can be easily used to estimate second order gradients with unprecedented efficiency wherever automatic differentiation is utilised, and has the potential to unlock applications of higher order gradients in reinforcement learning and meta-learning. | [
"Reinforcement learning",
"meta-learning",
"higher order derivatives",
"gradient estimation",
"stochastic computation graphs"
] | https://openreview.net/pdf?id=H1x3SnAcYQ | https://openreview.net/forum?id=H1x3SnAcYQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJxBDIyblE",
"BylyjkynC7",
"Byx4BJJ3RQ",
"SJgVaACi0X",
"HJl83sY967",
"ryglNsz9p7",
"B1eK5IRF6X",
"Hyl7UE0Ya7",
"HJeQrTXahX",
"SJxbt-3sjX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544775260727,
1543397270704,
1543397179879,
1543397052099,
1542261678358,
1542232872415,
1542215313279,
1542214731084,
1541385531137,
1540239736537
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1579/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1579/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1579/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1579/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1579/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1579/AnonReviewer5"
],
[
"ICLR.cc/2019/Conference/Paper1579/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1579/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1579/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1579/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper extends the DiCE estimator with a better control variate baseline for variance reduction.\\nThe reviewers all think the paper is fairly clear and well written. However, as the reviews and discussion indicates, there are several critical issues, including lack of explanation of the choice of baseline, the lack more realistic experiments and a few misleading assertions. We encourage the authors to rewrite the paper to address these criticism. We believe this work will make a successful submission with proper modification in the future.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Good work but some critical issues need to be addressed\"}",
"{\"title\": \"Thank you for the relevant references and the review. We disagree on relevance (as you can imagine).\", \"comment\": \"\\u201cI would \\\"reverse engineer\\\" from the exact derivatives and figure out the corresponding DiCE formula.\\u201d\", \"there_are_two_separate_but_related_challenges\": \"First of all, you need to formulate the correct baseline for the 2nd order derivatives. In particular we wanted to make sure that this baseline can be constructed using the standard state-value function.\\nSecondly, this baseline needs to be constructed via a combination of DiCE operators, such that it can be included in the original objective. In other words, it needs to leave the evaluation of both, the original objective and the first order gradient, unchanged, but then also generate the correct terms for the 2nd order variance reduction when differentiated twice. This is non-trivial.\\n\\n\\u201cb_w,.. Is this choice optimal for the second order control variate\\u201d:\\nThe main application of the DiCE formalism is within the context of Reinforcement Learning. In Reinforcement Learning, b_w is simply the state value-function, V(s). While this is not an optimal baseline, it is the best *practical* baseline based upon applications. It is used by state-of-the-art algorithms such as PPO (https://arxiv.org/abs/1707.06347), A3C (https://arxiv.org/pdf/1602.01783.pdf) and others.\\nOur 2nd order baseline is the extension of this \\u2018good enough\\u2019 baseline to higher order terms. As such it is not an optimal baseline, but a \\u2018good enough\\u2019 and easy to implement one. \\n\\n\\u201cdesign choice of b_w is not rigorously explained,\\u201d:\\nOur focus is on having a variance reduction baseline which keeps the estimator unbiased. As such the terms need to be of the form b_w as described in the paper. We\\u2019ll clarify this further as appropriate. We respectfully disagree that MAML is a more complex task. LOLA has the same properties in terms of differentiating through the learning step of an agent. One major difference here is the continuous vs discrete action space.\\n\\n\\u201cwhy using marginal distribution is valid when nodes in W are not independent.\\u201d:\\nThe nodes w are indeed independent when conditioned on their causes. Intuitively, you can think of this as the actions being sampled iid once you condition on the states. \\n\\n\\u201cCite [1], [2]\\u201d:\\nMany thanks, we\\u2019ll update the paper to include these references! \\n\\n\\\"correlation coefficient\\\":\\n-Yes.\"}",
"{\"title\": \"Thank you for an encouraging and insightful review\", \"comment\": \"Thank you for an encouraging and insightful review. We address specific points below.\\n\\n\\u201cMore explanation/intuition\\u201d:\\nWe will add further intuition regarding the construction of the 2nd order baseline. The basic derivation is currently provided in the appendix on the bottom of page 11: When the DiCE objective is differentiated twice, the resulting terms can be rewritten as a double summation over nodes in the graph, with the inner sum containing an R_v (ie. sum of downstream costs). Importantly, the first order baseline does not provide a variance-reduction term for the R_v in term A^2. To do so we need a term that is the same as the A^2 term but contains -b_w instead of R_v, after double differentiation. That\\u2019s how we constructed the 2nd order baseline. We fully agree that this part should be explained more clearly in the paper and will do so in the next version.\\n\\n\\u201cshow that the reduction in variance is isolated to the second term\\u201d:\\nIn fact, we already show this in the paper. In Figure 3, we compare the performance of the original DiCE objective (including the first order baseline) with the DiCE objective including the 1st and 2nd order baselines. We will make this point more clear in the text.\\n\\n\\u201c solve new, more difficult problems\\u201d:\\nThis is a great idea which we hope to address in future work. We also believe that making this method broadly available will encourage other research groups to use the tool to solve new problems.\"}",
"{\"title\": \"A detailed review that partially misses the point due to a miss understanding of the purpose of the paper.\", \"comment\": \"First of all, we would like to thank you for taking the time to review the paper and for taking the time to reply to our comments. This is appreciated, especially at a busy time of the year like this.\\n\\n@1) This is fair: While from a practitioner\\u2019s perspective \\u2018c = 1\\u2019, from an educational point we agree that mentioning the optimal \\u2018c\\u2019 is valuable and we\\u2019ll include this in the next revision of the paper. We had omitted this since it seemed irrelevant from a practical point of view, but you are right that it is useful background.\\n\\n@2)-5): All of these points indicate a miss-understanding of the paper\\u2019s main point, which we will emphasise more in a revision:\\nBy \\u201cbetter baseline\\u201d we mean literally \\u201cbetter than current DiCE baseline\\u201d. So the point is not a comparison between action-dependent and state-dependent baselines, but simply making a \\u2018better baseline\\u2019 easily available. \\nWe agree that investigating action-dependent baselines is a fascinating research area. However, that\\u2019s also the main reason why we do not focus on them in our work: The point of this paper is to make methods that have been proven to work (and are commonly being used), more easily available to practitioners.\", \"this_fits_in_nicely_with_the_narrative_of_dice_overall\": \"The main point of DiCE is to facilitate the development and deployment of methods that require higher order gradients. Note that higher order gradients here are not the subject of the research, but merely a required tool. This process should be pursued in parallel to the development, investigation, and analysis of different higher order estimators and variance reduction techniques.\", \"we_also_strongly_disagree_with_the_statement_that_the_baseline_should_be_tested_on_novel_settings_to_increase_the_novelty_of_the_paper\": \"Reproducibility is absolutely vital for scientific progress, especially for the development of tools (such as DiCE and its baselines).\", \"so_i_think_what_it_comes_down_to_is_this\": \"Are higher order gradient estimators well enough developed so that they can be used for practical applications as a tool rather than having to be the subject of research itself? Our belief is yes and this submission constitutes an important step in that direction. Reducing the sample-requirements by a factor of 100x should not be just brushed aside, even if it's on a 'toy' problem.\\n\\nFor future work we do agree that it would be great to extend our formalism for the baseline to include the option of having action-dependent baselines and others.\"}",
"{\"title\": \"Interesting paper, could push it further\", \"review\": \"This paper extends the \\\"infinitely differentiable Monte Carlo gradient estimator\\\" (or DiCE) with a better control variate baseline for reducing the variance of the second order gradient estimates.\\n\\nThe paper is fairly clear and well written, and shows significant improvements on the tasks used in the DiCE paper.\", \"i_think_the_paper_would_be_a_much_stronger_submission_with_the_following_improvements\": [\"More explanation/intuition for how the authors came up with their new baseline (eq. (8)). As the paper currently reads, it feels as if it comes out of nowhere.\", \"Some analysis of the variance of the two terms in the second derivative in eq. (11). In particular, it would be nice to show the variance of the two terms separately (for both DiCE and this paper), to show that the reduction in variance is isolated to the second term (I get that this must be the case, given the math, but would be nice to see some verification of this). Also I do not have good intuition for which of these two terms dominates the variance.\", \"I appreciate that the authors tested their estimator on the same tasks as in the DiCE paper, which makes it easy to compare them. However, I think the paper would have much more impact if the authors could demonstrate that their estimator allows them to solve new, more difficult problems. Some of these potential applications are discussed in the introduction, it would be nice if the authors could demonstrate improvements in those domains.\", \"As is, the paper is still a nice contribution.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A paper on an important topic, but the contribution is not very significant\", \"review\": \"Thank you for an interesting read.\\n\\nThis paper extends the recently published DiCE estimator for gradients of SCGs and proposed a control variate method for the second order gradient. The paper is well written. Experiments are a bit too toy, but the authors did show significant improvements over DiCE with no control variate.\\n\\nGiven that control variates are widely used in deep RL and Monte Carlo VI, the paper can be interesting to many people. I haven't read the DiCE paper, but my impression is that DiCE found a way to conveniently implement the REINFORCE rules applied infinite times. So if I were to derive a baseline control variate for the second or higher order derivatives, I would \\\"reverse engineer\\\" from the exact derivatives and figure out the corresponding DiCE formula. Therefore I would say the proposed idea is new, although fairly straightforward for people who knows REINFORCE and baseline methods.\\n\\nFor me, the biggest issue of the paper is the lack of explanation on the choice of the baseline. Why using the same baseline b_w for both control variates? Is this choice optimal for the second order control variate, even when b_w is selected to be optimal for the first order control variate? The paper has no explanation on this issue, and if the answer is no, then it's important to find out an (approximately) optimal baseline for this second order control variate. \\n\\nAlso the evaluation seems quite toy. As the design choice of b_w is not rigorously explained, I am not sure the better performance of the variance-reduced derivatives generalises to more complicated tasks such as MAML for few-shot learning.\", \"minor\": \"1. In DiCE, given a set of stochastic nodes W, why did you use marginal distributions p(w, \\\\theta) for a node w in W, instead of the joint distribution p(W, \\\\theta)? I agree that there's no need to use p(S, \\\\theta) that includes all stochastic nodes, but I can't see why using marginal distribution is valid when nodes in W are not independent.\\n\\n2. For the choice of b_w discussed below eq (4), you probably need to cite [1][2].\\n\\n3. In your experiments, what does \\\"correlation coefficient\\\" mean? Normalised dot product?\\n\\n[1] Mnih and Rezende (2016). Variational inference for Monte Carlo objectives. ICML 2016.\\n[2] Titsias and L\\u00e1zaro-Gredilla (2015). Local Expectation Gradients for Black Box Variational Inference. NIPS 2015.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"We strongly disagree with the main points raised, particularly the misconception around the optimal scaling constant \\u2018c\\u2019 and action-dependent baselines.\", \"comment\": \"Thank you for your feedback. However, we strongly disagree with the main points of criticism, particularly the misconception around the optimal scaling constant \\u2018c\\u2019 and action-dependent baselines. Please see our detailed response to the individual points of the review below.\\n\\n1) Re \\u201cpositive vs negative correlation\\u201d: Of course covariates can be positively or negatively correlated. The fact that we say it should be positively correlated does not indicate a misunderstanding but merely reflects the fact that in our case c is set to 1 (see below why directly evaluating the optimal c is not practical in realistic settings).\\n\\n2) Re \\u201coptimal scaling constant\\u201d: While it is well known that the optimal variance reduction depends on the covariance between the control variate and the estimator, this optimal factor is rarely used in practice for reinforcement learning due to the computational costs of doing one gradient estimate per entry in a batch. What is used in practice across the board for Deep RL (eg. A3C, PPO, IMPALA, etc) is the value-function based variance reduction, which we are enabling for higher order gradients through the DiCE formalism. The fact that our method only depends on the commonly used state-value-function for the baseline computation is a strength, not a weakness.\\n\\n3) Re \\u201cindependent of the action\\u201d: We will clarify this issue in the paper but the review misrepresents the facts on this point. Yes, there is a way to account for the bias introduced by an action-dependent baseline and in some cases this bias can be removed exactly. However, this is another method (like the optimal scaling factor mentioned above) that has not been shown to work in practice. In fact the very paper cited by the reviewer (the \\u2018mirage of action dependent baselines\\u2019), concludes that from a practitioner's point of view there currently is no reason to consider action dependent baselines. Our submission extends the utility of value functions to provide action-independent variance reduction for higher order gradients.\\n\\n4) Re \\u201crevises DiCE formalism\\u201d: Thank you for this suggestion. Prior to submission, we carefully considered how our contribution and decided that in order for the paper to be self-contained, Stochastic Computation Graphs as well as the DiCE formalism should be explained clearly in the Background section. To highlight the delta to prior work, we clearly separated out the revision of Stochastic Computation Graphs and the DiCE formalism (Section 2) from the Method (Section 3). \\n\\n5) Re \\u201cnovelty too low.. does not generalize past the second order gradient\\u201d: This is obviously a subjective claim. However, note that second order gradients are a key use-case in meta-learning and multi-agent RL and as such the new baseline has the potential to unlock a large number of applications and is of key importance to the community (in fact we are aware of one other research group that has already started experimentation with this baseline). 
Also, just as the 1st order variance reduction term contributes to a lower 2nd order variance, our new 2nd order baseline also acts as a variance reduction for higher order gradient estimators, although we did not quantify the impact experimentally (since 2nd order is the most relevant for current research). \\n\\n6) Re \\u201cexperiments are identical\\u201d: This is also a strength, not a weakness: for the sake of reproducibility between the original DiCE and the new baseline, it is crucial to use the same setting. Also, note that this paper is about proposing a new tool, rather than demonstrating full solutions to novel applications. Experimental results are provided as proof-of-principle and are not the main point of the paper. Clearly, the experimental results support the utility of the new baseline compared to a previously published result.\"}",
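For readers following point 3), the standard argument for why a state-only baseline keeps the estimator unbiased is the score-function identity (generic policy-gradient notation, not the paper's):

```latex
\mathbb{E}_{a \sim \pi_\theta(\cdot \mid s)}\!\left[ \nabla_\theta \log \pi_\theta(a \mid s)\, b(s) \right]
  \;=\; b(s)\, \nabla_\theta \sum_a \pi_\theta(a \mid s)
  \;=\; b(s)\, \nabla_\theta 1 \;=\; 0 .
```

Subtracting b(s) therefore changes no expectation; an action-dependent b(s, a) breaks this cancellation unless an explicit correction term is added, which is the bias issue debated in this exchange.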
"{\"title\": \"Thank you for the positive review, we\\u2019re excited about applying this tool to larger problems in future work.\", \"comment\": \"Many thanks for the review. While we agree that more experimental validation would have value, this paper is primarily proposing a novel method, which is validated both through proof-of-principle experiments, but, also, and more importantly, theoretically. Furthermore, since we uploaded the paper to OpenReview, another research group has already started experimenting with the new baseline.\"}",
"{\"title\": \"An important direction motivated by recent need for second-order gradient estimation, but need to verify its advantages more thoroughly\", \"review\": \"In this paper, the author proposed a better control variate formula for second-order Monte Carlo gradient estimators, based on a special version of DiCE (Foerster et al, 2018). The motivation and the main method is easy to follow and the paper is well written. The author followed the same experiments setting as DiCE, numerically verifying the advantages of the newly proposed baseline, which can estimate the Hession accurately.\\n\\nThe work is essentially important due to the need for second-order gradient estimation for meta-learning (Finn et al., 2017) and multi-agent reinforcement learnings. However, the advantage of the proposed method is not verified thoroughly. The only real application demonstrated in the paper, can be achieved the same performance as the second-order baseline using a simple trick. Since this work only focuses on second-order gradient estimations, I think it would be better to verify its advantages in various scenarios such as meta-learning or sparse reward RL as the author suggested in the paper.\\n\\nFinn, Chelsea, Pieter Abbeel, and Sergey Levine. \\\"Model-agnostic meta-learning for fast adaptation of deep networks.\\\" ICML 2017.\\nFoerster, Jakob, et al. \\\"DiCE: The Infinitely Differentiable Monte-Carlo Estimator.\\\" ICML 2018.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nicely written paper contributes useful trick; novelty too low for conference track, some correctness issues present\", \"review\": \"Overview:\\nThis nicely written paper contributes a useful variance reduction baseline to make the recent formalism of the DiCE estimator more practical in application. I assess the novelty and scale of the current contribution as too low for publication at ICLR. Also, the paper includes a few incorrect assertions regarding the control variate framework as well as action-dependent baselines in reinforcement learning. Such issues reduce the value of the contribution in its current form and may contribute to ongoing misunderstandings of the control variate framework and action-dependent baselines in RL, to the detriment of variance reduction techniques in machine learning. I do not recommend publication at this time.\", \"pros\": \"The paper is well written modulo the issues discussed below. It strikes me as a valuable workshop contribution once the errors are addressed, but it lacks enough novelty for the main conference track.\", \"issues\": \"* (p.5) \\\"R_w and b_w are positively correlated by design, as they should be for variance reduction of the first order gradients.\\\"\\n\\nThis statement is not true in general. Intuitively, a control variate reduces variance because when a single estimate of an expectation of a function diverges from its true value according to some delta, then, with high probability, some function strongly correlated with that function will also diverge with a similar delta. Such a delta might be positive or negative, so long as the error may be appropriately modeled as drawn from some symmetric distribution (i.e. is Gaussian).\\n\\nControl variates are often estimated with an optimal scaling constant that depends on the covariance of the original function and its control variate. Due to the dependence on the covariance, the scaling constant flips sign as appropriate in order reduce variance for any delta. For more information, see the chapter on variance reduction and subsection on control variates in Sheldon Ross's textbook \\\"Simulation.\\\"\\n\\nThe fact that a control variate appears to work despite this is not surprising. Biased and suboptimal unbiased gradient estimators have been shown to work well for reasons not fully explored in the literature yet. See, for example, Tucker et al.'s \\\"Mirage of Action-Dependent Baselines\\\", https://arxiv.org/abs/1802.10031.\\n\\nSince the authors claim on page 6 that the baseline is positively correlated by design, this misunderstanding of the control variate framework appears to be baked into the baseline itself. I recommend the authors look into adaptively estimating an optimal scale for the baseline using a rolling estimator of the covariance and variance to fix this issue. See the Ross book cited above for full derivation of this optimal scale.\\n\\n* The second error is a mischaracterization of the use and utility of action-dependent baselines for RL problems, on page 6: \\\"We choose the baseline ... to be a function of state ... it must be independent of the action ....\\\" and \\\"it is essential to exclude the current action ... because the baselines ... must be independent of the action ... 
to remain unbiased.\\\" In the past year, a slew of papers have presented techniques for the use of action-dependent baselines, with mixed results (see the Mirage paper just cited), including two of the papers the authors cited.\\n\\nCons\\n* Much of paper revises the DiCE estimator results, arguing for and explaining again those results rather than referring to them as a citation. \\n* I assess the novelty of proposed contribution as too low for publication. The baseline is an extension of the same method used in the original paper, and does not generalize past the second order gradient, making the promising formalism of the DiCE estimator as infinitely differentiable still unrealizable in practice.\\n* The experiments are practically identical to the DiCE estimator paper, also reducing the novelty and contribution of the paper.\\n\\n*EDIT: \\nI thank the authors for a careful point-by-point comparison of our disagreements on this paper so that we may continue the discussion. However, none of the points I identified were addressed, and so I maintain my original score and urge against publication. In their rebuttal, the authors have defended errors and misrepresentations in the original submission, and so I provide a detailed response to each of the numbered issues below:\\n\\n(1) I acknowledge that it is common to set c=1 in experiments. This is not the same as the misstatements I cited, verbatim, in the paper that suggest this is required for variance reduction. My aim in identifying these mistakes is not to shame the authors (they appear to simply be typos) but simply to ensure that future work in this area begins with a correct understanding of the theory. I request again that the authors revise the cited lines that incorrectly state the reliance of a control variate on positive correlation. It is not enough to state that \\\"everyone knows\\\" what is meant when the actual claim is misleading.\\n\\n(2) Without more empirical investigation, the authors' new claim that a strictly state-value-function baseline is a strength rather than a weakness cannot be evaluated. This may be the case, and I would welcome some set of experiments that establish this empirical claim by comparing against state-action-dependent baselines. The authors appear to believe that state-action-dependent baselines are never effective in reducing variance, and this is perhaps the central error in the paper that should be addressed. See response (3). Were the authors to fix this, they would necessarily compare against state-action-dependent baselines, which would be of great value for the community at large in settling this open issue.\\n\\n(3) Action-dependent baselines have not been shown to be ineffective. I wish to strongly emphasize that this is not the conclusion of the Mirage paper, and the claim repeated in the authors' response (3) has not been validated empirically or analytically, and does not represent the state of variance reduction in reinforcement learning as of this note. I repeat a few key arguments from the Mirage paper in an attempt to dispel the authors' repeated misinterpretation of the paper.\\n\\nThe variance of the policy gradient estimator, subject to a baseline \\\"phi,\\\" is decomposed using the Law of Total Variance in Eq (3) of the Mirage paper. This decomposition identifies a non-zero contribution from \\\"phi(a,s)\\\", the (adaptive or non-adaptive) baseline. The Mirage paper analyzes under what conditions such a contribution is expected to be non-negligible. 
Quoting from the paper:\\n\\\"We expect this to be the case when single actions have a large effect on the overall discounted\\nreturn (e.g., in a Cliffworld domain, where a single action could cause the agent to fall of the cliff and suffer a large negative reward).\\\"\\nPlease see Sec. 3, \\\"Policy Gradient Variance Decomposition\\\" of the Mirage paper for further details.\\nThe Mirage paper does indeed cast reasonable doubt on subsets of a few papers' experiments, and shows that the strong claim, mistakenly made by these papers, that state-action-dependence is always required for an adaptive control variate to reduce variance over state dependence, is not true. \\n\\nIt should be clear from the discussion of the paper to this point that this does _not_ imply the even stronger claim in \\\"A Better Second Order Baseline\\\" that action dependence is never effective and should no longer be considered as a means to reduce variance from a practitioner's point of view. Such a misinterpretation should not be legitimized through publication, as it will muddy the waters in future research. I again urge the authors to remove this mistake from the paper.\\n\\n(4) I acknowledge the efforts of the authors to ensure that adequate background is provided for readers. This is a thorny issue, and it is difficult to balance in any work. Since this material represents a sizeable chunk of the paper and is nearly identical to existing published work, it leads me to lower the score for novelty of contribution simply by that fact. Perhaps the authors could have considered placing the extensive background materials in the appendix and instead summarizing them briefly in the body of the paper, leaving more room for discussion and experimental validation beyond the synthetic cases already studied in the DiCE paper.\\n\\n(5), (6) In my review I provided specific, objective criteria by which I have assessed the novelty of this paper: the lack of original written material, and the nearly identical experiments to the DiCE paper. As I noted in response (4) above, this reduces space for further analysis and experimentation.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ryxhB3CcK7 | Probabilistic Neural-Symbolic Models for Interpretable Visual Question Answering | [
"Ramakrishna Vedantam",
"Stefan Lee",
"Marcus Rohrbach",
"Dhruv Batra",
"Devi Parikh"
] | We propose a new class of probabilistic neural-symbolic models for visual question answering (VQA) that provide interpretable explanations of their decision making in the form of programs, given a small annotated set of human programs. The key idea of our approach is to learn a rich latent space which effectively propagates program annotations from known questions to novel questions. We do this by formalizing prior work on VQA, called module networks (Andreas, 2016) as discrete, structured, latent variable models on the joint distribution over questions and answers given images, and devise a procedure to train the model effectively. Our results on a dataset of compositional questions about SHAPES (Andreas, 2016) show that our model generates more interpretable programs and obtains better accuracy on VQA in the low-data regime than prior work. | [
"Neural-symbolic models",
"visual question answering",
"reasoning",
"interpretability",
"graphical models",
"variational inference"
] | https://openreview.net/pdf?id=ryxhB3CcK7 | https://openreview.net/forum?id=ryxhB3CcK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkgwNdTVV4",
"SyxvKhdxxN",
"rkgGCh_e1N",
"ryl1ZpPly4",
"BJxuJagkk4",
"HkgE_Wx1yE",
"rygqGjiRAm",
"B1xhUTsjRX",
"BkeK5LLq07",
"HkeMV8U907",
"HygagHLqRX",
"Hkl7MurqR7",
"ryxReMqxCX",
"SJxb4jA567",
"BygPBqabpm",
"Hkeq6bpbaQ",
"SkxggDsxam",
"ryeoWKdlpX",
"BkevSwdl6Q",
"HklnNrv53m",
"HJgdaqL5nQ",
"S1x1C3I7i7"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1549223983331,
1544748159192,
1543699658207,
1543695607109,
1543601376019,
1543598444232,
1543580434386,
1543384403788,
1543296657241,
1543296554102,
1543296245303,
1543292939099,
1542656501848,
1542282025390,
1541687871119,
1541685697678,
1541613288140,
1541601539385,
1541601086720,
1541203252448,
1541200575718,
1539693766599
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1578/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1578/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1578/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1578/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1578/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1578/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1578/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1578/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1578/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1578/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1578/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1578/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1578/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1578/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1578/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1578/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1578/AnonReviewer3"
]
],
"structured_content_str": [
"{\"comment\": \"The authors report their results on SHAPES dataset. In visual reasoning, however, the CLEVR dataset is a much more acceptable benchmark. Is there any specific reason that the authors don't use CLEVR dataset in spite of referring to all the papers in NMN series (Johnson et al., 2017; Hu et al., 2017; Andreas et al., 2016a) ?\", \"title\": \"Results on other datasets\"}",
"{\"metareview\": \"This paper proposes a latent variable approach to the neural module networks of Andreas et al, whereby the program determining the structure of a module network is a structured discrete latent variable. The authors explore inference mechanisms over such programs and evaluate them on SHAPES.\\n\\nThis paper may seem acceptable on the basis of its scores, but R1 (in particular) and R3 did a shambolic job of reviewing: their reviews are extremely short, and offer no substance to justify their scores. R2 has admirably engaged in discussion and upped their score to 6, but continue to find the paper fairly borderline, as do I. Weighing the reviews by the confidence I have in the reviewers based on their engagement, I would have to concur with R2 that this paper is very borderline. I like the core idea, but agree that the presentation of the inference techniques for V-NMN is complex and its presentation could stand to be significantly improved. I appreciate that the authors have made some updates on the basis of R2's feedback, but unfortunately due to the competitive nature of this year's ICLR and the number of acceptable paper, I cannot fully recommend acceptance at this time.\\n\\nAs a complete side note, it is surprising not to see the Kingma & Welling (2013) VAE paper cited here, given the topic.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Very borderline\"}",
"{\"title\": \"Thanks\", \"comment\": \"I'd like to thank the authors for the comments. I agree with your points. I think that the updated version is a good work and I give it a positive score. As I pointed out, I can see that this work proposes a probabilistic neural symbolic models to the field of VQA. The probabilistic formulation is popular in many fields in machine learning, which I do not see it as special compared to non-probabilistic models. I can not strongly support a work only because it is formulated as a probabilistic model. As I mentioned, I do not think that I am experienced enough to determine the impact of this probabilistic model to the field of VQA, so I may underestimate the importance.\\n\\nI also think that a work should be evaluated from different aspects. As I mentioned, I focus more on technical section compared to other reviewers. I'd like to clarify again that my creativity and insight comments are about technical section. \\n\\nThank the authors for detailed replies to my concerns. I think that the updated version is good and I give this work a positive rating. We both make some clarification. I do not see a contradiction of points between the authors and me which we need to clarify further. I think that our discussion is clear for the AC. As the authors pointed out several times, how we view the importance of a probabilistic model to the field of VQA is crucial to determine the impact of this work. For this type of question, I can not change my support point to a strongly support point from a discussion with authors. As I mentioned, I do not think that I am experienced enough to determine the impact of this probabilistic model to the field of VQA. For technical section and experiment section, I still think that they are ordinary level to me.\\n\\nThank the authors. AC asked me to clarify my review. If we do not see a need to clarify further, we can leave it to AC and see if the AC still have questions. I think that our discussion is clear and we've discussed a lot of details about this work. I appreciate that.\"}",
"{\"title\": \"Clarifying perspective on insight/novelty\", \"comment\": \"We are glad R2 found the updated version of the paper more clear, and would like to thank the reviewer for prompt responses. As suggested by R2, we will release the code for our paper, along with all the settings to recreate the experiments on our github, and will add additional technical details of the REINFORCE estimator.\\n\\nAdditionally, we would like to clarify for R2 what we think are the novel and original aspects of this work. While warm starting certain terms from previous stages is common practice, we still believe our key insight is not in terms of showing that warm-starts work, but in terms of realizing that one way to better capture the intent of a program specification is to model a stochastic latent space. This leads to better sharing of statistics across different questions and leads to a more meaningful latent program space. While this is well known for continuous valued latent variable (variational autoencoder style) models, this is relatively underexplored for the discrete, sequential program case.\\n\\nOverall, we believe our creativity and insight are not in terms of the mechanics or novelty of specific steps we undertook, but in terms of taking a concrete step towards probabilistic neural symbolic models which share statistics meaningfully in a latent space, and learn how to parse questions into programs as well as learn to execute programs using neural modules.\\n\\nA lot of important, open questions traditionally in AI have been in terms of representation learning, and modeling systematicity and compositional generalization [A], and we believe the line of work on neural-symbolic models (including ours as well as relevant works like [B, C]) are important steps towards solving these challenges.\\n\\nReferences\\n[A]: Lake, Brenden M., and Marco Baroni. 2017. \\u201cGeneralization without Systematicity: On the Compositional Skills of Sequence-to-Sequence Recurrent Networks.\\u201d arXiv [cs.CL]. arXiv. http://arxiv.org/abs/1711.00350.\\n[B]: Yi, Kexin, Jiajun Wu, Chuang Gan, Antonio Torralba, Pushmeet Kohli, and Joshua B. Tenenbaum. 2018. \\u201cNeural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding.\\u201d arXiv [cs.AI]. arXiv. http://arxiv.org/abs/1810.02338.\\n[C]: Evans, Richard, David Saxton, David Amos, Pushmeet Kohli, and Edward Grefenstette. 2018. \\u201cCan Neural Networks Understand Logical Entailment?\\u201d arXiv [cs.NE]. arXiv. http://arxiv.org/abs/1802.08535.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you for clarifying, Reviewer 2. I will leave it to the authors to respond to this comment if they see fit.\\n\\nAC\"}",
"{\"title\": \"Simple reply to AC\", \"comment\": \"I am positive for this work after updated. I think that the probabilistic formulation of V-NMN is new. Technically, the authors need to solve a complex objective. As I mentioned in my above comments, I can see it is a hard question and the authors made lots of efforts to solve it. I think that the updated version is good now with more details where the original version is too descriptive. I do not think that the technical part is very impressive for me to give a higher score. It is common to simplify an complex objective to obtain \\\"warm start\\\" for some parameters. Another important part in the optimization is the KL divergence term. As I mentioned in my review, the authors used the existing score function estimator (Glynn, 1990) or REINFORCE (Williams, 1992). This part is not introduced enough in the paper, that is also the reason why I give a suggestion. If the authors can show me creativity, theoretical insight or illuminating explanation here, I may update my score. The reason why I point out \\\"warm start\\\" and KL divergence is because they are the key in the technical difficulty.\\n\\nFor me, as I mention above, I think that the technical part is ordinary level since there is no enough creativity or theoretical insight. The experimental section is also an ordinary level, which just compares V-NMN and NMN on the SHAPES dataset. That is the reason why I give it a score 6. This paper also has a value that it provides a probabilistic formulation V-NMN to the VQA field. I am not experienced to judge the impact of a probabilistic model to this field. That is the other part which I may update my score if I see super-positive comments. I think that the technical part is relevant to my experience and I read it very carefully, and that is the reason why my score is more conservative.\"}",
"{\"title\": \"Please clarify your objection\", \"comment\": \"I do not find the following statement particularly clear: \\\"On the other aspect, I am not very impressed by the technical part regarding creativity and insight. The experiment section is ordinary level. Overall, I think that the updated version is a good work and I updated my rating.\\\"\\n\\nPlease explain what you mean, as it is not obvious to me which part of your review underpins your assigning a score of 6. You are welcome to stand by your assessment, but it must be justified.\"}",
"{\"title\": \"The presentation in the updated version is better\", \"comment\": \"Thanks. I appreciate that the authors made improvement on the presentation based on reviews and provided detailed replies to my questions. The presentation in the updated version is better. As I understand, the paper used the existing score function estimator (Glynn, 1990) or REINFORCE (Williams, 1992) to solve the complex optimization part involving the KL divergence term. The optimization objective Eq. (6) is simplified to get the \\\"warm starts\\\" for relations between question x and program z, referred as question coding and for relation between program z, image i and answer a, referred as module training. For experiments, the authors compared V-NMN and NMN on the SHAPES dataset. The proposed method V-NMN is a probabilistic formulation. As I understand, the underlying graphical model provides interpretability for V-NMN. For the updated version, I think that the presentation is better, the probabilistic formulation V-NMN is interesting and the authors made lots of efforts to solve the optimization problem. On the other aspect, I am not very impressed by the technical part regarding creativity and insight. The experiment section is ordinary level. Overall, I think that the updated version is a good work and I updated my rating.\\n\\nI have a suggestion for the authors. It is nice to have a detailed introduction of the score function estimator (Glynn, 1990) or REINFORCE (Williams, 1992) and how they lead to the gradients updates, the step 3 and the step 10 in Algorithm 1 in Appendix. For the relation functions, such as those sequence to sequence models, most of them are discussed descriptively in the paper. It is nice to make the code open-source later or have mathematical discussion for them in the Appendix for reproducibility.\"}",
"{\"title\": \"Addressing R2's concerns [Part 2/2]\", \"comment\": \"----------------------\\n[R2] Why the objective (1) is hard to train but objective (4) is possible to train?\\n\\nObjective (4) is Objective (6) in the updated version, and we refer to that in the discussion below.\\n\\nAs mentioned in the submission (Page. 4, Sec. 2.1), objective (6) is possible to train because it uses \\u2018warm-starts\\u2019 from the other two stages of training (question coding, Eqn. 1, and module training, Eqn. 5 respectively). We provide further intuition for why objective (6) is inherently difficult to train: essentially the parameterization of p(a|i,z) is done by assembling neural module networks on the fly based on the predicted program (z), which is then trained using SGD. Thus, the optimization landscape is in some sense discontinuous in the parameters of the modules (since a different set of modules, with different parameters, could be chosen based on the program). Hence, optimizing Eqn. 6 from scratch is hard (without a good inference network q(z| x) and a good parameterization/ initialization of p(a|i,z)).\\n\\n----------------------\\n[R2] Other related work:\\n1] Interpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining (Zhang et al. 2018)\\n-- Our work focuses on a different notion of interpretability for VQA compared to Zhang et.al. While we are interested in a notion of interpretability that preserves a notion/ syntactic specification of `how' to answer a question, this paper is interested in grounding the answer into appropriate regions in the image. This is an orthogonal notion of interpretability compared to what we are mainly interested in; in the sense that we are explicitly interested in question answering via human-interpretable programs, while Zhang et.al. is interested in grounding answers into relevant regions in an image.\\n\\n2] Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding (Yi et al. 2018).\\n-- This is certainly very relevant work, thank you for the pointer. Conceptually, at a high level, the goals of this work and ours are similar: both the approaches want to do question answering with limited question program supervision. While Yi et al. seem to approach this problem by simplifying the p(a|i,z) mapping (in context of our model) by first converting the image into a symbolic table, we approach this goal by modeling a stochastic latent space. In some sense Yi et al., and our approaches are orthogonal, as one can use our probabilistic latent space in conjunction with this work., and thus we believe our approach is of independent interest.\\n\\nFurther, our approach is addressing the full complexity of learning such neural-symbolic models in an end to end manner, where program execution is being learned in conjunction with parsing questions into programs. This is arguably more general, as parsing images can into tables need not be a sufficient representation for visual recognition across different settings. We have added a discussion on the differences to Yi et.al in related work.\"}",
"{\"title\": \"Addressing R2's concerns [Part 1/2]\", \"comment\": \"We reply to more specific concerns from R2 below, and hope to convince them that in light of the justifications (already in the paper and below) the proposed method is not heuristic and has sufficient clarity.\\n\\n----------------------\\n[R2] Besides accuracy improvement, is there any other benefit by using V-NMN compared to NMN?\\n\\nAs mentioned in the paper, and pointed out by R1, V-NMN is not only more accurate but generates the right answers for the ``right\\u2019\\u2019 reasons by providing more correct program explanations for questions. This makes V-NMN more interpretable, which is one of our key stated goals. In general, interpretability is important for allowing humans to build trust to make actionable decisions from outputs generated by machines.\\n\\n----------------------\\n[R2] How do the authors design the prior distribution p(z) and the variational distribution q_\\\\phi(z|x), and how do the authors optimize the KL(q_\\\\phi(z|x), p(z)) term? [..]\\n\\nThe appendix in the submitted version (Page. 14 in the revision) describes how we parameterize the variational distributions and the priors. Further, we have added text to the main paper (Page. 3, learning) to clarify how we parameterize prior and posterior distributions (LSTM recurrent neural network based sequence models), and highlight existing text in the paper (Page. 5, before Eqn. 4) and Algorithm 1, step 10 explaining how the KL divergence is optimized.\\n\\n----------------------\\n[R2] If the readers want to understand why it is good to relax \\\\beta to < 1, they need to check Alemi et al. (2018), which is the only information the authors provide.\\n\\nSince we directly use the results from Alemi et.al, the initial version did not have further justifications. However, in light of the reviewer\\u2019s concern, we have added text to the paper (Page. 4 Sec. 2.1) explaining further why one needs to set \\\\beta < 1. \\nEssentially, Alemi et.al. identify that the ELBO is comprised of the negative log-marginal likelihood term D, and the KL divergence term R, i.e. ELBO = -D -R. Further, they show the mutual information between the data \\u201cx\\u201d and the latent variable \\u201cz\\u201d, is bounded below by H-D and above by R (where H is a constant). Since the ELBO is equal to -D-R, the value of the ELBO on its own does not tell us about the mutual information between the observations and the latents, as different models (with say architectural differences) can achieve different D and R values for the same ELBO. Thus, Alemi et.al. prescribe setting $\\\\beta<1$ for architectures with an auto-decoding behavior, so that we can get higher R values (by emphasizing on minimizing it less), pushing up the upper bound on mutual information I(x, z) achieved by the model\\n\\n----------------------\\n[R2] Could the authors provide technical details for this \\\"argmax decoded programs\\\"?\\n\\nWe perform beam search to get an approximate solution for the argmax program given a question, as is standard practice [A]. This detail is added to Sec. 2.1. We did beam search instead of sampling since sequence models are known to suffer from a distributional mismatch between training and sampling [B],\\nmaking beam search a conservative (and standard choice) in the literature for inference in sequence models [B]. For the module training stage, we find this leads to a good warm-start for the full objective (Eqn. 6) (which is optimized via. 
sampling).\\n\\nWe also performed an experiment using sampling instead of beam search. In low-supervision settings, we find sampling leads to a drop in performance: with 10% supervision, accuracy on module training drops from 81.34 (+- 8.61) (with beam search) to 76.31 (+- 8.41) (with sampling) on validation. The performance with more supervision remains the same. We have added this finding to the paper (Page. 9, Results section).\\n\\n----------------------\\n[R2] \\\\gamma. The authors present the reason for that is because it is similar as Vedantam et al. (2018), which again totally points readers to check other references when the relaxation seems to have issue\\n\\nWe use values of $\\\\gamma > 1$ (see Appendix, and Page. 7) and thus for the case where $\\\\beta > 1$ this still corresponds to a valid lower bound on the ELBO (for discrete-valued probability distributions, which is the case for answers in VQA). Thus, as such, we believe the relaxation does not have any issues (in addition to the justification already presented in the paper based on Vedantam et.al.).\\n\\n----------------------\\n[R2] For objective (4), is \\\\beta >=1 or < 1? \\n\\n\\\\beta is set to 0.1 for all three stages, as clarified by the Algorithm box in the submitted version.\\n\\nReferences\\n[A]: Vinyals, Oriol, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2014. \\u201cShow and Tell: A Neural Image Caption Generator.\\u201d arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1411.4555.\\n[B]: Bengio, Samy, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. 2015. \\u201cScheduled Sampling for Sequence Prediction with Recurrent Neural Networks.\\u201d arXiv [cs.LG]. arXiv. http://arxiv.org/abs/1506.03099.\"}",
"{\"title\": \"Updates to paper addressing R2's concerns, specific replies to R2 and Anon.\", \"comment\": \"We thank the reviewers for the detailed comments and questions, and are encouraged reviewers found the paper well written [R3], our approach worth studying [R3] and that we address the problem well [R1]. In the revised version, we have highlighted the sections in magenta which address the concerns of R2 but have not changed since the initial submission, for ease of reference. Changes we made since the initial submission are marked in blue.\\n\\nPlease note that whenever we refer to text in the paper in this discussion, we are referring to the updated version of the paper (and not the submitted version, although we might point to text explicitly present in the submitted version based on the above color scheme).\\n\\nTo reiterate, the goal of this work is not just to get higher empirical performance on VQA. Instead, our aim is to augment an existing class of techniques -- that has been shown to have desirable properties like interpretability and compositionality (Johnson et.al., Hu et.al.) -- with a probabilistic treatment. Concretely and in the short term, this results in higher performance for capturing the intent of human program specifications better (via. better semi-supervised learning), but arguably equally importantly, in the longer term, this is a framework for building probabilistic, neural-symbolic models. Note that neural-symbolic models bring together the power of deep representation learning with the systematic generalization capabilities of symbolic reasoning, promising the best of both worlds. Moreover, probabilistic tools provide a systematic and flexible framework for modeling and inference in general. With these high-level goals, we generally restrict the comparison to previous VQA approaches which use explicit program representations and learn to execute them. \\n\\nComments addressing specific issues with Sec. 2.1 are in the replies to R2 and more specific comparisons to closely related work follow in replies to R2 and the Anonymous comment.\\nWe thank R3 for pointing out typos other writing-related issues, which we address in the updated version.\"}",
"{\"title\": \"Relevant Work, Updates to paper to discuss differences.\", \"comment\": \"Thank you for pointing us to this highly relevant work! The approach is indeed quite relevant but has some key differences which we highlight now in the updated draft (in related work).\\n\\nIn Yin et.al., while the programs are modeled as a latent variable, the model does not capture ``how'' to execute the programs that it generates, and indeed the tasks considered only have a notion of ``parsing\\u2019\\u2019 into programs but not program execution. More specifically, the model presented in Yin et.al. is a unimodal model with a structured latent space, where the observed modality is the raw text/ question. However, in our model, we have a second modality which is the output of what gets executed when the program runs, and we capture both jointly in our model. Thus we argue that we address probabilistic neural-symbolic learning in a more general setting: where one has to parse a question into programs *as well as* learn to execute them by training neural modules.\\nHowever, the key idea of a tree-structured syntactic latent space to represent programs from the work is very interesting and would be relevant to use in our model as well.\"}",
"{\"comment\": \"This is a nice submission of approaching visual question answering using a probabilistic neural-symbolic model with discrete latent program space. However, there has been prior work on probabilistic neural-symbolic models with a latent program space (Yin et al., 2018), which seems to be one of the major contributions claimed in this submission (at least from the TL;DR line :)) I was wondering if the authors could explain the difference between V-NMN and Yin et al., which would better substantiate the novelty of this work compared with Yin et al.\", \"reference\": \"[1] Pengcheng Yin, Chunting Zhou, Junxian He, Graham Neubig. StructVAE: Tree-structured Latent Variable Models for Semi-supervised Semantic Parsing. ACL 2018.\", \"title\": \"Prior work on VAEs with discrete latent program space\"}",
"{\"title\": \"Urgent need for detail\", \"comment\": \"Can I please ask Reviewer 1 to, with due urgency, expand upon their review and/or comment upon those of the other reviewers, so as to proffer an appropriate defence of their recommendation in favour of the paper DURING the discussion period.\"}",
"{\"title\": \"As regards Section 2.1\", \"comment\": \"I have found section 2.1 good enough though it is more descriptive than mathematically detailed. However, comments of Rev #2 are right so I can agree with them, maybe authors could add such details in the appendix or in the remaining half page depending on how long would be the discussion.\\nI still think that the document is worthy in this version. If the authors manage to add a good response to the Rev #2 comments, my current score will at least be confirmed if not increased.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thanks for providing an expanded review so quickly. I'm sure the authors will find this helpful for the purpose of this discussion period.\"}",
"{\"title\": \"Provide more details based on AC's comments\", \"comment\": \"I provide more details for my reviews based on AC's comments.\\n\\nFirst, I agree with the other two reviewers that the authors present the problem and their idea well and show V-NMN outperforms NMN in experiments. My concerns, however, are about the technical section 2.1, which I do not see other two reviewers comment on that.\\n\\nI think that section 2.1 is hard to follow because the presentation is vague and lack details. There are several places pointing the readers to check other references or briefly summarizing the ideas where I think technical details should be provided. \\n\\nTechnically, I have concerns about how the authors solve the challenges they mentioned in section 2.1. In the question coding stage, the goal is to learn an informative mapping from questions to programs, which based on my understanding, is q_\\\\phi(z|x). As the authors mention, the prior distribution for p(z) is not a Gaussian distribution. How do the authors design the prior distribution p(z) and the variational distribution q_\\\\phi(z|x), and how do the authors optimize the KL(q_\\\\phi(z|x), p(z)) term? I agree with the authors that it is not an easy question. The presentation from the authors for this hard question is vague and lack technical details. Based on my understanding, I also think the solution is heuristic.\\n\\nSecond, in order to optimize q_\\\\phi(z|x) in objective (1), the E_{z\\\\sim q_\\\\phi(z|x)}[\\\\log p_{\\\\phi_z}(a|z,i)] term has an effect as well. In the question coding stage, this term is totally removed. In order for the objective (1) to be a valid ELBO for log p(x, a|i), \\\\beta need to be >=1. As the authors pointed out, they need to relax \\\\beta to < 1 and it is no longer an ELBO. If the readers want to understand why it is good to relax \\\\beta to < 1, they need to check Alemi et al. (2018), which is the only information the authors provide.\\n\\nIn the module training stage, the goal is to learn the NMN for question answering with the objective (3) E_{z\\\\sim q_\\\\phi(z|x)}[\\\\log p_{\\\\phi_z}(a|z,i)]. After the authors obtain an estimate q_\\\\phi(z|x) in question coding stage, it is possible to optimize this objective by sampling on q(z|x). Instead of using q(z|x), the authors presents \\\"In practice, we take argmax decoded programs from q_\\\\phi(z|x) to simplify training, but perform sampling during joint training in the next stage.\\\" Could the authors provide technical details for this \\\"argmax decoded programs\\\"?\\n\\nIn the joint train stage, the authors have an objective (4), which is different from objective (1) with a scalar \\\\gamma. The authors presents the reason for that is because it is similar as Vedantam et al. (2018), which again totally points readers to check other references when the relaxation seems to have issue. For objective (4), is \\\\beta >=1 or < 1? Why the objective (1) is hard to train but objective (4) is possible to train? How do the question coding stage and module training stage helps to solve the challenges for objective (1) so that objective (4) is easier to train?\\n\\nOverall, my points for the technical section 2.1 is that at least the presentation needs to improve with more technical details so that readers can see how the authors solved the challenges they proposed. Also, I think the solution proposed by the authors is a heuristic way which is not clear how it solves those challenges. 
For other reviewers, do you think that section 2.1 need to provide more technical details as well?\\n\\nFor a heuristic method, if the authors show promising experimental results, I value this kind of work as well. My concerns are that the literature review is focused on restricted related works without a comprehensive introduction of VQA works, for example, \\n\\nInterpretable Visual Question Answering by Visual Grounding from Attention Supervision Mining (Zhang et al. 2018)\", \"neural_symbolic_vqa\": \"Disentangling Reasoning from Vision and Language Understanding (Yi et al. 2018).\\n\\nThe authors only compares V-NMN with NMN in the experiments. Could the authors answer why it is enough to only compare with NMN without comparing other VQA methods? Besides accuracy improvement, is there any other benefit by using V-NMN compared to NMN?\\n\\nI am open to feedbacks and I will update my score if the authors can handle my concerns.\"}",
"{\"title\": \"A bit more detail\", \"comment\": \"Thank you for your review. It is quite short, so it would be good to expand upon what you think makes this paper, perhaps in the form of a discussion with Reviewer 2, whose score is significantly different from yours.\"}",
"{\"title\": \"Please consider other reviews\", \"comment\": \"Reviewer 2's review is very short. I do not see much substance or argument supporting the quite strict score (4) in favour of rejecting the paper. Regardless of the fact that it does not harmonise with the assessments provided by the other reviewers, it is not appropriate to make a recommendation of this nature without giving the authors a clear indication of what needs improving in the paper. If the true failing of this paper is that the technical part is hard to follow, is this due to poor presentation, or due to concerns with the actual applicability or scalability of the method proposed?\\n\\nPlease take a moment to read the other reviews, in particular Reviewer 3, as well as the author response(s), if and when they are made, and consider whether you can flesh out the concerns underlying your assessment in a way which the authors can respond to, rebut, or take into account when revising their paper.\"}",
"{\"title\": \"Nice paper, well written and through evaluation\", \"review\": \"This paper proposes a discrete, structured latent variable model for visual question answering that involves compositional generalization and reasoning. In comparison to the existing approach, this paper well addressed the challenge of learning discrete latent variables in the presence of uncertainty. The results show a significant gain in performance as well as the capability of the model to generalize composition program to unseen data effectively. The qualitative analysis shows that the proposed model not only get the correct answer but also the correct behavior that leads to the answer.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Need improvement on the presentation\", \"review\": \"This paper proposes a variational neural module networks (V-NMN), which compared to neural module networks (NMNs), is formed in a probabilistic aspect. The authors compare the performance of V-NMN and NMN on SHAPES dataset.\\n\\nI find the technical part is hard to follow. To optimize the objective function, it involves many challenges. The authors described those challenges as well. It is not clear to me how those challenges are solved in section 2.1. I think that the presentation in section 2.1 needs to provide more details.\\n\\nIn the experiment, the authors only compare their work with NMNs without comparing it with other approaches for visual question answering. Besides accuracy, does V-NMN provide new applications that NMNs and other VQA models is not applicable because of the probabilistic formulation?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A good piece of work\", \"review\": \"The paper presents a new approach for performing visual query answering. System responses are programs that can explain the truth value of the answer.\\nIn the paper, both the problems of learning and inference are taken into account.\\nTo answer queries, this system takes as input an image and a question, which is a set of word from a given vocabulary. Then the question is modeled by a plan (a series of operation that must be performed to answer the query). Finally, the found answer with the plan are returned. To learn the parameters of the model, the examples are tuples composed by an image, a question, the answer, and the program.\\nExperiments performed on the SHAPES dataset show good performance compared to neural model networks by Johnson et al.\\n\\nThe paper is well written and clear. I have not found any specific problems in the paper, the quality is high and the approach seems to me to be new and worth studying.\\nThe discussion on related work seems to be good, as well as the discussion on the results of the tests conducted.\\n\\nOn page 5, in equation (3) it seems to me that something is missing in J. Moreover, In Algorithm 1, in lines 4 and 9, the B after the arrow should be written in italic.\\n\\nOverall, there are several typos that must be corrected. I suggest a double check of the English. For example:\\n- page 3, \\\"as modeling *uncertaintly* should...\\\"\\n- page 6, \\\"Given this goal, we *consrtuct* a latent *varible* ...\\\"\\n- page 8, in paragraph \\\"Effect of optimizing the true ELBO\\\", the word \\\"that\\\" is repeated twice in the 3rd row\\n- page 13, \\\"for the\\\" repeated twice in \\\"Moving average baseline\\\" paragraph. Also, in the last line of this paragraph, the sentence seems incomplete.\\n\\n\\n\\nPros\\n- The results are convincing\\n- The approach is clearly explained\\n\\nCons\\n- English must be checked\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
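The record above repeatedly discusses optimizing a beta-weighted ELBO over a discrete, sequential program variable with the score function estimator (Glynn, 1990) / REINFORCE (Williams, 1992). The following is a minimal, hypothetical sketch of a one-sample surrogate loss for that setup; it assumes PyTorch, and every name, shape, and the toy likelihood are illustrative assumptions, not the authors' implementation.

```python
import torch

def beta_elbo_reinforce_loss(logits_q, logits_p, log_lik_fn, beta=0.1, baseline=0.0):
    """One-sample surrogate loss whose gradient matches the score-function estimator.

    logits_q:   (T, V) per-step posterior logits for q(z | x)
    logits_p:   (T, V) per-step prior logits for p(z)
    log_lik_fn: maps a sampled program z (LongTensor of shape (T,)) to a scalar
                tensor holding log p(a | z, i)
    """
    q = torch.distributions.Categorical(logits=logits_q)
    z = q.sample()                      # one discrete program, one token per step
    log_q = q.log_prob(z).sum()         # log q(z | x) for the sampled program
    log_p = torch.distributions.Categorical(logits=logits_p).log_prob(z).sum()
    log_lik = log_lik_fn(z)             # answer likelihood given the program

    # Single-sample estimate of the beta-weighted ELBO:
    #   log p(a | z, i) - beta * (log q(z | x) - log p(z))
    elbo = log_lik - beta * (log_q - log_p)

    # REINFORCE surrogate: the first term yields the score-function gradient
    # (elbo - baseline) * grad log_q; the second term keeps the gradients that
    # flow directly through elbo (e.g., into module/likelihood parameters).
    surrogate = (elbo.detach() - baseline) * log_q + elbo
    return -surrogate

# Tiny smoke test with a uniform prior and a made-up likelihood.
T, V = 3, 5
logits_q = torch.randn(T, V, requires_grad=True)
loss = beta_elbo_reinforce_loss(logits_q, torch.zeros(T, V), lambda z: -z.float().sum())
loss.backward()   # gradients reach logits_q through both surrogate terms
```

In practice, a moving-average baseline (as the discussion above mentions for Algorithm 1) would replace the constant `baseline` to reduce the variance of the score-function term.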
|
rJl3S2A9t7 | Policy Optimization via Stochastic Recursive Gradient Algorithm | [
"Huizhuo Yuan",
"Chris Junchi Li",
"Yuhao Tang",
"Yuren Zhou"
] | In this paper, we propose the StochAstic Recursive grAdient Policy Optimization (SARAPO) algorithm, which is a novel variance reduction method for Trust Region Policy Optimization (TRPO). The algorithm incorporates the StochAstic Recursive grAdient algoritHm (SARAH) into the TRPO framework. Compared with the existing Stochastic Variance Reduced Policy Optimization (SVRPO), our algorithm is more stable in variance. Furthermore, by theoretically analyzing the ordinary and stochastic differential equations (ODE/SDE) of SARAH, we characterize its convergence properties and stability. Our experiments demonstrate its performance on a variety of benchmark tasks. We show that our algorithm achieves greater improvement in each iteration and matches or even outperforms SVRPO and TRPO.
| [
"reinforcement learning",
"policy gradient",
"variance reduction",
"stochastic recursive gradient algorithm"
] | https://openreview.net/pdf?id=rJl3S2A9t7 | https://openreview.net/forum?id=rJl3S2A9t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hyett17leV",
"SkxZyFDtA7",
"ByeyPs1YCm",
"r1eYIN9phX",
"r1eJPAn52Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544724353300,
1543235800756,
1543203671203,
1541411921186,
1541226070551
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1577/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1577/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1577/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1577/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1577/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The use of SARAH for Policy optimization in RL is novel, with some theoretical analysis to demonstrate convergence of this approach. However, concerns were raised in terms of clarity of the paper, empirical results and in placement of this theory relative to a previous variance reduction algorithm called SVRPG. The author response similarly did not explain the novelty of the theory beyond the convergence results of what was given by the paper on SVRPG. By incorporating some of the reviewer comments, this paper could be a meaningful and useful contribution.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea that needs a bit more work\"}",
"{\"title\": \"Respond to all reviewers\", \"comment\": \"We sincerely thank all reviewers for the valuable remarks!\\n\\nWe would like to emphasize that our paper is not an incremental one. We believe that variance reduced (VR) gradient methods (SARAH and SVRG) serve as potential alternatives to incorporate into the TRPO framework [Xu 2017], which might significantly outperform the PG-type algorithms accompanied by importance sampling [Papini 2018]. We aimed to provide the first theoretical analysis in order to support the experiments of VR gradient methods [Xu 2017], and the differential equation approximation for VR is a novel and powerful tool to analyze such.\\n\\nDespite saying that, we agree that our experiments might not be sufficient to convince some of our proposal. This is partly due to the limited time for running large-scale experiments. Following reviewers' remarks, will try to work more smaller test experiments to support our proposal and fix all the clarity/presentation issues and typos in our next submission.\"}",
"{\"title\": \"TRPO with SARAH optimization: great idea, but inconclusive results\", \"review\": \"This paper investigates how the SARAH stochastic recursive gradient algorithm can be applied to Trust Region Policy Optimization. The authors analyze the SARAH algorithm using its approximating ordinary and stochastic differential equations. The empirical performance of SARAPO is then compared with SVRPO and TRPO on several benchmark problems.\\n\\nAlthough the idea of applying SARAH to reduce the variance of gradient estimates in policy gradient algorithms is interesting and potentially quite significant (variance of gradient estimates is a major problem in policy gradient algorithms), I recommend rejecting this paper at the present time due to issues with clarity and quality, particularly of the experiments.\\n\\nNot enough of the possible values for experimental settings were tested to say anything conclusive about the performance of the algorithms being compared. For the values that were tested, no measures of the variability of performance or statistical significance of the results were given. This is important because the performance of the algorithms is similar on many of the environments, and it is important to know if the improved performance of SARAPO observed on some of the environments is statistically significant or simply due to the small sample size.\\n\\nThe paper also needs improvements in clarity. Grammatical errors and sentence fragments make it challenging to understand at times. Section 2.3 seemed very brief, and did not include enough discussion of design decisions made in the algorithm. For example, the authors say ``\\\"the Fisher Information Matrix can be approximated by Hessian matrix of the KL divergence when the current distribution exactly matches that of the base distribution\\\" but then suggest using the Hessian of the KL of the old parameters and the new parameters which are not the same. What are the consequences of this approximation? Are there alternative approaches?\\n\\nThe analysis in section 3 is interesting, but the technique has been applied to SGD before and the results only seem to confirm findings from the original SARAH paper.\\n\\nTo improve the paper, I would suggest moving section 3 to an appendix and using the extra space to further explain details and conduct additional simpler experiments. Additional experiments on simpler environments and policy gradient algorithms (REINFORCE, REINFORCE with baseline) would allow the authors to try more possible values for experimental settings and do enough runs to obtain more conclusive results about performance. Then the authors can present their results applying SARAH to TRPO with some measure of statistical significance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Sarah applied to policy optimization with comparable performance to SVRG\", \"review\": \"The paper extends Sarah to policy optimization with theoretical analysis and experimental study.\\n\\n1) The theoretical analysis under certain assumption seems novel. But the significance is unknown compared to similar analysis. \\n\\n2) The analysis demonstrates the advantage of Sarah over SVRG, as noted in Remark 1. It would be better to give explicit equations for SVRG in order for comparison.\\n\\n3) Experimental results seem to show empirically that the SARAH is only comparable to SVRG.\\n\\n4) Presentation needs to be improved.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Advantages of the proposed method over SVRG + policy gradient method are unclear.\", \"review\": \"This paper proposes a new policy gradient method for reinforcement learning.\\nThe method essentially combines SARAH and trust region method using Fisher information matrix.\\nThe effectiveness of the proposed method is verified in experiments.\\n\\nSARAH is a variance reduction method developed in stochastic optimization literature, which significantly accelerates convergence speed of stochastic gradient descent.\\nSince the policy gradient often suffers from high variance during the training, a combination with variance reduction methods is quite reasonable.\\nHowever, this work seems to be rather incremental compared to a previous method adopting another variance reduction method (SVRG) [Xu+2017, Papini+2018].\\nMoreover, the advantage of the proposed method over SVRPG (SVRG + policy gradient) is unclear both theoretically and experimentally.\\n[Papini+2018] provided a convergence guarantee with its convergence rate, while this paper does not give such a result.\\nIt would be nice if the authors could clarify theoretical advantages over SVRPG.\", \"minor_comment\": [\"The description of SVRG updates in page 2 is wrong.\", \"The notation of H in Section 3.1 (\\\"ODE analysis\\\") is not defined at this time.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
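The abstract and reviews above turn on SARAH's recursive gradient estimator and how it differs from SVRG. As a concrete reference point, here is a minimal NumPy sketch of SARAH for a generic finite-sum objective; it is an illustration under stated assumptions (a plain gradient step rather than the paper's TRPO trust-region machinery), and `grad_i`, the step size, and the loop lengths are placeholders.

```python
import numpy as np

def sarah(grad_i, n, w0, lr=0.1, inner_steps=50, rng=None):
    """SARAH for min_w (1/n) * sum_i f_i(w); grad_i(w, i) returns grad f_i(w)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    w_prev = np.asarray(w0, dtype=float).copy()
    # Outer step: a full gradient anchors the recursion.
    v = np.mean([grad_i(w_prev, i) for i in range(n)], axis=0)
    w = w_prev - lr * v
    for _ in range(inner_steps):
        i = int(rng.integers(n))
        # Recursive update: unlike SVRG, which re-anchors every stochastic
        # gradient at a stored snapshot, SARAH recurses on the previous
        # estimate v itself.
        v = grad_i(w, i) - grad_i(w_prev, i) + v
        w_prev, w = w, w - lr * v
    return w

# Example: least squares, f_i(w) = 0.5 * (x_i . w - y_i)^2.
X = np.random.default_rng(1).normal(size=(100, 5))
y = X @ np.ones(5)
w_hat = sarah(lambda w, i: (X[i] @ w - y[i]) * X[i], n=100, w0=np.zeros(5), lr=0.02)
```

The recursion on `v`, rather than on a fixed snapshot gradient, is exactly the property the reviews above contrast against SVRG/SVRPO when discussing variance behavior.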
|
Hkesr205t7 | Learning shared manifold representation of images and attributes for generalized zero-shot learning | [
"Masahiro Suzuki",
"Yusuke Iwasawa",
"Yutaka Matsuo"
] | Many zero-shot learning methods predict the labels of unseen images by learning relations between images and pre-defined class attributes. However, recent studies show that, under the more realistic generalized zero-shot learning (GZSL) scenario, these approaches severely suffer from biased prediction, i.e., their classifiers tend to predict all examples, from both seen and unseen classes, as one of the seen classes. The cause of this problem is that they cannot properly learn a mapping to a representation space that generalizes to unseen classes, since the training set does not include any unseen-class information. To solve this, we propose to learn a mapping that embeds both images and attributes into a shared representation space that generalizes even to unseen classes by interpolating from the information of seen classes, which we refer to as shared manifold learning. Furthermore, we propose modality-invariant variational autoencoders, which perform shared manifold learning by training variational autoencoders with both images and attributes as inputs. Empirical validation on well-known GZSL datasets shows that our method achieves significantly superior performance to existing relation-based studies. | [
"zero-shot learning",
"variational autoencoders"
] | https://openreview.net/pdf?id=Hkesr205t7 | https://openreview.net/forum?id=Hkesr205t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJlG73zxgN",
"ryx31P9PyV",
"r1xcP-e0Am",
"SJgxOVxoCm",
"rkeR7Nxi07",
"r1ge1qmYAm",
"ryxUSkCdRm",
"SklW-AE5nQ",
"r1xKsv3F2Q",
"S1g_k-jDhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544723482438,
1544165092414,
1543532898236,
1543337064009,
1543336997641,
1543219672459,
1543196477803,
1541193209390,
1541158816755,
1541021919859
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1576/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1576/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1576/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1576/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1576/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1576/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper addresses generalized zero shot learning (test data contains examples from both seen as well as unseen classes) and proposes to learn a shared representation of images and attributes via multimodal variational autoencoders.\", \"the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"(1) low technical contribution, i.e. the proposed multimodal VAE model is very similar to Vedantam et al (2017) as noted by R2, and to JMVAE model by Suzuki et al, 2016, as noted by R1. The authors clarified in their response that indeed VAE in Vedantam et al (2017) is similar, but it has been used for image synthesis and not classification/GZSL. (2) Empirical evaluations and setup are not convincing (R2) and not clear -- R3 has provided a very detailed review and a follow up discussion raising several important concerns such as (i) absence of a validation set to test generalization, (ii) the hyperparameters set up; (iii) not clear advantages of learning a joint model as opposed to unidirectional mappings (R1 also supports this claim). The authors partially addressed some of these concerns in their response, however more in-depth analysis and major revision is required to assess the benefits and feasibility of the proposed approach.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}",
"{\"title\": \"Thank you for your comment\", \"comment\": \"Thank you very much for your comment on our reply. We would like to respond to your questions as much as possible.\\n\\n>> Edits with respect to the GZSL-bias discussion\\nFirst of all, we apologize for having given you an answer that confused you. To determine the hyper-parameters (including the number of epochs), as you mentioned, we did use a part of the classes in the training set as validation data for \\\"zero-shot\\\" validation (this is written in our updated paper in detail, so please see section 5). Therefore, we would like to emphasize that hyper-parameters were not arbitrarily selected from the results on the test set. Furthermore, in our experiments, we confirmed that even if the number of epochs is large, the classification accuracy of our model does not decrease so badly (but I'm sorry that this paper includes only results up to 50 epochs).\\n\\nOn the other hand, *after* setting hyper-parameters, we used *all* of the training set for the training. In other words, in order to *monitor* whether the model is generalized during training, we did not prepare the validation data, which is what we wanted to answer in our previous reply and which is also done in other GZSL studies. Also please note that we cannot check whether the GZSL-specific biased problem is occurring during hyper-parameter setting and training due to \\\"zero-shot\\\" validation. It may be possible to monitor the biased problem during training by \\\"generalized zero-shot\\\" validation, i.e., by using part of all seen classes (other than validation classes) as validation data, too. However, this has a problem that the number of data for training is further reduced. In our study, we proposed to perform manifold learning as a way to prevent unseen classes from overlapping the seen classes in latent space. As you pointed out, there is no theoretical guarantee that this method always generalizes to unseen classes, but considering the above problems and our results, we believe that our method has effectiveness.\\n\\n>> Regarding advantages of learning a joint model as opposed to unidirectional mappings\\nThe problem of attribute space is that the placement within the space of the classes is pre-defined so that it may not be well separated between classes when the attribute representation is bad, which is also pointed out in Chao et al. (2016). On the other hand, since there is no bias due to such pre-defined in image representation, if manifold learning of images is properly performed, it can obtain a well-separated representation between classes. Furthermore, if attributes are also used as inputs, more separated representation might be obtained. We thought that if we could acquire such representation (shared manifold representation), we could map the unseen data to places that do not overlap with the training set (seen data) and solve the biased problem. However, this insight is empirical and, as you pointed out, we should have included the results of comparison with attribute space in the paper.\"}",
"{\"title\": \"Comments regarding the Author Response\", \"comment\": [\"Thanks to the authors for providing detailed comments and explanations wherever applicable and revising the paper to reflect the same as well. In light of the comments below (regarding author responses to review comments and clarifications), I am slightly skeptical of the results presented in the paper. As such, I will be sticking to my original rating for the paper.\", \"Edits with respect to the GZSL-bias discussion: Thanks for clarifying the seen-unseen bias issue in detail. I will merge 2 discussion points here and highlight what is missing. Firstly, the authors clarified that they did not hold out a set of seen classes as validation to test generalization during training the joint model. Assuming there is no access to the test-split during training, absence of a validation set implies the choice of hyper-parameters for training MIVAE was not done in a manner to test generalization. As such, an arbitrary choice of stopping training at 200 epochs across all experiments (assuming this is across datasets) is odd. The validation subset of seen classes in itself could be used to perform hyper-parameter sweeps -- the resulting values from which could be used to train the entire setup again on the entirety of the seen training set. Secondly, given this fact, it seems that no notion of generalization to an unseen split (even for the choice of hyper-params) was done and hence from Section 3.1 (and other sections) it is unclear to me how the \\u201cshared representation\\u201d from the proposed approach could help in alleviating the bias issue. In general, I am now curious about how the hyper-parameters were chosen -- I think this clarification is quite important. The paper the authors point to w.r.t. this issue does indeed use a validation split if I\\u2019m correct -- see Section 6.1, line - \\u201cThe hyper-parameters were chosen \\u2026 were used while training the model on complete data.\\u201d of https://arxiv.org/pdf/1712.03878.pdf.\", \"Regarding advantages of learning a joint model as opposed to unidirectional mappings: Thanks for responding to this comment in detail. However, the response still does not explicitly state why learning a joint model -- that allows one to do inference from attributes to images, images to attributes and latent variable to either modalities -- is better than just learning a single unidirectional mapping from attributes to images or images to attributes. I agree that the shared representation is richer -- but it is not convincing enough as to why is this needed in the context of GZSL in text? The authors should talk more about the experiment they refer to regarding inference in latent space regarding this.\"]}",
"{\"title\": \"Reply to Reviewer 3 (2/2)\", \"comment\": \">> - The authors should explicitly mention if they are using the proposed split throughout all baselines and approaches for GZSL evaluations.\\nFollowing your comment, we added a description about the split of datasets.\\n\\n>> - In section 5.2, the reasons in the 3rd paragraph...\\nWe think that it relates to the number of training data of each dataset. In order to learn the generalized relations of different modalities, a sufficient amount of data is needed. However, in SUN and CUB, there are not much training data compared with the number of their classes, so it is difficult to learn relations between modalities. Therefore, it is considered that the terms explicitly bringing the relation closer (equation 3) contributed to the improvement of their performance. On the other hand, since AWA and aPY have relatively sufficient data, it is considered that equation 2 could be properly learned without adding equation 3.\\n\\nIn any case, we believe that we need to further verify this phenomenon in more detail.\"}",
"{\"title\": \"Reply to Reviewer 3 (1/2)\", \"comment\": \"Thank you very much for positive comments and apologize for our late reply.\\n\\n>> While the paper overall does a good job of explaining the motivation as well as the approach, some of the sections (and sentences within) could be written better to express the point being made. \\nThank you for pointing it out. We have modified several sentences to make it as readable as possible throughout this paper.\\n\\n>> A minor correction. The paper claims...\\n>> Specifically, the first paragraph in the introduction seems to be structured more from a few-shot setting.\\n>> Similarly, the second paragraph in the introduction could be written more succinctly to express the point being made.\\n>> The sentences -- \\u201cMoreover, it is difficult\\u2026.widely available\\u201d -- are difficult to understand.\\n>> Tables 4 and 5 should be positioned after the references section.\\nThank you for pointing out in detail. These have been fixed in the current version, so we would be grateful if you could check them.\\n\\n>> As such, the authors should stress on the advantages learning a joint latent model over both modalities offers as opposed to unidirectional mappings while mentioning the above points.\\nFollowing your comment, we modified section 3.1 and added the advantages of learning on shared representation. Below we will briefly explain these advantages.\\n\\nFirstly, shared representation is rich compared to attribute representation because it integrates both image and attribute. In attribute space, the performance of GZSL is significantly reduced unless each class is properly represented as attributes in advance (Chao et al. 2016). On the other hand, shared representation is not only richer than a representation of attributes but also robust against it. Therefore, in the shared space, relations between modalities are learned more properly than attribute space.\\n\\nAnother advantage is that since shared space does not depend on input dimensions or representation, we can perform zero-shot learning with more complex and sophisticated inputs.\\n\\nIn this study, we did not monitor learning using validation data but used all of them as training data during training. To our knowledge, this is also done in other GZSL studies (Verma, V. Kumar, et al., 2018). This is because in GZSL the test data also includes the seen classes, so if we do not use a part of the seen classes for training, the performance of seen data in the test set might be decreased. In all experiments, the training is terminated with 200 epochs. \\n\\n>> - Learning the joint latent space for images and attributes has been referred to as learning a shared manifold in the paper...\\nIn this paper, we referred to learning the representation that generalizes to the unseen classes as \\\"manifold learning\\\". The reason for this is because we wanted to differentiate from learning relationships between both modalities. Moreover, \\\"shared manifold learning\\\" refers to learning to generalize both relationships between modalities and separation between classes. \\n\\nActually, if manifold learning is properly performed in the shared space, the position of the arbitrary unseen class in the shared space should be obtained by interpolating the representation of the seen classes. 
However, it may not be very accurate to use the word \\\"manifold learning\\\" in order to refer to such things.\\n\\n>> - During inference, the authors operate in the latent space...\\nAs explained above, since shared space is more rich representation than attribute space, we thought that we can obtain generalized relations in shared space with higher accuracy. In a simple experiment in ZSL, we confirmed that the case of shared space has a higher performance than that of the attribute space.\\n\\n>> - On page 4, regarding the term L_dist in the objective for MIVAE...\\nAs you pointed out, this was our mistake. In the current version, we modified this sentence.\\n\\n>> Section 5 experiments suggest the learning rate used in practice was 10^3. Assuming a typo, this should be presumably 10^-3.\\nThank you for pointing it out. We fixed 10^3 to 10^-3.\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Thank you very much for your valuable feedback and sorry for our late response.\\n\\n>> However, the idea of using multimodal VAE for ZSL isn't new or surprising and has been used in earlier papers too.\\nVZSL (Wang et al., 2017) cited in our paper is known as an example of a study applying VAE to ZSL, but to the best of our knowledge, there are not many studies applying multimodal VAE to ZSL.\\n\\n>> The proposed multimodal VAE model is very similar to the existing ones, such as Vedantam et al (2017), who proposed a broad framework with various types of regularizers in the multimodal VAE framework.\\nAs you pointed out, Vedantam et al. (2017) uses multimodal learning using two modalities, attributes and images, and generates unseen images from corresponding attributes. However, this work is not intended to solve the problem of zero-shot learning, because it does not predict the class labels of unseen classes.\\n\\nIn addition, rather than introducing a completely novel model, we showed that manifold learning in shared space using VAEs is effective to resolve a problem of relation-based GZSL. As written in our paper, the conventional relation-based GZSL had an inherent problem of failing to predict the unseen classes in the test data (please note that this problem only occurs in GZSL, where the test class contains the seen class). This is because the mapping to the shared space of the unseen classes does not generalize well and overlaps with the seen classes, which we call the biased problem. In this study, we showed that manifold learning using VAEs can appropriately place unseen classes in shared space by interpolating from seen classes.\\n\\nFrom the results in table 3, the accuracy of the proposed method was improved significantly in the unseen classes while the accuracy does not change very much in the seen classes, which means that that the proposed method resolves the biased problem directly. To our knowledge, there is no other study showing that this problem in relation-based is improved.\\n\\n>> The paper doesn't compare with several recent ZSL and GZSL approaches, some of which have reported accuracies that look much better than the accuracies achieved by the proposed method. \\nAs mentioned above, our study focused on the bias problem of relation-based GZSL and proposed MIVAE as a method to solve it, which is the main contribution of this paper. Therefore, in our experiments, we compared the proposed method with relation-based studies which have the biased problem in order to confirm whether the problem was resolved. This is the reason why we did not compare it to synthesis-based methods in this paper.\\n\\nFurthermore, as we wrote in the section of related works, synthesis-based needs to generate images from attributes, so it might be difficult to improve accuracy if attributes or images become complicated. On the other hand, the relation-based method (shared representation method in particular) allows us to select class-attributes closest to the given image in the space of shared representation, which means that we can perform ZSL without depending on the complexity of the input information. \\nOur method solved the inherent bias problem while maintaining this advantage of relation-based, so we believe that our method is highly extensible.\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"Thank you very much for informative comments and sorry for my late reply.\\n\\nAs you pointed out, in terms of learning the shared representation of different modalities, the proposed method is considered to be almost the same as existing relation-based GZSL. However, the most important difference is that our method performs manifold learning on the shared representation using VAEs.\\n\\nIn GZSL, the test data also includes the seen classes, so it is necessary for both examples of the seen classes and the unseen classes to be properly placed on the shared latent space. However, in the conventional relation-based method, they could not successfully map the examples of the unseen classes in test data to the shared representation because this mapping tends to degenerate or overlap with the seen classes. In other words, this representation did not \\\"generalize\\\" to the unseen classes. Therefore, even though the accuracy of the seen classes is high, the accuracy of the unseen classes results in very low.\\n\\nOn the other hand, MIVAE proposed in this paper performs manifold learning on shared representations that integrate the two modalities, which means that the position of the unseen classes in the latent space can be estimated by \\\"interpolating\\\" from the training data (seen classes). Therefore, the problem of degeneration and overlapping with seen class is resolved, which improves the accuracy of the unseen classes.\\n\\nMoreover, please note that in conventional ZSL, since the seen classes are not included in the test dataset, there is no problem that the mapping to the latent space overlaps with the seen classes at the testing time. Therefore, the performance of MIVAE does not differ much from the existing method in conventional ZSL.\\n\\nIn summary, this research resolves the problem that the mapping of the unseen classes in GZSL is not well arranged, by manifold learning on shared representation. As you pointed out, MIVAE itself is a straightforward extension of JMVAE, but we believe that our research has novelty in the sense that it first showed that the method using VAEs is effective in GZSL.\"}",
"{\"title\": \"application of multimodal VAE for zero shot learning\", \"review\": \"This paper proposes a multimodal VAE model for the problem of generalized zero shot learning (GZSL). In GZSL, the test classes can contain examples from both seen as well as unseen classes, and due to the bias of the model towards the seen classes, the standard GZSL approaches tend to predict the majority of the inputs to belong to seen classes. The paper proposes a multimodal VAE model to mitigate this issue where a shared manifold learning learn for the inputs and the class attribute vectors.\\n\\nThe problem of GZSL is indeed important. However, the idea of using multimodal VAE for ZSL isn't new or surprising and has been used in earlier papers too. In fact, multimodal VAEs are natural to apply for such problems. The proposed multimodal VAE model is very similar to the existing ones, such as Vedantam et al (2017), who proposed a broad framework with various types of regularizers in the multimodal VAE framework. Therefore, the methodological novelty of the work is somewhat limited.\\n\\nThe other key issue is that the experimental results are quite underwhelming. The paper doesn't compare with several recent ZSL and GZSL approaches, some of which have reported accuracies that look much better than the accuracies achieved by the proposed method. The paper does cite some of these papers (such as those based on synthesized examples) but doesn't provide any comparison. Given that the technical novelty is somewhat limited, the paper falls short significantly on the experimental analysis.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting and novel extension of joint-VAEs in zero-shot learning. Could be written with more clarity and justification regarding some design choices.\", \"review\": \"The paper proposes an approach to generalized zero-shot learning by learning a shared latent space between the images and associated class-level attributes. To learn such a shared latent space and mapping for the same which is generalized and robust -- the authors propose \\u2018modality invariant variational autoencoders\\u2019 -- which allows one to perform shared manifold learning by training VAEs with both images and attributes as inputs. Empirical results demonstrate improvements over existing approaches on the harmonic mean metric present in the generalized zero-shot learning benchmark. Other than the concerns mentioned below, I like the basic idea adopted in the paper to extend Vedantam et. al. (2018)\\u2019s joint-VAEs (supporting unimodal inference) to the framework of generalized zero-shot learning. The proposed approach clearly results in improvements over baselines and existing approaches.\", \"comments\": [\"A minor correction. The paper claims the bias towards seen classes at inference for the existing GZSL approaches is due to the inability of obtaining training data for the unseen classes. In my opinion, this should be rephrased as the inability to learn a generalized enough representation (joint or otherwise) that is aware of the shift in distribution from seen to unseen classes (images or attributes) as this information is not available apriori.\", \"Writing Clarity Issues. In general, there is significant repetitions along certain lines throughout the introduction and approach. While the paper overall does a good job of explaining the motivation as well as the approach, some of the sections (and sentences within) could be written better to express the point being made. Specifically, the first paragraph in the introduction seems to be structured more from a few-shot setting. The paper would benefit from talking about few-shot learning first and then extending to the extreme setting of zero-shot learning. Similarly, the second paragraph in the introduction could be written more succinctly to express the point being made. The sentences -- \\u201cMoreover, it is difficult\\u2026.widely available\\u201d -- are difficult to understand. Tables 4 and 5 should be positioned after the references section.\", \"A point repeatedly made in the paper suggests that learning unidirectional mappings from images to attributes (or otherwise) suffers from generalization to unseen classes. While I agree with this statement, most methods in GZSL hold out a subset of seen classes as validation (unseen) classes while learning such a mapping -- which I believe was also being done while learning the joint model in MIVAE (Can the authors confirm this? Is yes, how were these classes chosen?). As such, the authors should stress on the advantages learning a joint latent model over both modalities offers as opposed to unidirectional mappings while mentioning the above points.\", \"Learning the joint latent space for images and attributes has been referred to as learning a shared manifold in the paper -- with associated terms such as manifold representation being used as well. Sharing a latent space need not imply learning an entire manifold as the subspace captured by the latent space might as well be localized in the manifold in which it exists. 
Can the authors comment more on this connection with respect to the points around \\u201cshared manifold learning\\u201d?\", \"During inference, the authors operate in the latent space to find the most-relevant class by enumerating over all classes the KL-divergence between the unimodal encoder embeddings. Is there a particular reason the authors chose to operate in the latent space as opposed to operating in a modality space? Specifically, given an image the authors could have used the p(a|z) decoder to infer the attribute given the encoded z -- and subsequently finding the 1-nearest neighbor in that space. Any reason why this approach was not adopted?\", \"On page 4, regarding the term L_dist in the objective for MIVAE, the authors draw the connections made in the appendix of Vedantam et. al. (2018) regarding the minimization of KL-divergence between the bimodal and a unimodal variational posterior(s). While the connection being made is accurate, the subsequent solution modes identified in the following paragraph -- \\u201cWhen equation 2 becomes minimum\\u2026\\u201d -- do not seem accurate. At minimality, unimodal encoders should be equivalent to the bimodal encoder marginalized over the absent rv under the conditional distribution of the data. Could the authors comment on whether the version presented in the paper is intended or is merely a typographical mistake?\", \"Section 5 experiments suggest the learning rate used in practice was 10^3. Assuming a typo, this should be presumably 10^-3.\", \"Experimental Issues.\", \"The authors should explicitly mention if they are using the proposed split throughout all baselines and approaches for GZSL evaluations. It\\u2019s not explicitly mentioned in the text and is an important detail that should not be left out. Only the appendix mentions the number of seen/unseen classes.\", \"How did the authors select a validation split (held out seen classes) to train MIVAE? Did they directly borrow the training and validation splits present in the proposed split? Or did they create a split of their own? If latter, how was the split created? In general, I am curious about how the MIVAE checkpoint for inference was chosen.\", \"In section 5.2, the reasons in the 3rd paragraph elaborating \\\\lambda_map=1 vs 0 not being too different for AWA and aPY are not clear. Could the authors comment a bit more on them?\", \"The authors adressed the issues raised/comments made in the review. In light of my comments below to the author responses -- I am not inclined towards increasing my rating and will stick to my original rating for the paper.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good Generalized Zero-shot learning experimental results but limited contributions\", \"review\": \"The paper considers the problem of (Generalized) Zero-Shot Learning. Most zero-shot learning methods embed images and text/attribute representations into a common space. The main difference here seems to be that Variational AutoEncoder (VAEs) are used to learn the mappings that take different sources as input (images and attributes).\\nAs in JMVAE (Suzuki et al., 2016) (which was not proposed for zero-shot learning), decoders are then used to reconstruct objects from the latent space to the input sources.\\n\\nMy main concerns are about novelty. The contribution of the paper is limited or not clear at all, even when reading Section C in the appendix. The proposed approach is a straightforward extension of JMVAE (Suzuki et al., 2016) where a loss function is added (Eq. (3)) to minimize the KL divergence between the outputs of the encoders (which corresponds to optimizing the same problem as most zero-shot learning approaches).\\nThe theoretical aspect of the method is then limited since the proposed loss function actually corresponds to optimize the same problem as most zero-shot learning approaches but with VAEs.\\n\\nConcerning experiments, Generalized Zero shot learning (GZSL) experiments seem to significantly outperform other methods, whereas results on the standard zero-shot learning task perform as well as state-of-the-art methods. \\nDo the authors have an explanation of why the approach performs significantly better only on the GZSL task?\\n\\nIn conclusion, the contributions of the paper are mostly experimental. Most arguments in the model section are actually simply intuitions.\", \"after_the_rebuttal\": \"After reading the different reviews, the replies of the authors and the updated version, my opinion that the \\\"explanations\\\" are simply intuitions (which is related to AnonReviewer3's concern \\\"Regarding advantages of learning a joint model as opposed to unidirectional mappings\\\") has not been completely addressed by the authors. Fig. 4 does address this concern by illustrating their point experimentally. However, I agree with AnonReviewer3 that the justification remains unclear.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ryxjH3R5KQ | Single Shot Neural Architecture Search Via Direct Sparse Optimization | [
"Xinbang Zhang",
"Zehao Huang",
"Naiyan Wang"
] | Recently, Neural Architecture Search (NAS) has aroused great interest in both academia and industry; however, it remains challenging because of its huge and non-continuous search space. Instead of applying evolutionary algorithms or reinforcement learning as in previous works, this paper proposes a Direct Sparse Optimization NAS (DSO-NAS) method. In DSO-NAS, we provide a novel model-pruning view of the NAS problem. Specifically, we start from a completely connected block, and then introduce scaling factors to scale the information flow between operations. Next, we impose sparse regularization to prune useless connections in the architecture. Lastly, we derive an efficient and theoretically sound optimization method to solve it. Our method enjoys the advantages of both differentiability and efficiency, and can therefore be directly applied to large datasets like ImageNet. In particular, on the CIFAR-10 dataset, DSO-NAS achieves an average test error of 2.84%, while on the ImageNet dataset DSO-NAS achieves 25.4% test error under 600M FLOPs with 8 GPUs in 18 hours. | [
"Neural Architecture Search",
"Sparse Optimization"
] | https://openreview.net/pdf?id=ryxjH3R5KQ | https://openreview.net/forum?id=ryxjH3R5KQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1xgOcptl4",
"S1lJnc8SxN",
"H1xML64ikN",
"H1eM0qbj1V",
"HJeUxsq60Q",
"r1e9RpscAX",
"S1eaY6jqRQ",
"H1l9VFs9AQ",
"Syxfj_H90X",
"B1g1aj1F07",
"S1xBci1FAm",
"H1lLBiyK0Q",
"HyewJo1YAQ",
"HJecGl3Z0m",
"H1gmII1EaX",
"SkgtFcZbTm",
"Bkxnj9SlTX",
"ryehyj0ka7",
"SJx-Kv9JTQ",
"Syemi4AR3Q",
"rylgsNqchQ",
"SkgrQMiFhQ",
"rylJA7G82Q"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545357927709,
1545067175401,
1544404298439,
1544391370226,
1543510766471,
1543318993699,
1543318917239,
1543317809648,
1543293081590,
1543203766673,
1543203725231,
1543203645871,
1543203551504,
1542729745571,
1541826123261,
1541638784680,
1541589668335,
1541561060032,
1541543801482,
1541493914844,
1541215383661,
1541153308729,
1540920263034
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1573/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1573/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1573/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1573/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1573/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1573/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1573/Authors"
],
[
"(anonymous)"
],
[
"~Ludovic_Denoyer1"
],
[
"ICLR.cc/2019/Conference/Paper1573/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1573/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1573/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"...\", \"comment\": \"We regret the decision is so arbitrary and unconvincing, even AC cannot foresee the scientific value of the work, only concern the comparison of the results.\"}",
"{\"metareview\": \"This paper proposes Direct Sparse Optimization (DSO)-NAS to obtain neural architectures on specific problems at a reasonable computational cost. Regularization by sparsity is a neat idea, but similar idea has been discussed by many pruning papers. \\\"model pruning formulation for neural architecture search based on sparse optimization\\\" is claimed to be the main contribution, but it's debatable if such contribution is strong: worse accuracy, more computation, more #parameters than Mnas (less search time, but also worse search quality). The effect of each proposed technique is appropriately evaluated. However, the reviewers are concerned that the proposed method does not outperform the existing state-of-the-art methods in terms of classification accuracy. There's also some concerns about the search space of the proposed method. It is debatable about claim that \\\"the first NAS algorithm to perform direct search on ImageNet\\\" and \\\"the first method to perform direct search without block structure sharing\\\". Given the acceptance rate of ICLR should be <30%, I would say this paper is good but not outstanding.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"borderline\"}",
"{\"title\": \"Response\", \"comment\": \"We have added the MNasNet -92 (without SE) results to Table2 for comparison. Please note that, the difference on Imagenet is only 0.2% in terms of accuracy, which is quite minor compared to the error rate 25.2%. We indeed don't optimize latency intentionally in this work, however this should not be hard if we could directly test the latency on target hardware.\\n\\nAnother noteworthy point is that MNasNet does not report their search cost in their paper. As noted in a recent work (https://arxiv.org/abs/1812.00332), MNasNet needs about 10^4 GPU hours for search! In contrast, we only need 6 GPU hours. The difference here is more than 1000 times! If you concern more about practical use, we think the cost of searching the model is also an important factor for practical use in NAS. We would like to ask the area chair to consider this point in the decision.\"}",
"{\"title\": \"compare with MnasNet\", \"comment\": \"Should compare with MnasNet.\\nThis work is both slower and less accurate than the existing mnasNet. From a practical deployment perspective, reporting a low FLOP number is not the correct evaluation metric.\"}",
"{\"title\": \"Reply to the authors\", \"comment\": \"Thank you for your detailed response.\\n\\nI am satisfied with your answers to my questions, and I think this work deserves to be seen by the wider community as a good comparison point to other architecture search schemes, such as DARTS.\\n\\nAs such, I will bump my score up to 7, but I implore the authors to do another rewrite as the grammar still leaves much to be desired; you don't want to put people off reading your work because of something so trivial.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your clarification. We have updated the results on CPU on the table above. We accidentally report swap our result and NASNet. The results are more reasonable on CPU now.\"}",
"{\"title\": \"Updated results on CPU\", \"comment\": \"Thanks for your response. We double check the previous results, finding that we accidently swap the results of NasNet and DSO-Nas on mxnet cpu test in the Table we reported before. Thus, our method is only 25% slower than DARTS. To make a more complete comparison, we turn on MKL-DNN and update our cpu results. The test device is Intel Xeon CPU E5-2670 v3. The platform is MXNet.\\n ---------------------------------------------------------------------------------------------------------------------------\\nmodel mxnet(GPU) mxnet(CPU) mxnet(CPU+mkldnn) TensorRT(GPU) \\n---------------------------------------------------------------------------------------------------------------------------\\nMobileNet 1.94 280.66 26.51 1.01\\n---------------------------------------------------------------------------------------------------------------------------\\nMnasNet 2.85 73.75 7.14 1.71\\n---------------------------------------------------------------------------------------------------------------------------\\nDARTS 6.91 83.74 17.00 -\\n---------------------------------------------------------------------------------------------------------------------------\\nNasNet 9.32 198.47 30.58 -\\n---------------------------------------------------------------------------------------------------------------------------\\nDSO-Nas 7.00 111.38 22.34 4.25\\n---------------------------------------------------------------------------------------------------------------------------\\n\\nThe results here are more reasonable in the updated version. The trends are similar with and without MKLDNN in CPU. In particular, our method is slightly faster than MobileNet, 25% slower than DARTS, and about 30% faster than NasNet. Note that we haven\\u2019t specifically optimize latency as indicated in paper. This is definitely a promising direction to pursue in future work.\"}",
"{\"title\": \"Is this the right focus?\", \"comment\": \"I'm curious, why is it important to compare the resultant network to MobileNet, when it performs significantly better?\\n\\nThe additional branches that indeed cause some increase in latency will contribute towards these performance gains, so it's effectively a trade-off.\\n\\nI don't want to speak on behalf of the authors, but surely the focus of this paper is their optimization method rather than the latency of the end product.\"}",
"{\"title\": \"Only reporting FLOPs is misleading\", \"comment\": \"Thanks the authors for the updated results.\\n\\nDSO-NAS is slower than both DARTS and MnasNet, upto 3x slower, although they have similar FLOPs. Only reporting FLOPs is misleading. This validated my concern. \\n\\nThe CPU latency for MobileNet looks very slow. I'm concerned if you have turned on MKL-DNN (https://github.com/intel/mkl-dnn) when measuring the CPU speed? MobileNet can easily run below 100ms on a 3 year old Android phone.\\n\\nMobileNet+TF-Lite+Android is a well-established measurement setup for fair comparisons. MobileNet is also designed for mobile platform. The authors are encouraged to perform apple to apple comparison.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your valuable comment. You have indeed raised a very good point to NAS community.\\n\\nFirst, it should be noted that the latency of a model is highly dependent on the hardware platform and its corresponding implementation. For example, we test the latency of MobileNet and several state-of-the-art NAS methods, including MnasNet, DARTS, NasNet and our DSO-Nas in different hardware architectures and platforms. During testing, the batch-size is 1 and the input image size is 224 \\u00d7 224. For GPU testing, a single NVIDIA GeForce GTX 1080Ti is used. The convolution library is CUDNN 7.0. For CPU testing, the test device is Intel i5-6600K CPU. Each network is randomly initialized and evaluated for 500 times. The average runtime is reported.\\n\\n---------------------------------------------------------------------------------------------------------------------------\\nmodel mxnet(GPU) mxnet(CPU) TensorRT(GPU) \\n---------------------------------------------------------------------------------------------------------------------------\\nMobileNet 1.94 194.18 1.01\\n---------------------------------------------------------------------------------------------------------------------------\\nMnasNet 2.85 62.32 1.71\\n---------------------------------------------------------------------------------------------------------------------------\\nDARTS 6.91 64.86 -\\n---------------------------------------------------------------------------------------------------------------------------\\nNasNet 9.32 92.12 -\\n---------------------------------------------------------------------------------------------------------------------------\\nDSO-Nas 7.00 149.53 4.25\\n---------------------------------------------------------------------------------------------------------------------------\\n\\nThe results test on GPU with MXNet shows that DARTS, NasNet and DSO-Nas have higher latency than MobileNet and MnasNet. This is because the network structures of DARTS, NasNet and DSO-Nas have more fragments than MobileNet and MnasNet due to the unlimited search space. The searched structure of block in DARTS, NasNet and DSO-Nas has a lot of small operators which will reduce degree of parallelism on GPU as shown in ShuffleNetV2 [1]. As for the CPU test results, we found that the latency of MobileNet is much higher, since the memory access is no longer the bottleneck. Compared within NAS method, our method is similar to DARTS, while better than NASNet in terms of accuracy.\\n\\nWhen using TensorRT, all the methods benefit from the deliberated implementation in GPU.\\n\\nThus, several important factors have considerable affection on latency, including network architectures, hardware architectures and platforms. For our DSO-NAS, we don\\u2019t assume any target hardware platform, thus it is hard to directly optimize running latency. In one hand, we could optimize surrogate metric to latency such as MAC as illustrated in [1]; on the other hand, directly optimizing latency is on our schedule for future works as in the conclusion part. We may combine the spirit of MnasNet and our DSO-NAS in one unified framework, however it is out the scope of this single paper.\\n\\n[1] Ma, N., Zhang, X., Zheng, H.T. and Sun, J., 2018. ShuffleNet v2: Practical guidelines for efficient cnn architecture design. ECCV 2018.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your thoughtful review. We have given serious considerations of your concerns and revise our manuscript to accommodate your suggestions. Please see the details below.\", \"q1\": \"\\u201cThere are a few grammatical/spelling errors that need ironing out.\\u201d\", \"a1\": \"We have fixed the typos and grammatical errors in the revision.\", \"q2\": \"\\u201cPioneering work is not necessarily equivalent to \\\"using all the GPUs\\\"\\u201d\", \"a2\": \"This claim is indeed not accurate we have delete this claim in the revision.\", \"q3\": \"\\u201cThere are better words than \\\"decent\\\" to describe the performance of DARTS, as it's very similar to the results in this work!\\u201d\", \"a3\": \"We have changed the word to \\u201cimpressive\\u201d in the revision. However, DSO-NAS indeed outperforms DARTS on ImageNet dataset as illustrated in Table2.\", \"q4\": \"\\u201cFrom figure 2 it's not clear why all non-zero connections in (b) are then equally weighted in (c). Would keeping the non-zero weightings be at all helpful?\\u201d\", \"a4\": \"In the search stage, the scaling factors are only used to indicate which operators should be pruned. The value of scaling factors do not represent the importances of kept operators since they can be merged into the weights of convolution.\\nWe also add experiments in CIFAR-10 to compare the performance between keeping the non-zero weightings and equal weightings. The result shows that both of them yield similar performances.\\n---------------------------------------------------------------------------------------------------------------------------\\nArchitecture \\t params(M) \\ttest error\\n---------------------------------------------------------------------------------------------------------------------------\\nDSO-NAS-share+c/o 3.0 \\t2.84\\n---------------------------------------------------------------------------------------------------------------------------\\nDSO-NAS-share+c/o+k/w 3.0 2.88\\n---------------------------------------------------------------------------------------------------------------------------\\nDSO-NAS-full+c/o 3.0 \\t2.95\\n---------------------------------------------------------------------------------------------------------------------------\\nDSO-NAS-full+c/o+k/w 3.0 \\t2.96\\n---------------------------------------------------------------------------------------------------------------------------\\nwhere \\u201cc/o\\u201d represents that training the searched architectures with cutout and \\u201ck/w\\u201d represents keeping the non-zero weightings in the architectures.\", \"q5\": \"\\u201cWhy have you chosen the 4 operations at the bottom of page 4?\\u201d\", \"a5\": \"These four operations were used by ENAS and commonly included in the search space of most NAS papers.\", \"q6\": \"\\u201cHow do you specifically encode the number of surviving connections?\\u201d\", \"a6\": \"We don\\u2019t directly encode the number of surviving connections. Instead, the number of surviving connections is determined by the weight for L1 regularization, which can be incorporated with certain budget.\", \"q7\": \"\\u201cMeasuring in GPU days is only meaningful if you use the same GPU make for every experiment. Which did you use?\\u201d\", \"a7\": \"All of our experiments were conducted by NVIDIA GTX 1080Ti GPU, which was also used by ENAS and DARTS. We have added it in the paper.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for pointing out the pros and cons of our method. We address your concerns as follows:\\n\\nQ1. \\u201cThe search space of the proposed method, such as the number of operations in the convolution block, is limited.\\u201d\", \"a1\": \"First, the size of search space is not determined by the number of operations but the number of connections. The search space of our method is different from exiting NAS methods in that the number of input of certain operation is not limited.\\n\\nSecond, the search space without block share is even much larger than existing NAS methods. \\n\\nThird, we can trivially extend our DSO-NAS to accommodate more operations such as dilated conv like our ongoing experiments on PASCAL VOC semantic segmentation task, we extend our search space to accommodate 3x3 and 5x5 separable convolution with dilated = 2. The following table shows the performance of our model on the PASCAL VOC 2012 semantic segmentation task, where DSO-NAS-cls represents the architecture searched on ImageNet with block structure sharing and DSO-NAS-seg represents the architecture searched on PASCAL VOC segmentation task.\\n---------------------------------------------------------------------------------------------------------------------------\\nArchitecture mIOU Params(M) FLOPS(B) \\n---------------------------------------------------------------------------------------------------------------------------\\nDSO-NAS-cls 72.1 6.5 13.0\\n---------------------------------------------------------------------------------------------------------------------------\\nDSO-NAS-seg(more operations) 72.7 6.7 13.2\\n---------------------------------------------------------------------------------------------------------------------------\\nWe combine DSO-NAS with Deeplab v3 and search for the architecture of feature extractor with block sharing. All above models have been pre-trained on ImageNet classification task first. It\\u2019s notable that the architecture searched on semantic segmentation task with additional operations achieve better performance in our preliminary experiment, indicating that our DSO-NAS is capable to incorporate additional operations. We will present the full experiments of semantic segmentation in the future revision.\", \"q2\": \"\\u201cThe technical contribution of the proposed method is not high, because the architecture space of neural network is similar to the prior works.\\u201d\", \"a2\": \"Please refer to Q1. Moreover, we never claim the main contribution of our work lies in augmenting the search space. And in fact, most existing NAS papers share the same architecture search space, the main differences between them is the search strategy. We believe that judging the novelty of a NAS paper solely by its architecture space is unfair.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your valuable comments. It helps us to prepare the revision. We address all your concerns in the revision as below.\", \"q1\": \"Was the auxiliary tower used during the training of the shared weights W?\", \"a1\": \"Auxiliary tower is used only in the retraining stage.\", \"q2\": \"\\u201cDid the experiments on CIFAR-10 and ImageNet use the cosine learning rate schedule?\\u201d\", \"a2\": \"\", \"cifar\": \"In the pretrain stage and search stage, the learning rate is fixed to 0.1 with batch size 128; In the retraining stage, we use cosine learning rate schedule.\", \"imagenet\": \"In the pretrain stage and search stage, the learning rate is fixed to 0.1 with batch 224; In the retraining stage, we use linear decay learning rate schedule.\", \"q3\": \"\\u201cFigure 4 does not illustrate M=4 and N=4, e.g. which operation belongs to which layer?\\u201d\", \"a3\": \"In the revision, we replace the Figure 4 with a new version which has more details. As show in Figure 4, all the operators in level 4 are pruned.\", \"q4\": \"\\u201cThe sparse regularization of \\\\lambda induces great difficulties in optimization\\u201d\", \"a4\": \"The non-smooth regularization introduced by l1 regularization makes traditional stochastic SGD failed to yield sparse results. If we need exact zero, we have to use heuristic thresholding on the \\\\lambda learned, which has already been demonstrated in SSS [1] that is inferior. Besides, traditional APG method is not friendly for deep learning as extra forward-backward computation is required, also as shown by SSS.\", \"q5\": \"\\u201cMissed citation: MnasNet also incorporates the cost of architectures in their search process. On ImageNet, your performance is similar to theirs. I think this will be a good comparison.\\u201d\", \"a5\": \"We have added the result of MnasNet [2] in Table 2. Indeed, MnasNet achieves similar results with us with less FLOPs. However, it is also need to note that MnasNet evaluates more than 8K models, which introduces much higher search cost than our method. Moreover, the design space of MnasNet is significant different from other existing NAS methods including ours. It is interesting to explore the combination of MnasNet with ours in the future work.\", \"q6\": \"\\u201cThe paper has some grammatical errors.\\u201d\", \"a6\": \"We have fixed the typos and grammatical errors in the revision.\", \"q7\": \"About \\u201cfirst NAS algorithm to perform direct search on ImageNet\\u201d\", \"a7\": \"We check this claim again and find methods like MnasNet [2] and one-shot architecture search [3] also have the ability to perform direct search on ImageNet, we have delete this claim in the paper. However, to the best of our knowledge, our method is the first method to perform directly search without block structure sharing. We also report preliminary results that directly search on task beyond classification (semantic segmentation). Please refer to Q1 of Reviewer3 for details.\\n\\n[1] Data-Driven Sparse Structure Selection for Deep Neural Networks. ECCV 2018.\\n[2] MnasNet: Platform-Aware Neural Architecture Search for Mobile. https://arxiv.org/pdf/1807.11626.pdf\\n[3] Understanding and simplifying one-shot architecture search. ICML 2018.\"}",
"{\"title\": \"latency\", \"comment\": \"Latency is a practical measurement of performance. FLOPS is not.\\n\\nThe complicated learned architecture and many branches in Figure 4 make me concerned about the actual latency of the model and the practicality of deploying it, despite the low reported FLOPs.\\n\\nThe authors are encouraged to report the latency in Table 2, and compare it with MnasNet/MobileNet.\"}",
"{\"title\": \"Reply to \\\"Relevant Reference\\\"\", \"comment\": \"Thanks for your comment! This paper is indeed very related to our discussion about network structure learning. We will add reference and discussion to it in the revised version.\"}",
"{\"title\": \"Thanks for the constructive suggestions\", \"comment\": \"Sure, we will add these references, and discuss the relationships with them in the revision of rebuttal.\"}",
"{\"comment\": \"Thx for your detailed answer. But I still strongly recommend to add some references that use AutoML approaches for model compression. And in these papers, they indeed explicitly discuss the relationships between network pruning and Neural Architecture Search. I think this is a useful straightforward extension from network pruning by modifying the search space.\", \"1\": \"\\\"AMC: Automl for model compression and acceleration on mobile devices\\\", ECCV2018.\", \"2\": \"\\\"N2N LEARNING: NETWORK TO NETWORK COMPRESSION VIA POLICY GRADIENT REINFORCEMENT LEARNING\\\", ICLR2018.\", \"title\": \"Response to authors\"}",
"{\"title\": \"It is just the main contribution of the paper.\", \"comment\": \"In the front\\n\\\"As we all know, network pruning (i.e., filter pruning, channel pruning and so on) can be treated as neural architecture search.\\\" If you said \\\"as we all know\\\", it is better to have a reference here. As far as I know, this claim is just one of the ICLR submissions this year: https://openreview.net/forum?id=rJlnB3C5Ym¬eId=SyxGOzHu2Q I think you should comment on their paper with this claim.\\n\\nFirst, what you said is just one main contribution of our paper clearly listed in introduction section. The existing NAS methods all start from empty block and then add operators. Our method conveys another totally novel view of NAS that is you can start from full view then prune the useless ones! And more importantly, we prove it works at least comparable or even better than previous approaches! Nobody before us ever think about NAS in this way.\\n\\nSecond, the design space is fundamentally different from model pruning. Pruning from such dense connected block requires deliberate design of the optimization and training scheme. That is why we design three stages training method for DSO-NAS. Moreover, while SSS focuses on pruning the neurons, groups or blocks, our method focuses on pruning the connections between different layers, namely structural connections. The optimization method is indeed from the ECCV paper, but we treat it as an existing, well-developed component to use, and does not declare it as our main contribution.\\n\\nAbove all, we think the justification of \\\"the same approach, solving the same problem, is submitted to two different communities\\\" are arbitrary and unsound.\"}",
"{\"comment\": \"Hi all,\\n\\nWe can understand this paper from another perspective. \\nAs we all know, network pruning (i.e., filter pruning, channel pruning and so on) can be treated as neural architecture search.\\nGiven a pre-trained model which can be treated as a fully-connected DAG, network pruning aims to remove redundant filters and its connections to make DAG sparse.\\nThis paper belongs to fine-grained pruning and the used approach is nearly the same as \\\"Data-Driven Sparse Structure Selection for Deep Neural Networks\\\" which is published at ECCV2018.\\nThat means the same approach, solving the same problem, is submitted to two different communities.\", \"title\": \"Network Pruning is Neural Architecture Search.\"}",
"{\"comment\": \"Hi,\\n\\nHere is a relevant reference published at CVPR 2018, with a close idea: Learning Time/Memory-Efficient Deep Architectures with Budgeted Super Networks -- Tom Veniat, Ludovic Denoyer.\\n\\n In that work, edges are pruned by using a budgeted cost directly integrated in the objective function, and optimized through stochastic gradient descent. Could also be used as a comparison.\", \"title\": \"Relevant Reference\"}",
"{\"title\": \"Official Review\", \"review\": \"Summary:\\nThis paper proposes Direct Sparse Optimization (DSO)-NAS, which is a method to obtain neural architectures on specific problems, at a reasonable computational cost.\\n\\nThe main idea is to treat all architectures as a Directed Acyclic Graph (DAG), where each architecture is realized by a subgraph. All architectures in the search space thus share their weights, like ENAS (Pham et al 2018) and DARTS (Liu et al 2018a). The DAG\\u2019s edges can be pruned via a sparsity regularization term. The optimization objective of DSO-NAS is thus:\\n\\nAccuracy + L2-regularization(W) + L1-sparsity(\\\\lambda),\\n\\nwhere W is the shared weights and \\\\lambda specifies which edges in the DAG are used.\", \"there_are_3_phases_of_optimization\": \"1. All edges are activated and the shared weights W are trained using normal SGD. Note that this step does not involve \\\\lambda.\\n2. \\\\lambda is trained using Accelerated Proximal Gradient (APG, Huang and Wang 2018).\\n3. The best architecture is selected and retrained from scratch.\\n\\nThis procedure works for all architectures and objectives. However, DSO-NAS further proposes to incorporate the computation expense of architectures into step (2) above, leading to their found architectures having fewer parameters and a smaller FLOP counts.\\n\\nTheir experiments confirm all the hypotheses (DSO-NAS can find architectures, having small FLOP counts, having good performances on CIFAR-10 and ImageNet).\", \"strengths\": \"1. Regularization by sparsity is a neat idea.\\n\\n2. The authors claim to be the first NAS algorithm to perform direct search on ImageNet. Honestly, I cannot confirm this claim (not sure if I have seen all NAS papers out there), but if it is the case, then it is impressive.\\n\\n3. Incorporating architecture costs into the search objective is nice. However, this contribution seems to be orthogonal to the sparsity regularization, which, I suppose, is the main point of the paper.\", \"weaknesses\": \"1. Some experimental details are missing. I\\u2019m going to list them here:\\n- Was the auxiliary tower used during the training of the shared weights W?\\n\\n- Figure 4 does not illustrate M=4 and N=4, e.g. which operation belongs to which layer?\\n\\n- Did the experiments on CIFAR-10 and ImageNet use the cosine learning rate schedule [1]? If or if not, either way, you should specify it in a revised version of this paper, e.g. did you use the cosine schedule in the first 120 steps to train the shared parameters W, did you use it in the retraining from scratch?\\n\\n- In Section 3.3, it is written that \\u201cThe sparse regularization of \\\\lambda induces great difficulties in optimization\\u201d. This triggers my curiosity of which difficulty is it? It would be nice to see this point more elaborated, and to see ablation study experiments.\\n\\n2. Missed citation: MnasNet [2] also incorporates the cost of architectures in their search process. On ImageNet, your performance is similar to theirs. I think this will be a good comparison.\\n\\n3. The paper has some grammatical errors. I obviously missed many, but here are the one I found:\\n\\n- Section 3.3: \\u201cDifferent from pruning, which the search space is usually quite limited\\u201d. 
\\u201cwhich\\u201d should be \\u201cwhose\\u201d?\\n\\n- Section 4.4.1: \\u201cDSO-NAS can also search architecture [...]\\u201d -> \\u201cDSO-NAS can also search for architectures [...]\\u201d\\n\\nReferences.\\n[1] SGDR: Stochastic Gradient Descent with Warm Restarts. https://arxiv.org/pdf/1608.03983.pdf\\n\\n[2] MnasNet: Platform-Aware Neural Architecture Search for Mobile. https://arxiv.org/pdf/1807.11626.pdf\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
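As a worked rendering of the objective summarized in this review, the search problem can be written as follows, with W the shared weights, \lambda the per-edge scaling factors, and \mu, \gamma the two regularization weights (the symbol names are our assumption, not quoted from the paper):

```latex
\min_{W,\lambda}\ \frac{1}{N}\sum_{i=1}^{N}
  \mathcal{L}\big(y_i, f(x_i; W, \lambda)\big)
  + \mu\,\|W\|_2^2 + \gamma\,\|\lambda\|_1
```

Edges whose \lambda reaches exactly zero are pruned from the DAG, which is what makes phase 2 a structure-selection step.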
"{\"title\": \"If we focus on the balance between the classification accuracy and computational efficiency, the proposed method is promising\", \"review\": [\"Summary\", \"This paper proposes a neural architecture search method based on a direct sparse optimization, where the proposed method provides a novel model pruning view to the neural architecture search problem. Specifically, the proposed method introduces scaling factors to connections between operations, and impose sparse regularizations to prune useless connections in the network. The proposed method is evaluated on CIFAR-10 and ImageNet dataset.\", \"Pros\", \"The proposed method shows competitive or better performance than existing neural architecture search methods.\", \"The experiments are conducted thoroughly in the CIFAR-10 and ImageNet. The selection of the datasets is appropriate. Also, the selection of the methods to be compared is appropriate.\", \"The effect of each proposed technique is appropriately evaluated.\", \"Cons\", \"The search space of the proposed method, such as the number of operations in the convolution block, is limited.\", \"The proposed method does not outperform the existing state-of-the-art methods in terms of classification accuracy.\", \"The technical contribution of the proposed method is not high, because the architecture space of neural network is similar to the prior works.\", \"Overall, if we focus on the balance between the classification accuracy and computational efficiency, the proposed method is promising.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": [\"The authors present an architecture search method where connections are removed with sparse regularization. It produces good network blocks relatively quickly that perform well on CIFAR/ImageNet.\", \"There are a few grammatical/spelling errors that need ironing out.\", \"e.g. \\\"In specific\\\" --> \\\"Specifically\\\" in the abstract, \\\"computational budge\\\" -> \\\"budget\\\" (page 6) etc.\", \"A few (roughly chronological comments).\", \"Pioneering work is not necessarily equivalent to \\\"using all the GPUs\\\"\", \"There are better words than \\\"decent\\\" to describe the performance of DARTS, as it's very similar to the results in this work!\", \"From figure 2 it's not clear why all non-zero connections in (b) are then equally weighted in (c). Would keeping the non-zero weightings be at all helpful?\", \"Why have you chosen the 4 operations at the bottom of page 4? It appears to be a subset of those used in DARTS.\", \"How do you specifically encode the number of surviving connections? Is it entirely dependent on budget?\", \"You should add DARTS 1st order to table 1.\", \"Measuring in GPU days is only meaningful if you use the same GPU make for every experiment. Which did you use?\", \"The ablation study is good, and the results are impressive.\", \"I propose a marginal acceptance for this paper as it produces impressive results in what appears to be a short amount of search time. However, the implementation details are hazy, and some design choices (which operations, hyperparameters etc.) aren't well justified.\", \"------------\"], \"update\": \"Score changed based on author resposne\\n------------\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
ryxsS3A5Km | Continual Learning via Explicit Structure Learning | [
"Xilai Li",
"Yingbo Zhou",
"Tianfu Wu",
"Richard Socher",
"Caiming Xiong"
] | Despite recent advances in deep learning, neural networks suffer catastrophic forgetting when tasks are learned sequentially. We propose a conceptually simple and general framework for continual learning, where structure optimization is considered explicitly during learning. We implement this idea by separating the structure and parameter learning. During structure learning, the model optimizes for the best structure for the current task. The model learns when to reuse or modify structure from previous tasks, or create new ones when necessary. The model parameters are then estimated with the optimal structure. Empirically, we found that our approach leads to sensible structures when learning multiple tasks continuously. Additionally, catastrophic forgetting is also largely alleviated from explicit learning of structures. Our method also outperforms all other baselines on the permuted MNIST and split CIFAR datasets in the continual learning setting. | [
"continuous learning",
"catastrophic forgetting",
"architecture learning"
] | https://openreview.net/pdf?id=ryxsS3A5Km | https://openreview.net/forum?id=ryxsS3A5Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1gOc2flxE",
"Hygcmic3AX",
"r1e_1DrcAX",
"HygXZSH50m",
"ByxWsETU07",
"SJxiRZ7A6Q",
"Skx9o-X0TQ",
"ryxJKZm067",
"rygRL-QRTX",
"BkekeLd52X",
"B1e--EHYnQ",
"HkeheEjO3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544723599892,
1543445281890,
1543292640310,
1543292154800,
1543062681170,
1542496723324,
1542496673559,
1542496630878,
1542496598065,
1541207527244,
1541129209231,
1541088243749
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1572/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1572/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1572/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1572/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1572/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1572/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a promising approach for continual learning with no access to data from the previous tasks. For learning the current task, the authors propose to find an optimal structure of the neural network model first (select either to reuse, adapt previously learned layers or to train new layers) and then to learn its parameters.\\n\\nWhile acknowledging the originality of the method and the importance of the problem that it tries to address, all reviewers and AC agreed that they would like to see more intensive empirical evaluations and comparisons to state-of-the-art models for continual learning using more datasets and in-depth analysis of the results \\u2013 see details comments of all reviewers before and after rebuttal. \\nThe authors have tried to address some of these concerns during rebuttal, but an in-depth analysis of the results (evaluation in terms on accuracy, efficiency, memory demand) using different datasets still remains a critical issue.\", \"two_other_requests_to_further_strengthen_the_manuscript\": \"1) an ablation study on the three choices for structural learning (R3), and especially the importance of \\u2018adaptation\\u2019 (R3 and R1)\\nThe authors have tried to address this verbally in their responses but a proper ablation study would be desirable to strengthen the evaluation.\\n2) Readability and proofreading of the manuscript is still unsatisfying after revision.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}",
"{\"title\": \"Response after update\", \"comment\": \"Are they fair comparisons (evaluation only in terms of accuracy)? Different methods expand the network different amount. Hence, they should be compared on this metric too.\\n\\nAs mentioned in the paper, we make sure that all methods use similar amount of parameters. In particular, we make sure that all other methods at least match the number of parameters for our final model (after 10 tasks). In other words, all compared methods has same or more capacity as compared to our model, and we believe this comparison represents a fair comparison.\\n\\nWe agree that the expansion amount is also an important metric, and we will this metric in the final version.\"}",
"{\"title\": \"Response\", \"comment\": \"We have added variational continual learning result. The result was not added in the first version because running the VCL with more parameters uses a lot of memory, and thus can only run on CPU, which is a bit slow.\\n\\nWe tried deep generative replay, however we are not able to get reasonable results on permuted MNIST with 10 permutations. We tried various hyper-parameter settings, and performance was reasonable when the number of tasks is within five (average performance at around 96%). When number of tasks go beyond five, performance drops on previous tasks is quite significant, some tasks dropped to ~60%\\n\\nWe have added suggested references in related work.`\"}",
"{\"title\": \"2nd paper revision\", \"comment\": \"We have added one more experiment on split CIFAR-100. As reviewers suggested, MNIST dataset may not be a strong evaluation set, and therefore we added CIFAR-100 experiments since it represent a more realistic settings.\"}",
"{\"title\": \"Paper has improved, but I believe it needs more work\", \"comment\": \"I have read the authors' response and the updated manuscript, and I applaud their efforts to improve the paper.\\n\\nHowever, while the paper is improved, regrettably, I still feel it falls short of publication.\\n\\nIn particular, as I stated in my original review, the comparisons (even with the newly added baselines) are only applied to permuted MNIST, and the VDD performance baselines are quite simple. Permuted MNIST, while used in the past, is arguably no longer considered a strong evaluation, as the tasks are relatively independent [4].\\nThe references I suggested don't appear in the paper, except for [4], which is only mentioned in passing to introduce single-headed MNIST.\\n\\nFinally, the writing issues still seem to persist, such as sentence fragments and a few typos throughout.\\neg. in the newly added section 4.3: \\\"In particular, since our model tends to add new parameters at the first layer. For all methods...\\\"\\n\\nAs with my original review, I think this approach has potential, but I think the writing issues need to be addressed, references added, and comparisons performed on VDD or another strong dataset.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your feedback. We have added additional results comparing our method with other more recent and relevant methods. Please refer to the updated manuscript.\", \"regarding_the_questions\": \"- What's the intuition behind implementing the \\u201cadapt\\u201d operator as additive bias over the previous weights, rather than just copying the previous weights and fine tuning?\\n\\nThe role of adaptor is to strike a balance between number of parameters and performance. As mentioned in the end of section 3.1, we have different cost for select each option. Adaptor provides a way of using and modifying previous representation without incurring any forgetting by adding a relatively small amount of parameter overhead.\\n\\n- In the general case, if the architecture search is a continuous relaxation (softmax combination of operators), why is the \\\"adapt\\\" operator necessary? Wouldn't this already be a linear combination of new and old parameters? (In the example case of a 1\\u00d71 adaptor it makes sense, but this is a special restricted case which adapts with a smaller set of parameters)\\n\\nIn the adaptor case, when searching the combination of the old parameters with 1x1 conv forms an option. For example, in case we have two options, reuse and adaptor, the softweight is over the original parameter and the original parameter plus adaptor combined, so here the second part is treated as one option. To some extend what you are suggesting is true, however, this does not exactly corresponds to what is happening (as we explained above).\\n\\n- How is the structure regulariser backpropagated into the parameters of each layer? As I understand, it is composed of a constant discrete term z (number of parameters in each option), multiplied by architecture softmaxes alpha; the gradient with respect to each alpha is a constant, and so this has the effect of scaling the gradients of each operator.\\n\\nIn our implementation, the structure regularizer does not backprop to the parameters of each layer. Instead, the regularizer serves as a penalty for different choices, and thus has effect on the magnitude of alphas. Since alpha controls the weight for different options, this would influence the choice of different options during structure learning.\\n\\n- For the \\\"reuse - tuned\\\" case, isn\\u2019t the model effectively maintaining a new network for each task?\\n\\nNo. When the model is reused, the parameters are tuned, and the tuned parameter is used both for current tasks that it is finetuned on as well as all previous tasks.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your feedback. We have added additional results comparing our method with other more recent and relevant methods. Please refer to the updated manuscript.\", \"regarding_the_questions\": \"-\\tIn the equation (4), I wonder that, in the model, the hyperparameter(lambda_i or beta_i) of regularizer looks different according to the task, is it correct?\\n\\nYes they can be different for each tasks, this is more of a design choice. However, in our implementation and experiments, to make things easier, we just used the same hyperparameter for all tasks.\\n\\n-\\tAs shown in the Fig. 2) three choice-reuse, adaptation, and, new, is decided in the layer level. But with a semantic intuition, such that two different task can share specific features and simultaneously each of them requires the different neural space to learn discriminative ones at layer l, it seems better if the model could search structure much flexible. Is there some of experimental trial or plan about these kind of joint-adoption?\\n\\nThis is a very good point. Ideally we would like to be able to do more finer grained search, and that is definitely desired. In practice, we could only make the search space more restricted so that the search can be done in a more efficient manner. Of course one is not restricted to use only the options that we provided in our implementation. More finer grained and search is definitely possible, for example, learning to share at filter/neuron level instead of layer level. This is more of a balance between training efficiency and final performance. The current implementation highlights the importance of taking structure into account. However, one should not limit themselves with only the options that we demonstrated. As long as the search space is reasonably sized and operations are plausible, it could be incorporated in our framework. This leads to interesting future work directions.\\n\\n-\\tWhat is the main contribution of adaptation? I wonder that only reuse and new can work well including the role of adaptation, or not.\\n\\nThe role of adaptation is to strike a balance between number of parameters and performance. As mentioned in the end of section 3.1, we have different cost for select each option. Adaptor provides a way of using and modifying previous representation without incurring any forgetting by adding a relatively small amount of parameter overhead.\"}",
"{\"title\": \"Response\", \"comment\": \"Thanks for your feedback. We have added additional results comparing our method with other more recent and relevant methods. Please refer to the updated manuscript.\"}",
"{\"title\": \"Paper revision summary\", \"comment\": \"We thank all reviewers for providing constructive feedback that further improves the paper. We have highlighted the changes in text by using blue color. Minor editing changes are not marked. In this update revision we did following changes.\\n\\n1) We added more analysis on forgetting, which we think provides more insights into the method. In addition to use simple L-2 based regularization we finetuned our model without using any regularization, and we still obtained interesting result where the forgetting is minimal. This further suggests the importance of structure learning when learning continual tasks.\\n\\n2) As all reviewers suggested, we added more comparisons to more recent, existing methods. In particular, we compared ours with the more recent methods such as dynamically expandable network, incremental moment matching, progressive network, hard attention to task, etc on permuted MNIST dataset. We show that our method is performs competitive or better as compared to all these method.\\n\\n3) Provided more details in appendix\\n\\n4) Corrected editorial errors as pointed out by reviewers.\\n\\nDue to the time limit, we only completed experiments on permuted MNIST. Additionally we are also running experiment on split MNIST so that we have more comparisons, and we will update another version with those results before the deadline.\"}",
"{\"title\": \"Review of \\\"Continual Learning via Explicit Structure Learning\\\"\", \"review\": \"The paper considers the problem of sequential learning where data access for the previous tasks is completely prohibited. Authors propose a conceptually simple framework to learn structures (it is the selection of reusing, adapting previously learned layers or training new layers) as well as corresponding parameters in the sequential learning.\\n\\nThe paper is potentially interesting and providing possibly important framework for life-long learning. It is well written in most of cases and easy to follow (however I got the impression that the paper was rushed in the last minute; there are some trivial typos and very low resolution images etc.)\\n\\nHowever, I have a huge concern about the empirical evaluations. This area is really huge and has attracted lots of interest from many researchers, meaning that we lots of methods to compare. Nevertheless, authors only focus on providing insights on effects of different components of the propose model. This is also critical but comparing against state-of-the-arts is also very important. Especially, comparing against Lee et al 2017 seems essential. I can see the difference against that paper from the authors' argument in the related work, but that is the difference not comparison. It would be great to compare the performances as well as the number of increased memory sizes as the number of task increases.\\n\\nMoreover, the details should be provided; for instance provide the explicit form of R(s). \\n\\n---------------------------------------------\\n\\nThanks for the update. But are they fair comparisons (evaluation only in terms of accuracy)? Different methods expand the network different amount. Hence, they should be compared on this metric too.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review\", \"review\": [\"This paper proposes a new approach to mitigate the catastrophic forgetting for continual learning. The model is composed to the neural architecture search and parameter learning based on the intuition that largely different tasks should allow to use different network structure to train them. In structure learning, they introduce three candidate to decide network architecture, reuse, adaptation and new. In the experiments, they show that their model outperforms SGD and EWC.\", \"Basically, the intuition of structure learning and the validation of that is straight forward and easy to follow. However, I\\u2019m not sure that the proposed model can outperform the recent continual learning methods, such as IMM(Lee et al, 2017), DEN or RCL(Ju Xu et al, 2018). There is only a relatively weak(and old) comparison with l2, and EWC.\", \"In the equation (4), I wonder that, in the model, the hyperparameter(lambda_i or beta_i) of regularizer looks different according to the task, is it correct?\", \"As shown in the Fig. 2) three choice-reuse, adaptation, and, new, is decided in the layer level. But with a semantic intuition, such that two different task can share specific features and simultaneously each of them requires the different neural space to learn discriminative ones at layer l, it seems better if the model could search structure much flexible. Is there some of experimental trial or plan about these kind of joint-adoption?\", \"What is the main contribution of adaptation? I wonder that only reuse and new can work well including the role of adaptation, or not.\", \"Is there any experiments to compare the recent continual learning methods(as I mentioned), in terms of AUC(or accuracy) and the network capacity?\", \"Minor remarks,\"], \"page_3\": \"\\u201cis been\\u201d -> is\\n\\t\\u201cunlikely\\u201d-> unlike\", \"page_4\": \"\\u201csharealbe\\u201d -> shareable\", \"page_5\": \"\\u201c, After\\u201d -> , after\\n\\t\\u201cpermuated\\u201d -> permuted\", \"page_6\": \"\\u201cFig. 5\\u201d -> Fig. 4\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea, but needs a stronger experimental justification\", \"review\": \"The proposed approach aims to mitigate catastrophic forgetting in continual learning (CL) problems by structure learning: determining whether to reuse or adapt existing parameters, or initialise new ones, when faced with a new task. This is framed as an architecture search problem, applying ideas from Differentiable Architecture Search (DARTS). The approach is verified on the Permuted MNIST dataset and evaluated on the Visual Decathlon, showing an improvement.\\n\\nI think this is an interesting idea with potential, and is worth exploring, and the paper is well-structured and easy to follow.\\n\\nUnfortunately, I feel the paper fails to consider recent work on CL, both in terms of discussion and benchmarking. The only previous work that is compared is EWC, on permuted MNIST, and the Visual Decathlon performance is only compared to simple baselines (such as adding an adapter or fine tuning) which makes it difficult to gauge the contribution.\\nThere are recent works, some with better results on more difficult problems, such as Variational Continual Learning [1], Progress and Compress [2], or (Variational) Generative Experience Replay [3][4].\\nGiven the approach is based on dynamically adding parameters or modules, Progressive Networks and Dynamically Expandable Networks (both cited) are especially relevant and should be compared (I believe the former may be related to the \\u201cadapter\\u201d baseline, but this should be made explicit).\\n\\nI have some questions / discussion points:\\n- What's the intuition behind implementing the \\u201cadapt\\u201d operator as additive bias over the previous weights, rather than just copying the previous weights and fine tuning?\\n- In the general case, if the architecture search is a continuous relaxation (softmax combination of operators), why is the \\\"adapt\\\" operator necessary? Wouldn't this already be a linear combination of new and old parameters? (In the example case of a 1\\u00d71 adaptor it makes sense, but this is a special restricted case which adapts with a smaller set of parameters)\\n- How is the structure regulariser backpropagated into the parameters of each layer? As I understand, it is composed of a constant discrete term z (number of parameters in each option), multiplied by architecture softmaxes alpha; the gradient with respect to each alpha is a constant, and so this has the effect of scaling the gradients of each operator.\\n- For the \\\"reuse - tuned\\\" case, isn\\u2019t the model effectively maintaining a new network for each task?\", \"i_also_have_a_number_of_other_comments\": \"- Reference to figure in page 6 should be figure 4, not 5.\\n- I think the readability of the paper would benefit from another few proofreads; there are a number of grammatical issues throughout, and several sentence fragments, eg. in the top para of page 2: \\u201c..., it has the potential to encourage information sharing. Since now the irrelevant part can be handled\\u2026\\u201d.\\n\\nI would encourage the authors to strengthen the experimental comparison by incorporating stronger, external baselines, and improving some of the minor writing issues.\\n\\n[1] Nguyen, Cuong V., et al. \\\"Variational Continual Learning.\\\" ICLR, 2018.\\n[2] Schwarz, Jonathan, et al. \\\"Progress & Compress: A scalable framework for continual learning.\\\" ICML, 2018.\\n[3] Shin, Hanul, et al. 
\\\"Continual learning with deep generative replay.\\\" NIPS, 2017.\\n[4] Farquhar, Sebastian, and Yarin Gal. \\\"Towards Robust Evaluations of Continual Learning.\\\" arXiv, 2018.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ByzcS3AcYX | Neural TTS Stylization with Adversarial and Collaborative Games | [
"Shuang Ma",
"Daniel Mcduff",
"Yale Song"
] | The modeling of style when synthesizing natural human speech from text has been the focus of significant attention. Some state-of-the-art approaches train an encoder-decoder network on paired text and audio samples (x_txt, x_aud) by encouraging its output to reconstruct x_aud. The synthesized audio waveform is expected to contain the verbal content of x_txt and the auditory style of x_aud. Unfortunately, modeling style in TTS is somewhat under-determined and training models with a reconstruction loss alone is insufficient to disentangle content and style from other factors of variation. In this work, we introduce an end-to-end TTS model that offers enhanced content-style disentanglement ability and controllability. We achieve this by combining a pairwise training procedure, an adversarial game, and a collaborative game into one training scheme. The adversarial game concentrates the true data distribution, and the collaborative game minimizes the distance between real samples and generated samples in both the original space and the latent space. As a result, the proposed model delivers a highly controllable generator, and a disentangled representation. Benefiting from the separate modeling of style and content, our model can generate human-fidelity speech that satisfies the desired style conditions. Our model achieves state-of-the-art results across multiple tasks, including style transfer (content and style swapping), emotion modeling, and identity transfer (fitting a new speaker's voice). | [
"Text-To-Speech synthesis",
"GANs"
] | https://openreview.net/pdf?id=ByzcS3AcYX | https://openreview.net/forum?id=ByzcS3AcYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ByeTrXX-IE",
"Skl9IrsZxE",
"SJg099o2RX",
"BJl11ws2AX",
"B1lxfzs30Q",
"r1gu2kHsAQ",
"BygbTKVoR7",
"ryl7IwesAX",
"rkeebN1oR7",
"ByejWaRc0Q",
"H1gefjatCm",
"HJgZD2nYRQ",
"Hklxr33Y0m",
"rJe7lY3KCQ",
"BJebIsvXpm",
"BJxRuqPQam",
"HyeiSBJQT7",
"r1esueVMp7",
"Byg1SllT37",
"SJeKU5Ic27"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1551082308886,
1544824146456,
1543449238379,
1543448278774,
1543447047644,
1543356335883,
1543354808742,
1543337802602,
1543332856393,
1543331074905,
1543260935650,
1543257176995,
1543257144071,
1543256299108,
1541794632551,
1541794422361,
1541760322672,
1541714034621,
1541369910561,
1541200464748
],
"note_signatures": [
[
"~tao_xia1"
],
[
"ICLR.cc/2019/Conference/Paper1570/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1570/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1570/AnonReviewer1"
]
],
"structured_content_str": [
"{\"comment\": \"Good paper ,make a mark.\", \"title\": \"mark\"}",
"{\"metareview\": \"The paper proposes using GANs for disentangling style information from speech content, and thereby improve style transfer in TTS. The review and responses for this paper have been especially thorough! The authors significantly improved the paper during the review process, as pointed out by the reviewers. Inclusion of additional baselines, evaluations and ablation analysis helped improve the overall quality of the paper and helped alleviate concerns raised by the reviewers. Therefore, it is recommended that the paper be accepted for publication.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good contribution with nice results and analysis\"}",
"{\"title\": \"thanks\", \"comment\": \"Thanks for willing to include the ablation studies & new discussion. I trust the authors and will raise my old rating by one level.\"}",
"{\"title\": \"Thank you for the constructive suggestions!\", \"comment\": \"We sincerely appreciate your constructive suggestions!\", \"re\": \"Fader Networks\\nIt is interesting to hear that, unlike our case, adding an adversarial loss for style transfer helped achieve better results (we'd be curious to see the results; do you have a paper we can check? was your experience based on images only or also based on audio signals?) Regardless, we feel that this will be an easy-to-add baseline approach; we will include this in the final version.\"}",
"{\"title\": \"improved indeed\", \"comment\": \"I agree with the other reviewers that the paper is significantly improved compared to the last version. I appreciate the author's efforts!\\n\\nTwo of my main concerns remain, however:\\n* Some of the comparisons depend on the setting of the key hyper-params in the prosody-tacotron and GST models. E.g. the authors mention that the proposed model synthesizes speech with a closer speaking style to reference than GST does. Did you observe the same trend when you use a bigger reference embedding size or more heads in the multiheaded attention in GST? (It doesn't need to consistently beat GST, etc. but a depiction of the performance trend, or trade-off, would be helpful to better understand the work)\\n\\n* I am surprised by the author's results with Fader network style adversarial loss for style transfer, which also contradicts to my own experience. I don't think that idea is specific to image. As an obvious baseline, at least, I think the authors should put some relevant discussions around it in the paper.\\n\\nAgain, thanks for the authors' thorough comments and the willingness to change the paper contents!\"}",
"{\"title\": \"significant progress\", \"comment\": \"I want to thank you, the authors for the significant amount of work that was made to the paper during this comment period.\\n\\nI've revised my assessment upwards on the basis of this effort.\"}",
"{\"title\": \"naturalness, swapping and style transfer\", \"comment\": \"Naturalness:\\nThanks for your constructive comments. It is correct that our evaluation is focused on the disentanglement of style and content, rather than directly assessing the naturalness of the TTS results, because disentangling content/style is the major focus of our work. In hindsight, however, we do agree with your point that measuring the naturalness could have provided additional insights into how our model performs compared to the baseline TTS systems. We promise to add a MOS evaluation results in the final version of our paper.\", \"swapping\": \"We also agree with the reviewer on this point. We will add human classification results on the style swapping experiment.\", \"style_transfer\": \"We appreciate your clarification on evaluation metrics for our subjective study. Yes, we do agree with your comments, and will modify our metric based on a non-parametric test.\"}",
"{\"title\": \"'bijective mapping' and 'style loss'\", \"comment\": \"Bijective mapping:\\nThanks for the constructive suggestion. We do agree that the bijective constraint might be too strict; injective mapping could be more appropriate to illustrate our setting. We have incorporated your two suggestions into the new revision. (Note: Since this discussion was at the very last minute, which was past the rebuttal period, we could not upload the new version of the paper. But the change is already made and will be reflected in the final version.)\", \"style_loss\": \"Thanks for the clarification. Yes, we do agree that prosody, in its entirety, cannot be captured using local statistics in the time-frequency domain. As we clarified above, our style loss is limited to capturing only certain elements of prosody. To reflect this, we have already removed our statement regarding style loss and prosody in the revision.\"}",
"{\"title\": \"evaluations response\", \"comment\": \"MOS:\\nThe comment was about assessment of naturalness of the resultant speech, not prescribing an MOS test specifically. In general prosodic modifications lead to decreased quality. Assessing how large this degradation is valuable in assessing this work. For what it's worth, both the GST and prosody-tacotron papers would significantly benefit from this kind of evaluation and I find it surprising that they were omitted. \\n\\n\\\"So other than some evaluation metrics used in regular TTS, we also performed a set of experiments that do not typically appear in TTS work.\\\"\\nThe main evaluation metrics for \\\"regular TTS\\\" are subjective tests that look at naturalness and to a lesser extent (given the state of the art) intelligibility. The typical tests are MOS, MUSHRA, or ABX (AXB, AXY). The intelligibility dimension has been assessed by the WER evaluation. Naturalness (or quality) has not. It might be reasonable to claim that for this work TTS quality (measured in terms of naturalness) is not important, and is therefore not evaluated.\\n\\n\\\"The most important claim in our paper is the ability to disentangle content and style. We believe this [swapping] experiment actually is most important evaluation in validating our claim.\\\" \\nI agree that this is the most important claim of the paper and the most important evaluation. This is why it is so surprising that there is no evaluation here. Rather 16 examples are offered. It would be reasonable to ask a human rater to assess the emotional content of the utterance as either neutral, happy, sad, angry. This seems to be the most direct assessment of the claim of the paper. Does the newly synthesized utterance contain the desired emotional information? The style transfer evaluation is much more effective at demonstrating this. The examples without evaluation are unconvincing.\", \"style_transfer\": \"\\\"To validate that the test follows a normal distribution would require a large amount of subjective studies. We followed the precedent in the most recent works (GST, and prosody-Tacotron).\\\"\\n1) you do not need to use a t-test. There are non-parametric tests available (specifically the Mann-Whitney U-test) that do not assume that the observations follow a normal distribution. Most (all?) statistical packages support this test. 2) the test used in GST is not described. In prosody-tacotron a 95% confidence interval is described, but not a t-test. I hope that the confidence interval is generated non-parametrically in that work. Using a mean and standard deviation derived from observations that are not normally distribtued would have generated a biased estimate of the confidence intervals. 3) even if these papers did use an unsupported statistical test, the t-test is still not valid without confirmation that the analyzed ordinal subject responses follow a normal distribution (in most cases they do not).\"}",
"{\"title\": \"clarifications reponse\", \"comment\": \"Bijective mapping:\\nEven conditioned on style tokens the mapping is not bijective. A given text, with the same condition (style), can be produced as \\\"angry\\\" with different acoustic realizations. The content, speaker and condition can all be transmitted and there are still valid variations of the realization. This could be theoretically true, if the condition is considered to be a specific prosodic realization P of speaker A speaking utterance X, and the target of generating speaker B speaking utterance X with realization P. However, 1) given the state of the art and understanding of prosody, it is very underdetermined, and not exactly useful. it is underdetermined because we do not have a way of disentagling prosodic realization from speaker identity. While we have some approaches to map from one speaker's pitch range to another, transformation of normal and affected speaking rhythm and voice qualities from one speaker to another are not well understood or all that thoroughly well studied. And 2) it's also not clear that this is the desired mapping. The goal is to retain the conditioning variable -- here a coarse description of affect. The realization of speaker A speaking utterance X with \\\"angry\\\" prosody in and of itself is not unique. Neither are the realizations of speaker B speaking utterance X with \\\"angry\\\" prosody. Even if there is a theoretical bijective mapping based on a highly specified condition, the practical mapping that is being learned here is many-to-many. The broader point is that the \\\"Ideal\\\" F is not even a function. The target is a set, not a point, f(x_txt, x_aud) = {t \\\\in trg_{txt, aud}} where trg_{txt, aud} is the set of all valid realizations of the text, txt, and conditioning information, aud, by the target speaker. \\n\\nStepping back, the concern with maximum likelihood that is being raised is that the learned F may not be injective, i.e. that the learned function may map multiple elements of the domain to the same realization and completely ignore x_{aud}. This is a fair concern. One issue with the term bijective is that determines that F should also be surjective -- that every element in \\\\hat{x} should be mappable from some x_txt and x_aud. This aspect isn't addressed by the work.\\n\\nMaking this discussion more constructive -- 1) consider removing the term \\\"Ideally\\\" from section 2. The description here is much more practical than it is ideal. 2) consider replacing bijective with injective. I believe it's more consistent with the problem that is being solved.\", \"style_loss\": \"My initial description of prosody was perhaps too pointed at addressing the (since deleted) statement in the previous draft that claimed that prosody was only the low-level characteristics. Prosody does include local time-frequency elements -- particularly as they capture voice quality. The previous point was that prosody (in its entirety) cannot be captured by these representation. Prosody includes (but is not limited to) pitch (intonation), intensity, speaking rate/rhythm, and the use of pauses (usually, but not only to impact phrasing) as well as voice quality. The use of pitch and intensity are primarily relevant in a suprasegmental context. 
For example, in English(es), an absolute pitch observation carries very little information, but a rising or falling pitch contour (or contextualized within the speakers pitch range or register) can have significant information on the semantics pragmatics and paralinguistics of the utterance. I did not mean to suggest that there isn't important information in the time-spectrum. However, if you consider the literature on prosody as a whole you'll find that the relative value of local spectral content is much less relevant than suprasegmental content. (This includes the references mentioned in the comment above. There are corresponding papers for each of the tasks (sarcasm recognition, emotion recognition, prosody in speaker recognition) that show that suprasegmental representations of prosody are more valuable that short time analyses.)\"}",
"{\"title\": \"Rebuttal makes things much clearer.\", \"comment\": \"Thank you for the clarifications. I feel that the material is now much more convincing after seeing the architectural presentation. It is illuminating to note that one can break up content and style to capture their essence as can be seen in figures 2, 3, 4 and 5 in the appendix. Fig 2 uses multiheaded attention to compute similarity between ref. embedding and randomly initialized tokens - this seems to be a new addition to the previous GST works (Skerry-Ryan et al 2018 and Wang et al 2018).\\n\\nOverall, This work exhibits a very high level of application - attention based seq2seq modeling with Tacotron setup, and manipulating content and style with instructive use of techniques from the formulation to the architectures used . \\n\\nI rule this as a clear accept.\"}",
"{\"title\": \"evaluations\", \"comment\": \"MOS\\nWe do not think MOS is a must have metric in our paper. Other relevant papers for stylization in TTS, e.g. prosody-Tacotron also do not include a MOS evaluation. \\nWe have performed a number of evaluations quantitatively and qualitatively and believe that these extensive evaluations are sufficient to validate our work. The most important thing is, in our paper, how to disentangle style and content such that the encoder learns to produce effective style latent codes is the most important claim. So other than some evaluation metrics used in regular TTS, we also performed a set of experiments that do not typically appear in TTS work. \\n\\nTable captions \\nThanks for your suggestion, we will revise the captions.\\n\\nSwapping \\nThe most important claim in our paper is the ability to disentangle content and style. We believe this experiment actually is most important evaluation in validating our claim. Similar experiment are typically performed in computer vision papers, e.g. \\u2018Disentangling factors of variation in deep representations using adversarial training, NIPS 2016\\u2019 (Fig.3). \\n\\nASR model\\nThe ASR model is just a tool to evaluate different methods, here we just compare the relative performance. But your suggestions are good, we will add the ground truth WER in our paper.\\n\\nStyle transfer \\nTo validate that the test follows a normal distribution would require a large amount of subjective studies. We followed the precedent in the most recent works (GST, and prosody-Tacotron).\\n\\nPermutations\\nYes, you are right. Thanks for your carefully reading the paper, we will change this in our paper.\\n\\nResults \\nThe results turn out that they are almost equivalent. \\n\\nTypos\\nThanks, we will modify the typos in our paper.\"}",
"{\"title\": \"clarifications\", \"comment\": \"Title\\nWe appreciate this point, and removed the TTS-GAN moniker as it is quite generic.\\n\\nBijective mapping\\nWe agree that regular speech synthesis is not a bijective mapping problem, because it may result in multiple meaningful results. We also mentioned this in our paper (Sec. 1 ln 6-7). However, we want to clarify our claim, by saying \\u2018bijective\\u2019, we refer to style modeling in TTS (a conditional generation), i.e. given textual string and a reference audio sample, the synthesized audio should one-to-one correspond to the given conditions (content from text and style from reference audio). If it is not a bijective mapping, e.g. one-to-many mapping, then one textual string could map to different styles, which neglects our style condition (reference audio). We have also elaborated on our claim, which can be seen in Sec. 2 (last paragraph).\\n\\nStyle loss\\nWith all due respect, we disagree with the reviewer that prosody cannot be captured in local variations in the time-frequency domain. In fact, certain prosodic characteristics, such as emotion, are captured by local statistics in the time-frequency domain. For example, Cheang and Pell (2008) have shown that a temporary reduction in the average fundamental frequency significantly correlates with sarcasm expression. \\nMore broadly, numerous past studies on prosody have been based on spectral characteristics, e.g. Wang (2015), Soleymani, et al. (2018), Barry (2018).\\nThat being said, we do agree with the reviewer that prosodic variation is often suprasegmental. Therefore, our approach to capturing speaking style can only model those prosodic variations that are characterized by local statistics. We have made this point clear in our paper in Section 3.2.\\nCheang, Henry S., and Marc D. Pell. \\\"The sound of sarcasm.\\\" Speech communication 50.5 (2008): 366-381.\\nKun-Ching Wang. \\u201cTime-Frequency Feature Representation Using Multi-Resolution Texture Analysis and Acoustic Activity Detector for Real-Life Speech Emotion Recognition\\u201d (2015).\\nSobhan Soleymani, Ali Dabouei, et al. \\u201cProsodic-Enhanced Siamese Convolutional Neural Networks for Cross-Device Text-Independent Speaker Verification\\u201d (2018).\\nShaun Barry, Youngmoo Kim. \\u201cStyle Transfer for Musical Audio Using Multiple Time-Frequency Representations\\u201d. (2018)\\n\\n\\nReconstruction loss\\nFirst, we have changed \\u2018I\\u2019 to \\u2018z_c\\u2019 to represent the latent code. \\nIf z_c is categorical, then C could be a N-way classifier. So you are right, z_c is the emotion label for EMT-4, and identities for VCTK.\\n\\u2018Latent\\u2019 is commonly used in encoder-decoder networks and generative work, we do not feel it is a confusing word.\\nThe training details are present in the last paragraph of this section. In Eq9, the first term is minimized over C and the second term is minimized over both C and G. The hyperparameters were empirically determined.\\nDifferent datasets need different numbers of training steps (for EMT-4 we trained for 200k steps, while for VCTK, we trained our model for 280k steps).\\nThe detailed description of the weights and network architecture of R can be found in our paper (last paragraph in \\u2018Style Loss\\u2019 section and line 4-5 in page 5).\\n\\nPresentation of the tables\\nDue the page limitation, we prefer to present our paper in a more compact way. However, we could move elements to the appendices if necessary.\"}",
"{\"title\": \"Thanks for your thoughtful reviews and valuable comments\", \"comment\": \"Sorry, it misleads readers by saying in this way, we will modify our description in the paper.\\n\\n\\n1. To clarify, we were trying to communicate that when training on purely paired data the network can easily to memorize all the information from the paired audio sample, i.e. both style and content components.\\nFor example, given (txt1, aud1), the network memorizes that as long as given a txt1, the result should be aud1. In this case, the style embedding tends to be neglected by the decoder, and the style encoder cannot be optimized easily. During test stage, when given (txt1, aud2), the network still produces an audio sample very similar to aud1, and the \\u2018style\\u2019 is not learned well. Our experiments on style transfer validate this claim. When comparing with GST, our synthesized audio is closer to the reference style.\\n\\n2. Through empirical experiments we found that randomly sampling is enough for training.\\n\\n3. Thanks for your suggestion. When we started this work, that idea was our first basic attempt. But it turns out, by simply adding an adversarial loss on the latent space did not produce good results. The most severe problem is it is not robust to various length reference audio samples. When the reference audio is longer than the input, the synthesized samples tend to have long duplicate tails, or sometime noises. It severely impairs the audio quality. \\nWe suspect that, to satisfy the correct classification, the style embedding is squeezed into the same scale, which is not robust to varied length sequential signals. The Fader Network was used for processing images which are a fixed dimension, this method does not seem to work well for audio. Therefore, in our current model, we promote the disentanglement by paire-wise training, which means we do not need to add an adversarial loss directly on the latent space, but on the generated samples. Our results show that this leads to more robust outcomes for sequential signals for different lengths. We will clarify this in the paper.\\n\\n4. Thanks. It is a good suggestion to replace Tacotron2 with Prosody-Tacrotron. We will modify this in our paper.\\n\\n5. The hyperparameters for our model can be seen in our implementation details and Appendix.\\nThe parameters used for other methods are the same with their original work.\\n\\n6. As in this experiment, we want to evaluate how well our model can learn the latent space. In other words, are the style embeddings produced by our model effectively representing any desired style. By showing the t-SNE visualization, we can see that, the latent space learned by our model can be well separated into clusters according to the testing data distribution. The same experiment was also done in GST (Wang et al).\\n\\n7. We appreciate that TTS-GAN is quite general. We are happy to change the name of the paper.\"}",
"{\"title\": \"Thank you for the comments.\", \"comment\": \"Thank you for the comments. We have fixed the typos in our revision.\"}",
"{\"title\": \"Thanks for your thoughtful reviews and valuable comments.\", \"comment\": \"1. Typo in p2 l2.\\nThanks, we fixed it.\\n\\n2. Clarification on formulation:\\nThank you for pointing out the discrepency. We provide detailed explanation below. In short, there is a subtle yet important distinction: We use '+' samples to regularize within-domain mapping (between (c, x_aud^+) and \\\\tilde{x}^+), while Taigman et al., (2016) use '-' to promote cross-domain mapping (between (c, x_aud^-) and \\\\tilde{x}^-)).\\n\\nTaigman's work use a pretrained function f(.) to extract latent embeddings from both the source and the target domains, i.e., z_s = f(s), z_t = f(t). They then use a decoder to map these to the target distribution, producing s2t and t2t. The s2t drives cross-domain mapping, while the t2t regularizes within-domain mapping. They use a single function f(.) to compute the embeddings from both the source (real human face) and the target (emoji human face) because the two domains share certain structures and properties, e.g., a face has two eyes with eyebrows on top. This makes t2t -- within-domain mapping -- relatively easy compared to ours (see below on why); so they include the target term in the loss (Eqn 3 in [Taigman et al., 2016]) to further promote cross-domain mapping.\\n\\nIn our work, making the analogy, the source domain is '(content, style+)' and the target is '(content, style-)'. Both domains consist of two input modalities (text and sound) with very different characteristics. So we use two functions to represent each domain: Enc_c and Enc_s. Unfortunately, this makes it difficult to even ensure that within-domain mapping is successful. So, to strengthen within-domain mapping we modify the last term of the tenary discriminator to have x_aud^+ instead of the target x_aud^-. \\n\\n3. Clarification on reconstruction loss:\\nYes, both the content c = f(x_txt) and the style s = g(x_aud^+) embeddings are deterministic. The only stochasticity comes from the data distribution. We revised the notation in the paper; please take a look. \\n\\n4. Clarification on latent reconstruction loss: \\nWe have revised our paper with network architecture details, including a block diagram of the Inference Network 'C' that computes the latent representation 'l'; see Figure 3. The inference network is simply the style encoder (Enc_s) with a new classifier on top (one FC layer followed by softmax); all the weights are shared between C and Enc_s except for the new classifier layer.\\n\\nWe agree that 'z' is a more commonly used notation to represent latent codes. We have changed the notation in the paper; thanks for the suggestion! \\n\\n5. Clarification on network architecture\\nWe have revised our paper with block diagrams of our network architecture as well as parameter settings used in our implementation (Figure 3 to 5). We have also included an attention plot (Figure 6), showing the robustness of our approach to the length of the reference audio.\\n\\n6. Clarification on stability/mode collapse:\\nIn TTS stylization, when mode collapse happens the synthesized voice samples will exhibit the same acoustic style although different reference audio samples are provided. While it is difficult to entirely prevent the mode collapse from ever happening (as is common in GAN training), we have a number of measurements (i.e., different loss terms in our adversarial & collaborative game) to alleviate the issue and to improve stability during training. 
Our qualitative results show more diverse synthesized samples than Tacotron-GST when different reference audio samples are given, suggesting our work clearly improves upon the state-of-the-art. Our learning curve (https://researchdemopage.wixsite.com/tts-gan/image) also suggests that training with our loss formulation is relatively stable, i.e., the three loss values seem to converge to a stable regime.\\n\\n7. Note on latent representation:\", \"perhaps_the_most_important_message_we_want_to_deliver_is\": \"We are improving upon content vs. style disentanglement in acoustic signals by means of adversarial & collaborative learning. Extracting ``acoustic styles'' such as prosody has been an extremely difficult task. The state-of-the-art GST achieves this with an attention mechanism. But, as we argue in our paper, their loss construction makes it difficult to ``wipe out'' content information from acoustic signals; this is also shown in their qualitative results where prosody style transfer fails when the length of the reference audio clip is different from what is appropriate for the content to be synthesized. Our novel loss construction enables careful conditioning of our model so that the two latent representations, content 'c' and style 's' embeddings, become more precise than the previous method could obtain. In particular, our paired and unpaired input forumation, and the adversarial & collaborative game makes our model better condition the latent space so that the content information is effectively ignored in style embedding vectors.\\n\\n8. Reference:\\nWe have incorporated those references in our revision.\"}",
"{\"title\": \"lack of details and proper comparisons\", \"review\": [\"This paper proposes to use GAN to disentangle style information from speech content. The presentation of the core idea is clear but IMO there are some key missing details and experiments.\", \"The paper mentions '....the model could simply learn to copy the waveform information from xaud to the output and ignore s....'\", \"-- Did you verify this is indeed the case? 1) The style embedding in Skerry-Ryan et al.'18 serves as a single bottleneck layer, which could prevent information leaking. What dimension did you use, and did you try to use smaller size? 2) The GST layer in Wang et al.'18 is an even more aggressive bottleneck layer, which could (almost) eliminate style info entangled with content info.\", \"The sampling process to get x_{aud}^{-} needs more careful justifications/ablations.\", \"-- Is random sampling enough? What if the model samples a x_{aud}^{-} that has the same speaking style as x_{aud}^{+}? (which could be a common case).\", \"Did you consider the idea in Fader Netowrks (Lample et al.'17)', which corresponds to adding a simple adversarial loss on the style embedding? It occurs to be a much simpler alternative to the proposed method.\", \"Table 1. \\\"Tacotron2\\\" is often referred to Shen et al.'18, not Skerry-Ryan et al.'18. Consider using something like \\\"Prosody-Tacotron\\\"?\", \"The paramerters used for comparisons with other models are not clear. Some of them are important detail (see the first point above)\", \"The author mentioned the distance between different clusters in the t-SNE plot. Note that the distance in t-SNE visualizations typically doesn't indicate anything.\", \"'TTS-GAN' is too general as the name for the proposed method.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good technical ideas, but suffers from clarity issues and weak evaluation.\", \"review\": \"Overview: This paper describes an approach to style transfer in end-to-end speech synthesis by extending the reconstruction loss function and augmenting with an adversarial component and style based loss component.\", \"summary\": \"This paper describes an interesting technical approach and the results show incremental improvement to matching a reference style in end-to-end speech synthesis. The three-component adversarial loss is novel to this task. While it has technical merit, the presentation of this paper make it unready for publication. The technical descriptions are difficult to follow in places, it makes some incorrect statements about speech and speech synthesis and its evaluation is lacking in a number of ways. After a substantial revision and additional evaluation, this will be a very good paper.\\n\\nThe title of the paper and moniker of this approach as \\u201cTTS-GAN\\u201d seems to preclude the fact that in the last few years there have been a number of approaches to speech synthesis using GANs. By using such a generic term, it implies that this is the \\u201cstandard\\u201d way of using a GAN for TTS. Clearly it is not. Moreover, other than the use of the term, the authors do not claim that it is. \\n\\nWhile the related works regarding style modeling and transfer in end-to-end TTS models are well described, prior work on using GANs in TTS is not. (This may or may not be related to the previous point.) For example, but not limited to:\\nYang Shan, Xie Lei, Chen Xiao, Lou Xiaoyan, Zhu Xuan, Huang Dongyan, and Li Haizhou, Statistical Parametric Speech Synthesis Using Generative Adversarial Networks Under a Multi-task Learning Framework, ASRU, 2017\\nYuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari, Text-to-speech Synthesis using STFT Spectra Based on Low- /multi-resolution Generative Adversarial Networks, ICASSP 2018\\nSaito Yuki, Takamichi Shinnosuke, and Saruwatari Hiroshi, Training Algorithm to Deceive Anti-spoofing Verification for DNN-based Speech Synthesis, ICASSP, 2017.\", \"section_2_describes_speech_synthesis_as_a_cross_domain_mapping_problem_f\": \"S -> T, where S is text and T is speech. (Why a text-to-speech mapping is formalized as S->T is an irrelevant mystery.) This is a reasonable formulation, however, this is not a bijective mapping. There are many valid realizations s \\\\subset T of a text utterance t \\\\in S. The true mapping F is one-to-many. Contrary to the statement in Section 2, there should not be a one-to-one correspondence between input conditions and the output audio waveform and this should not be assumed. This formalism can be posed as a simplification of the speech synthesis mapping problem. Overall Section 2 lays an incorrect and unnecessary formalism over the problem, and does very little in terms of \\u201cbackground\\u201d information regarding speech synthesis or GANs. I would recommend distilling the latter half of the last paragraph. This content is important -- the goal of this paper is to disentangle the style component (s) from the \\u201ceverything else\\u201d component (z) in x_{aud} by which the resultant model can be correctly conditioned on s and ignore z.\\n\\nSection 3.2 Style Loss: The parallel between artistic style in vision and speaking style in speech is misplaced. Artistic style can be captured by local information by representing color choices, brush technique, etc. 
Speaking style and prosodic variation more broadly is suprasegmental. That is it spans multiple speech segments (typically defined as phonetic units, phonemes, etc.). It is specifically not captured in local variations in the time-frequency domain. The local statistics of a mel-spectrogram are empoverished to capture the long term variation spanning multiple syllables, words, and phrases that contribute to \\u201cspeaking style\\u201d. (In addition to the poor motivation of using low-level filters to capture speaking style, the authors describe \\u201cprosody\\u201d as \\u201crepresenting the low-level characteristics of sound\\u201d. This is not correct.) These filter activations are more likely to capture voice quality and speaker identity characteristics than prosody and speaking style.\\n\\nSection 3.2: Reconstruction Loss: The training in this section is difficult to follow. Presumably, l is the explicit style label from the data, the emotion label for EMT-4 and (maybe) speaker id for VCTK. It is a rather confusing choice to refer to this as \\u201clatent\\u201d since this carries a number of implications from variational techniques and bayesian inference. Similarly, It is not clear how these are trained. Specifically, both terms are minimized w.r.t. C but the second is minimized only w.r.t G. I would recommend that this section be rewritten to describe both the loss functions, target variables, and the dependent variables that are optimized during training.\\n\\nSection 3.3 How are the coefficients \\\\alpha and \\\\beta determined?\\n\\nSection 3.3 \\u201cWe train TTS-GAN for at least 200k steps.\\u201d Why be vague about the training?\\n\\nSection 3.3. \\u201cDuring training R is fixed weights\\u201d Where do these weights come from? Is it an ImageNet classifier similar with a smaller network than VGG-19?\", \"section_5\": \"The captions of Tables 1 and 2 should provide appropriate context for the contained data. There is not enough information to understand what is described here without reference to the associated text.\\n\\nSection 5.1: The content and style swapping is not evaluated. While samples are provided, it is not at all clear that the claims made by the authors are supported by the data. A listening study where subjects are asked to identify the intended emotion of the utterance would be a convincing way to demonstrate the effectiveness of this technique. As it stands, I would recommend removing the section titled \\u201cContent and style swapping\\u201d as it is unempirical. If the authors are committed to it, it could be reasonably moved to the conclusions or discussion section as anecdotal evidence.\\n\\nSection 5.3: Why use a pre-trained WaveNet based ASR model? What is its performance on the ground truth audio? This is a valuable baseline for the WER of the synthesized material.\\n\\nSection 5.3 Style Transfer: Without support that the subject ratings in this test follow a normal distribution a t-test is not a valid test to use here. A non-parametric test like a Mann-Whitney U test would be more appropriate.\\n\\nSection 5.3 Style Transfer: \\u201cEach listened to all 15 permutations of content\\u201d. 
From the previous paragraph there should be 60 permutations.\\n\\nSection 5.3 Style Transfer: Was there any difference in the results from the 10 sentences from the test set, and the 5 drawn from the web?\", \"typos\": \"\", \"section_1_introduction\": \"\\u201cx_{aud}^{+} is unpaired\\u201d -> \\u201cx_{aud}^{-} is unpaired\\u201d\", \"section_2\": \"\\u201cHere, We\\u201d -> \\u201cHere, we\\u201d\\nSection 5.3 \\u201cTachotron\\u201d -> \\u201cTacotron\\u201d\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review of \\u201cTTS-GAN: A GENERATIVE ADVERSARIAL NETWORK FOR STYLE MODELING IN A TEXT-TO-SPEECH SYSTEM\\u201d\", \"review\": \"This paper proposes to use a generative adversarial network to model speaking style in end-to-end TTS. The paper shows the effectiveness of the proposed method compared with Takotron2 and other variants of end-to-end TTS with intensive experimental verifications. The proposed method of using adversarial and collaborative games is also quite unique. The experimental part of the paper is well written, but the formulation part is difficult to follow. Also, the method seems to be very complicated, and I\\u2019m concerning about the reproducibility of the method only with the description in Section 3.\\n\\nComments\\n- Page 2, line 2: x _{aud} ^{+} -> x _{aud} ^{-} (?)\\n- Section 2: $T$ is used for audio and the number of words.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good adversarial domain adaptation ideas for TTS - but details on architecture needed\", \"review\": \"This paper proposes a method to synthesize speech from text input, with the style of an input voice provided with the text. Thus, we provide content - text - and style - voice. It leverages recent - phenomenal - progress in TTS with Deep Neural Networks as seen from exemplar works such as Tacotron (and derivatives), DeepVoice, which use seq2seq RNNs and Wavenet families of models. The work is extremely relevant in that audio data is hard to generate (expensive) and content-style modeling could be useful in a number of practical areas in synthetic voice generation. It is also quite applicable in the related problem of voice conversion. The work also uses some quite complex - (and very interesting!) - proposals to abstract style, and paste with content using generative modeling. I am VERY excited by this effort in that it puts together a number of sophisticated pieces together, in what I think is a very sensible way to implement a solution to this very difficult problem. However, I would like clarifications and explanations, especially in regards to the architecture.\", \"description_of_problem\": \"The paper proposes a fairly elaborate setup to inject voice style (speech) into text. At train time it takes in text samples $x_{txt}$, paired voice samples (utterances that have $x_{txt}$ as content) $s+$ and unpaired voice samples $s-$, and produces two voice samples $x+$ (for paired <txt, utterance>) and $x-$ (for unpaired txt/utterance). The idea is that at test time, we pass in a text sample $x_{txt}$ and an UNPAIRED voice sample $x_{aud}$ and the setup produces voice in the style of $x_{aud}$ but whose content is $x_{txt}$, in other words it generates synthetic speech saying $x_{txt}$. The paper goes on to show performance metrics based on an autoencoder loss, WER and t-SNE embeddings for various attributes.\", \"context\": \"The setup seems to be built upon the earlier work by Taigman et al (2016) which has the extremely interesting conception of using a {\\\\it ternary} discriminator loss to carry out domain adaptation between images. This previous work was prior to the seminal CycleGAN work for image translation, which many speech works have since used. Interestingly, the Taigman work also hints at a 'common' latent representation a la UNIT using coupled VAE-GANs with cycle consistency (also extremely pertinent), but done differently. In addition to the GAN framework by Taigman et al, since this work is built upon Tacotron and the GST (Global Style Tokens) work that followed it, the generative setup is a sophisticated recurrent attention based seq2seq model.\", \"formulation\": \"A conditional formulation is used wherein the content c (encoding generated by text) is passed along with other inputs in the generator and discriminator. The formulation in Taigman assumes that there is an invariant representation in both (image) domains with shared features. To this, style embeddings (audio) gets added on and then gets passed into the generator to generate the speech. Both c and s seem to be encoder outputs in the formulation. The loss components of what they call \\u2018adversarial\\u2019, \\u2018collaborative\\u2019 and \\u2018style\\u2019 losses. \\n\\nAdversarial losses\\nThe ternary loss for D consists of \\n\\nDiscriminator output from \\u2018paired\\u2019 style embedding (i.e. 
text matching the content of the paired audio sample)\\nDiscriminator output from \\u2018unpaired\\u2019 style embedding (i.e. text paired with a random sample of some style)\\nDiscriminator output from the target ground truth style. The paper uses x_+, so I would think that it uses the paired sample (i.e. from the source) style.\\n\\nGenerator loss (also analogous to Taigman et al) consists of generations from paired and unpaired audio, possibly a loose analogue to source and target domains, although in this case we can\\u2019t as such think of \\u2018+\\u2019 as the source domain, since the input is text. \\n\\nCollaborative losses \\nThis has two components, one for style (Gatys et al 2016) and a reconstruction component. The reconstruction component again has two terms, one to reconstruct the paired audio output \\u2018x+=x_audio+\\u2019 - so that the input content is reproduced - and the other to encourage reconstruction of the latent code.\", \"datasets_and_results\": \"\", \"they_use_two_datasets\": \"one, an internal \\u2018EMT-4\\u2019 dataset with 20k+ English speakers, and the other, the VCTK corpus. Comparisons are made with a few good baselines in Tacotron2, GST and DeepVoice2.\\n\\nOne comparison technique to test disentanglement ability is to compare autoencoder reconstructions, with the idea that a setup that has learnt to disentangle would produce higher reconstruction error because it has learnt to separate style and content. \\n\\nt-SNE embeddings are presented to show visualizations of various emotion styles (neutral, angry, sad and happy), and separation of male and female voices. A WER metric is also presented, whereby generations are passed into a classifier (a WaveNet-based ASR system). All the metrics above seem to compare excellently (better than?) with the others.\", \"questions_and_clarifications\": \"(Minor) There\\u2019s a typo on page 2, line 2. x_{aud}^+ should be x_{aud}^-.\", \"clarification_on_formulation\": \"Making the analogy (is that even the right way of looking at this?) that the \\u2018source\\u2019 domain is \\u2018+\\u2019, and the target domain is \\u2018-\\u2019, in equation (5), the last term of the ternary discriminator has the source domain (x_{aud}^+) in it, while the Taigman et al paper uses the target term. Does this matter? I would think \\u2018no\\u2019, because we have a large number of terms here and each individual term in and of itself might not be relevant, nor is the current work a direct translation of the Taigman et al work. Nevertheless, I would like clarification, if possible, on the discrepancy and why we use the \\u2018+\\u2019 samples.\", \"clarification_on_reconstruction_loss\": \"I think that, as presented, equation (8) is misleading. Apparently, we are sampling from the latent space of style and content embeddings for paired data. The notation seems to be quite consistent with that of the VAE, where we have a reconstruction and a recognition model, and in effect equation (8) is sampling from the latent space in a stochastic way. However, as far as I can see, the latent space here produces deterministic embeddings, in that c = f(x_{txt}) and s = g(x_{aud}^+), with the distribution itself being a delta function. Also, the notation q used in this equation most definitely indicates a variational distribution, which I would think is misleading (unless I have misinterpreted what the style tokens mean). 
At any rate, it would help to show how the style token is computed and why it is not deterministic.\", \"clarification_on_latent_reconstruction_loss\": \"In equation (9), how is the latent representation \\u2018l\\u2019 computed? While I can intuitively see that the latent space \\u2018l\\u2019 (or z, in more common notation) would be the \\u2018same\\u2019 between real audio samples and the \\u2018+\\u2019, \\u2018-\\u2019 fake samples, it seems to me that they would be related to s (as the paper says, \\u2018C\\u2019 and \\u2018Enc_s\\u2019 share all conv layers) and the text. But what, in physical terms, is it producing? Is it like the shared latent space in the UNIT work, or the invariant representation in Taigman? This could be made clearer with a block diagram of the architecture.\\n\\n(Major) Clarification on network architecture\\nThe work references Tacotron\\u2019s GST work (Wang et al 2018) and the related Skerry-Ryan work as the stem architecture, with separate networks for style embeddings and for content (text). While the architecture itself might be available in the stem work by Wang et al, I think we need some diagrams for the current work as well, for a high-level picture. Although it is described in words in section 3.3, I do not get a clear idea of what the encoder/decoder architectures look like. I was also surprised not to see attention plots, which are ubiquitous in this kind of work. Furthermore, in the notes to the \\u2018inference\\u2019 network \\u2018C\\u2019, it is stated that C and Enc_s share all conv layers. Again, a diagram might be helpful - this also applies to the discriminator. \\n\\nClarification on stability/mode collapse: Could the authors clarify how easily this model trained under the adversarial losses?\", \"note_on_latent_representation\": \"To put the above points in perspective, a small note on what this architecture does with regard to the meaning of the latent codes would be useful. The Taigman et al 2016 paper talks about the f-constancy condition (and 'invariance'). Likewise, in the UNIT paper by Ming-Yu Liu - which is basically a set of coupled VAEs + cycle-consistency losses - there is the notion of a shared latent space. A little discussion of these aspects would make the paper much more insightful to the domain adaptation practitioner.\", \"reference\": \"This reference - Adversarial feature matching for text generation - (https://arxiv.org/abs/1706.03850) contains a reconstruction stream (as perhaps do many other papers) and might be instructive.\", \"other_relevant_works_in_speech_and_voice_conversion\": \"This work comes to mind, using the StarGAN setup, and also containing a survey of relevant approaches to voice conversion. Although the current work is for TTS, I think it would be useful to include speech papers carrying out domain adaptation for other tasks.\", \"stargan_vc\": \"Non-parallel many-to-many voice conversion with star generative adversarial networks.\", \"https\": \"//arxiv.org/abs/1806.02169\\n\\nI would rate this paper as acceptable if the authors clarify my concerns, in particular those about the architecture. It is also hard to assess reproducibility in a complex architecture such as this.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
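The first TTS-GAN review above recommends a Mann-Whitney U test over a t-test for subjective listener ratings, since ordinal ratings rarely satisfy the normality assumption. As a hedged illustration of the difference (synthetic ratings, not data from the paper), a minimal Python sketch with scipy:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical 5-point ratings for two synthesis conditions (illustrative only).
    ratings_a = rng.integers(1, 6, size=60)
    ratings_b = np.clip(ratings_a + rng.integers(0, 2, size=60), 1, 5)

    t_stat, t_p = stats.ttest_ind(ratings_a, ratings_b)       # assumes normality
    u_stat, u_p = stats.mannwhitneyu(ratings_a, ratings_b,
                                     alternative="two-sided")  # distribution-free
    print(f"t-test p={t_p:.3f}, Mann-Whitney U p={u_p:.3f}")

The non-parametric test ranks the pooled ratings instead of averaging them, so it remains valid for skewed or ordinal rating distributions.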
rJxcHnRqYQ | Local Binary Pattern Networks for Character Recognition | [
"Jeng-Hau Lin",
"Yunfan Yang",
"Rajesh K. Gupta",
"Zhuowen Tu"
] | Memory- and computation-efficient deep learning architectures are crucial to the continued proliferation of machine learning capabilities to new platforms and systems. Binarization of operations in convolutional neural networks has shown promising results in reducing model size and improving computing efficiency.
In this paper, we tackle the character recognition problem using a strategy different from the existing literature, proposing local binary pattern networks, or LBPNet, that can learn and perform bit-wise operations in an end-to-end fashion. LBPNet uses local binary comparisons and random projection in place of conventional convolution (or approximation of convolution) operations, providing an important means of improving memory and speed efficiency that is particularly suited to small-footprint devices and hardware accelerators. These operations can be implemented efficiently on different platforms, including direct hardware implementations. LBPNet demonstrates its particular advantage on the character classification task, where the content is composed of strokes. We applied LBPNet to benchmark datasets such as MNIST, SVHN, DHCD, ICDAR, and Chars74K and observed encouraging results. | [
"deep learning",
"local binary pattern",
"supervised learning",
"hardware-friendly"
] | https://openreview.net/pdf?id=rJxcHnRqYQ | https://openreview.net/forum?id=rJxcHnRqYQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1ePcf8rg4",
"rkl6xVHcam",
"BygIVXH5pX",
"BklHrzScTm",
"r1xw-56i2m",
"rJxFat292X",
"rklNFUxr2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545065102909,
1542243317234,
1542243118253,
1542242876584,
1541294590544,
1541224896982,
1540847228321
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1568/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1568/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1568/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1568/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1568/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1568/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1568/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposed a LBPNet for character recognition, which introduces the LBP feature extraction into deep learning. Reviewers are confused on implementation and not convinced on experiments. The only score 6 reviewer is also concerned \\\"Empirically weak, practical advantage wrt to literature unclear\\\". Only evaluating on MNIST/SVHN etc is not convincing to demo the effectiveness of the proposed method.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"unconvincing\"}",
"{\"title\": \"We appreciate your valuable feedback. Please see our answers below.\", \"comment\": \"[from authors:]\", \"q\": \"\\\"English errors.\\\"\", \"a\": \"Thanks for pointing them out. We have revised them accordingly in the paper.\"}",
"{\"title\": \"We appreciate your valuable feedback. Please see our answers below.\", \"comment\": \"[from authors:]\", \"q\": \"\\\"Performance on affNIST, face recognition and pedestrian detection.\\\"\", \"a\": \"This is a good point which has also been raised by the other reviewers. Please see our reply to the previous reviewers and also see the newly-added section \\\"Preliminary Results on Object and Deformed Patterns\\\" on page 8.\"}",
"{\"title\": \"We appreciate your valuable feedback. Please see our answers below.\", \"comment\": \"[from authors:]\", \"q\": \"\\\"Does it can be easily used for other tasks such as face recognition or object detection on some relatively large datasets?\\\"\", \"a\": \"This is a good point. To answer this question, we evaluated LBPNet on three additional datasets including the INRIA pedestrian dataset, the FDDB face dataset, and the affNIST dataset, and have observed encouraging results. We report them in a new section \\\"Preliminary Results on Objects and Deformed Patterns\\\" on page 8. LBPNet indeed achieves results on par with CNN but at a significantly reduced computation and model complexity, similar to the character case. This validates the effectiveness of LBPNet on broader object types beyond characters.\"}",
"{\"title\": \"Borderline paper\", \"review\": \"1. The paper introduces the idea of some existing hand-crafted features into the deep learning framework, which is a smart way for building light weighted convolutional neural networks.\\n\\n2. I have noticed that binary patterns used in the paper are trainable, which means that these binary patterns can be seen as learned convolution filters with extremely space and computational complexity. Thus, the proposed method can also be recognized as a kind of binary network. \\n\\n3. The baseline BCNN has a different architecture to the network using the proposed method. Thus, comparisons shown in Table 3 and Table 4 are somewhat unfair.\\n\\n4. The capability of the proposed method was only verified on character recognition datasets. Does it can be easily used for other tasks such as face recognition or object detection on some relatively large datasets?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Empirically weak, practical advantage wrt to literature unclear.\", \"review\": \"In this work, a neural network that uses local binary patterns instead of kernel convolutions is introduced. Using binary patterns has two advantages: a) it reduces the network definition to a set of binary patterns (which requires much less storage than the floating point descriptions of the kernel weights used in CNNs) and b) allows for fast implementations relying only on logical operations (particularly fast on dedicated hardware).\\n\\nThis work is mostly descriptive of a proposed technique with no particular theoretical performance guarantees, so its value hinges mostly on its practical performance on real data. In that sense, its evaluation is relatively limited, since only figures for MNIST and SVHN are provided.\\n\\nA list of additional datasets is provided in Table 5, but only the performance metric is listed, which is meaningless if it is not accompanied with figures for size, latency and speedup. The only takeway about the additional datasets is that the proposed LBPNet can match or outperform a weak CNN baseline, but we don't know if the latter achieves state-of-the-art performance (previous figures of the baseline CNN suggest it doesn't) and we don't know if there's significant gain in speed or size.\\n\\nRegarding MNIST and SVHN, which are tested in some more detail, again, we are interested in the performance-speed (or size) tradeoff, and it is unclear that the current proposal is superior. The baseline CNN does not achieve state of the art performance (particularly in SVHN, for which the state-of-the-art is 1.7% and the baseline CNN achieves 6.8%). For SVHN, BCNN has a much better performance-speed tradeoff than the baseline, since it is both faster and higher performance. Then, the proposed method, LBPNet, has much higher speed, but lower performance than BCNN. It is unclear how LBPNet's and BCNN's speeds would compare if we were to match their performances. For this reason, it is unclear to me that LBPNet is superior to BCNN on SVHN.\\n\\nAlso the numbers in boldface are confusing, aren't they just incorrect for both the Latency and Error in MNIST? Same for the Latency in SVHN.\\n\\nThe description of the approach is reasonably clear and clarifying diagrams are provided. The backpropagation section seems a bit superficial and could be improved. For instance, backpropagation is computed wrt the binary sampling points, as if these were continuous, but they have been defined as discrete before. The appendix contains a bit more detail, where it seems that backpropagation is alternated with rounding. It's not justified why this is a valid gradient descent algorithm.\\n\\nAlso how the scaling k of the tanh is set is not explained clearly. Do you mean that with more sampling points k should be larger to keep the outputs of the approximate comparison operator close to 0 and 1?\", \"minor\": \"What exactly in this method makes it specific to character recognition? Since you are trying to capture both high-level and low-level frequencies, it seems you'd be capturing all the relevant information. SVHN data are color images with objects (digits) in it, what is the reason that makes other objects not be detectable with this approach?\\n\\nEnglish errors are pervasive throughout the paper. 
A non-exhaustive list:\\n\\nFig 4.b: X2 should be Y2\\nparticuarly\\n\\\"to a binary digits\\\"\\n\\\"In most case\\\"\\n\\\"0.5 possibility\\\"\\n\\\"please refer to Sec ..\\\"\\n\\\"FORWARD PROPATATION\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"interesting idea, but quite confused on implementation and not convinced on experiments\", \"review\": \"This paper proposed a LBPNet for character recognition, which introduces the LBP feature extraction into deep learning. Personally I think that this idea is interesting for improving the efficiency of CNNs, as traditionally LBP has been demonstrated its good performance and efficiency in some vision tasks such as face recognition or pedestrian detection. However, I do have the following concerns about the paper:\\n\\n1. Calculation/Implementation of Eq. 4: I do not quite understand how it derived, and how to use Eq. 3 in calculation. I suggest the authors to explain more details, as this is the key for implementation of LBP layers.\\n\\n2. Effects of several factors on performance in the experiments are missing: (1) random projection map in Fig. 5, (2) $k$ in Eq. 2, and (3) the order of images for computing RHS of Eq. 3. In order to better demonstrate LBPNet, I suggest to add such experiments, plus training/testing behavior comparison of different networks. \\n\\n3. Does this network work with more much deeper?\\n\\n4. Data: The datasets used in the experiments are all well-aligned. This makes me feel that the RHS of Eq. 3 does make sense, because it will capture the spatial difference among data, like temporal difference in videos. How will the network behave on the dataset that is not aligned well, like affnist dataset?\\n\\n5. How will this network behave for the applications such as face recognition or pedestrian detection where traditionally LBP is applied?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
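The LBPNet reviews above revolve around one mechanism: binary comparisons between sampled pixels, relaxed by a scaled tanh so that gradients can flow, with the scale k trading off smoothness against how close the soft bits stay to 0 and 1. Below is a minimal PyTorch sketch of that general idea; the sampling offsets and k are illustrative assumptions, not the authors' exact configuration:

    import torch

    def soft_lbp(patch, offsets, center=(1, 1), k=10.0):
        # patch: (H, W) tensor; offsets: list of (dy, dx) sampling positions.
        c = patch[center]
        code = torch.zeros((), dtype=patch.dtype)
        for i, (dy, dx) in enumerate(offsets):
            x = patch[center[0] + dy, center[1] + dx]
            bit = 0.5 * (torch.tanh(k * (x - c)) + 1)  # soft version of 1[x > c]
            code = code + bit * 2 ** i                 # weight bits as in a classic LBP code
        return code

    patch = torch.rand(3, 3, requires_grad=True)
    code = soft_lbp(patch, offsets=[(-1, -1), (-1, 0), (-1, 1), (0, 1)])
    code.backward()  # gradients reach the compared pixels; larger k -> harder bits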
SJe9rh0cFX | On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks | [
"Yukun Ding",
"Jinglan Liu",
"Jinjun Xiong",
"Yiyu Shi"
] | Compression is a key step in deploying large neural networks on resource-constrained platforms. As a popular compression technique, quantization constrains the number of distinct weight values and thus reduces the number of bits required to represent and store each weight. In this paper, we study the representation power of quantized neural networks. First, we prove the universal approximability of quantized ReLU networks on a wide class of functions. Then we provide upper bounds on the number of weights and the memory size for a given approximation error bound and the bit-width of weights for function-independent and function-dependent structures. Our results reveal that, to attain an approximation error bound of $\epsilon$, the number of weights needed by a quantized network is no more than $\mathcal{O}\left(\log^5(1/\epsilon)\right)$ times that of an unquantized network. This overhead is of much lower order than the lower bound of the number of weights needed for the error bound, supporting the empirical success of various quantization techniques. To the best of our knowledge, this is the first in-depth study on the complexity bounds of quantized neural networks. | [
"Quantized Neural Networks",
"Universial Approximability",
"Complexity Bounds",
"Optimal Bit-width"
] | https://openreview.net/pdf?id=SJe9rh0cFX | https://openreview.net/forum?id=SJe9rh0cFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bkx6xzFmgE",
"SJgusz_wpX",
"Byx9YluPp7",
"rylbKJuw67",
"rygt00vwpm",
"rklKqbep3X",
"BkeWLpq_37",
"HyeAz3FNnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544946165285,
1542058655561,
1542058114084,
1542057848809,
1542057681373,
1541370256625,
1541086536787,
1540819989977
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1567/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1567/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1567/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1567/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1567/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1567/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1567/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1567/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper addresses a well motivated problem and provides new insight on the theoretical analysis of representational power in quantized networks. The results contribute towards a better understanding of quantized networks in a way that has not been treated in the past.\\n\\nThe most moderate rating (marginally above acceptance threshold) explains that while the paper is technically quite simple, it gives an interesting study and blends well into recent literature on an important topic. \\n\\nA criticism is that the approach uses modules to approximate the basic operations of non quantized networks. As such it not compatible with quantizing the weights of a given network structure, but rather with choosing the network structure under a given level of quantization. However, reviewers consider that this issue is discussed directly and clearly in the paper. \\n\\nThe reviewers report to be only fairly confident about their assessment, but they all give a positive or very positive evaluation of the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good motivation and new theoretical insights\"}",
"{\"title\": \"Submission Updated\", \"comment\": \"We thank all reviewers for their time and valuable comments. We are grateful that reviewers found this paper interesting, important, and clear. We have carefully revised the paper following the reviewers\\u2019 suggestions to further improve the presentation and have updated the submission. The revisions we made include the following:\\n\\n1. We made the definition of two types of quantization, linear quantization and nonlinear quantization, and two types of structure, function-dependent structure and function-independent structure, more formal and moved them to Section 3 \\u201cModels and Assumptions\\u201d.\\n2. We added more discussion of the difference between our work and that of (Yarotsky, 2017), and moved that discussion from Section 2 \\u201cRelated Works\\u201d to Section 1 \\u201cIntroduction\\u201d. \\n3. We revised Figure 2 in the Appendix in connection with the proof of Proposition 1 and added some detailed descriptions.\\n4. A few minor fixes like re-organizing sentences and correcting typos.\"}",
"{\"title\": \"Response to Review #1 titled \\u201cA very interesting and rather clear paper on quantized ReLU neural networks\\u201d\", \"comment\": \"We thank you for your comments and support. We appreciate that you value our new and significant results in this new direction. We have revised Figure 2 in the Appendix in connection with the proof of Proposition 1 and added some detailed descriptions as below. A few other revisions are also made in the Appendix to improve the overall presentation. Please check the new version for details.\\n\\u201c\\nNote that a straightforward implementation will have to scale $g^{\\\\circ i}(x)$ separately (multiply by different numbers of $\\\\frac{1}{2}$) before subtracting them from $x$ because each $g^{\\\\circ i}(x)$ have a different coefficient. Then the width of the network will be $\\\\Theta(r)$. Here we use a ``pre-scale'' method to reduce the network width from $\\\\Theta(r)$ to a constant. The network constructed is shown in Figure 2. The one-layer sub-network that implements $g(x)$ and the one-layer sub-network that scales the input by $4$ are denoted as $B_g$ and $B_m$ respectively. Some units are copied to compensate the scaling caused by $\\\\frac{1}{2}$. The intermediate results $g^{\\\\circ i}(x)$ are computed by the concatenation of $B_g$ at the $(i+1)$-th layer. The first $B_m$ takes $x$ as input and multiply it by $4$. The output of $i$-th $B_m$ is subtracted by $g^{\\\\circ i}(x)$ and then fed to the next $B_m$ to be multiplied by $4$ again. There are $r$ layers of $B_m$ and all $g^{\\\\circ i}(x)$ are scaled by $2^{2(r-i)}$ respectively. As a result, we obtain $2^{2r}x-\\\\sum_{i=1}^{r}2^{2(r-i)}g^{\\\\circ i}(x)$ after the last $B_m$. Then it is scaled by $2^{-2r}$ in the later $2r$ layers to get $f^r_s(x)$. In this way, we make all $g^{\\\\circ i}(x)$ sharing the same scaling link and a constant width can be achieved.\\n\\\"\"}",
"{\"title\": \"Response to Review #3 titled \\u201cReasonable paper on an interesting topic\\u201d\", \"comment\": \"We thank you for your time and thoughtful review. In fact, the change of the topology with the level of quantization is not an issue but part of the construction of the proof. Note that the goal of this work is to provide a theoretical proof on the expressive power of quantized neural networks without any assumptions on how we obtain the quantized networks. By allowing the networks to have different topologies, we are able to mathematically prove how we can use constructed quantized networks to approximate the same target function within any given error bound. With this flexibility, we are able to obtain the bound on the number of parameters and make a fair comparison between quantized networks and unquantized networks. Since there is no previous work on the theoretical expressive power of quantized neural networks, we consider our work as a good first attempt. Of course, a natural research question is whether we can extend the theoretical result to a given network (but not a given target function as studied in this paper). We would like to explore such a question in our future research.\"}",
"{\"title\": \"Response to Review #2 titled \\u201cReview for On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks\\u201d\", \"comment\": \"Thank you for your positive and constructive feedback. We have made the revision to further improve the presentation following your suggestions. Please check the new version for details.\\n\\nWe made the definition of two types of quantization, linear quantization and nonlinear quantization, and two types of structure, function-dependent structure and function-independent structure, more formal as below and moved them to Section 3 \\u201cModels and Assumptions\\u201d.\", \"here_are_the_paragraphs_related_to_the_definition_of_linear_vs_nonlinear_quantization\": \"\\u201c\\nWe denote the finite number of distinct weight values as $\\\\lambda$ ($ \\\\lambda \\\\in \\\\mathbb{Z}^{+}$ and $\\\\lambda \\\\geq 2$), for both linear and nonlinear quantization. For linear quantization, without loss of generality, we assume the finite number of distinct weight values are given as $ \\\\{-1, \\\\frac{1}{\\\\lambda},\\\\frac{2}{\\\\lambda},\\\\dots,\\\\frac{\\\\lambda-1}{\\\\lambda}\\\\}$, where $\\\\{\\\\frac{1}{\\\\lambda},\\\\frac{2}{\\\\lambda},\\\\dots,\\\\frac{\\\\lambda-1}{\\\\lambda}\\\\}$ are uniformly spaced (hence called ``linear\\u2019\\u2019) in $(0,1)$ and $-1$ is used to obtain the negative weight values. For nonlinear quantization, we assume the finite number of distinct weight values are not constrained to any specific values, i.e., they can take any values as needed. \\n\\u201d\\n\\nHere are the paragraphs related to the definition of the function-dependent vs independent structures.\\n\\u201c\\nWhen constructing the network to approximate any target function $f$, we consider two scenarios for deriving the bounds. The first scenario is called function-dependent structure, where the constructed network topology and their associated weights are all affected by the choice of the target function. In contrast, the second scenario is called function-independent structure, where the constructed network topology is independent of the choice of the target function in $ f\\\\in\\\\mathcal{F}_{d,n}$ with a given $\\\\epsilon$. The principle behind these design choices (the network topology constructions and the choice of weights) is to achieve a tight upper bound as much as possible.\\n\\u201d \\n\\nWe added more discussion of the difference between our work and that of (Yarotsky, 2017), and moved that discussion from Section 2 \\u201cRelated Works\\u201d to Section 1 \\u201cIntroduction\\u201d. Details are quoted as follows:\\n\\u201c\\nWe follow the idea from (Yarotsky, 2017) to prove the complexity bound by constructing a network, but with new and additional construction components essential for quantized networks. Specifically, given the number of distinct weight values $\\\\lambda$ and a target function $f$, we construct a network that can approximate $f$ with an arbitrarily small error bound $\\\\epsilon$ to prove the universal approximability. The memory size of this network then naturally serves as an upper bound for the minimal network size. \\nThe high-level idea of our approach is to replace basic units in an unquantized network with quantized sub-networks that approximate these basic units. For example, we can approximate a connection with any weight in an unquantized network by a quantized sub-network that only uses a finite number of given weight values. 
Even though the approximation of any single unit can be made arbitrarily accurate in principle with unlimited resources (such as increased network depth), in practice, there exists some inevitable residual error at every approximation, all of which could propagate throughout the entire network. The challenge becomes, however, how to mathematically prove that we can still achieve an arbitrarily small end-to-end error bound even if these unavoidable residual errors caused by quantization can be propagated throughout the entire network. This paper presents a solution to the above challenge. In doing so, we have to propose a number of new ideas to solve related challenges, including judiciously choosing the proper finite weight values, constructing the approximation sub-networks as efficiently as possible (to have a tight upper bound), and striking a good balance among the complexities of different approximation steps.\\n\\u201d\"}",
"{\"title\": \"Review for On the Universal Approximability and Complexity Bounds of Quantized ReLU Neural Networks\", \"review\": \"This paper studies the expressive power of quantized ReLU networks from a theoretical point of view. This is well-motivated by the recent success of using quantized neural networks as a compression technique. This paper considers both linear quantization and non-linear quantization, both function independent network structures and function dependent network structures. The obtained results show that the number of weights need by a quantized network is no more than polylog factors times that of a unquantized network. This justifies the use of quantized neural networks as a compression technique.\\n\\nOverall, this paper is well-written and sheds light on a well-motivated problem, makes important progress in understanding the full power of quantized neural networks as a compression technique. I didn\\u2019t check all details of the proof, but the structure of the proof and several key constructions seem correct to me. I would recommend acceptance. \\n\\nThe presentation can be improved by having a formal definition of linear quantized networks and non-linear quantized networks, function-independent structure and function-dependent structure in Section 3 to make the discussion mathematically rigorous. Also, some of the ideas/constructions seem to follow (Yarotsky, 2017). It seems to be a good idea to have a paragraph in the introduction to have a more detailed comparison with (Yarotsky, 2017), highlighting the difference of the constructions, the difficulties that the authors overcame when deriving the bounds, etc.\", \"minor_comment\": \"First paragraph of page 2: extra space after ``to prove the universal approximability\\u2019\\u2019.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Reasonable paper on an interesting topic\", \"review\": \"The paper deals with the expressibility of quantized neural network, meaning where all weights come from a finite and small sized set. It proves that functions satisfying standard assumptions can be represented by quantized ReLU networks with certain size bounds, which are comparable to the bounds available in prior literature for general ReLU networks, with an overhead that depends on the level of quantization and on the target error.\\n\\nThe proofs generally go by simulating non-quantized ReLU networks with quantized ones, by means of replacing their basic operations with small quantized networks (\\\"sub-networks\\\") that simulate those same operations with a small error. Then the upper bounds follow from known results on function approximation with (non-quantized) ReLU networks, with the overhead incurred by introducing the sub-networks.\\nNotably, this approach means that the topology of the network changes. As such it not compatible with quantizing the weights of a given network structure, which is the more common scenario, but rather with choosing the network structure under a given level of quantization. This issue is discussed directly and clearly in the paper.\\n\\nOverall, while the paper is technically quite simple, it forms an interesting study and blends well into recent literature on an important topic. It is also well written and clear to follow.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A very interesting and rather clear paper on quantized ReLU neural networks\", \"review\": \"The authors propose in this paper a series of results on the approximation capabilities of neural networks based on ReLU using quantized weights. Results include upper bounds on the depth and on the number of weights needed to reach a certain approximation level given the number of distinct weights usable. The paper is clear and as far as I know the results are both new and significant. My only negative remark is about the appendix that could be clearer. In particular, I think that figure 2 obscures the proof of Proposition 1 rather than the contrary. I think it might be much clearer to give an explicit neural network approximation of x^2 for say r=2, for instance.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
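Reviewer 1 above asks for an explicit approximation of x^2 for small r. Below is a minimal numeric sketch, assuming the Yarotsky-style construction the rebuttal quotes: compose a ReLU "tent" map g with itself and subtract scaled copies from x; the empirical error can be checked against the 2^{-(2r+2)} interpolation bound:

    import numpy as np

    relu = lambda t: np.maximum(t, 0.0)
    g = lambda t: 2 * relu(t) - 4 * relu(t - 0.5)  # tent map on [0, 1], built from ReLUs

    def f_r(x, r):
        approx, gi = x.copy(), x.copy()
        for i in range(1, r + 1):
            gi = g(gi)                              # g composed with itself i times
            approx = approx - gi / 2 ** (2 * i)
        return approx                               # piecewise-linear approximation of x^2

    x = np.linspace(0.0, 1.0, 1001)
    for r in (2, 4, 8):
        err = np.max(np.abs(f_r(x, r) - x ** 2))
        print(f"r={r}: max error {err:.2e} (bound {2 ** -(2 * r + 2):.2e})")

Each extra composition of g halves the dyadic grid spacing, so the approximation error shrinks by a factor of 4 per unit of depth.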
Syx9rnRcYm | A CASE STUDY ON OPTIMAL DEEP LEARNING MODEL FOR UAVS | [
"Chandan Kumar",
"Subrahmanyam Vaddi",
"Aishwarya Sarkar"
Over time, Unmanned Autonomous Vehicles (UAVs), especially autonomous flying
drones, have attracted considerable attention in Artificial Intelligence.
As electronic technology becomes smaller, cheaper and more efficient, the study
of UAVs has advanced substantially in recent years. From monitoring floods and
discerning the spread of algae in water bodies to detecting forest trails, their
applications are far and wide. Our work focuses on autonomous flying drones,
for which we establish a case study of efficiency, robustness and accuracy,
with results well supported by experiments.
We provide details of the software and hardware architecture used in the study.
We further discuss our implementation algorithms and present experiments that
compare three different state-of-the-art algorithms, namely TrailNet,
InceptionResnet and MobileNet, in terms of accuracy, robustness, power
consumption and inference time. In our study, we show that MobileNet produces
better results with far lower computational requirements and power consumption.
We also report the challenges we faced during our work, as well as a brief
discussion of future work to improve safety features and performance. | [
"Energy Efficiency",
"Autonomous Flying",
"Trail Detection"
] | https://openreview.net/pdf?id=Syx9rnRcYm | https://openreview.net/forum?id=Syx9rnRcYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1xEZFR7gE",
"BJeDB0wLTm",
"Bkl5ifgIT7",
"HJlVyMV63Q"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544968443533,
1541991998824,
1541960353914,
1541386716237
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1566/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1566/AnonReviewer5"
],
[
"ICLR.cc/2019/Conference/Paper1566/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1566/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper compared between different CNNs for UAV trail guidance. The reviewers arrived at a consensus on rejection due to lack of new ideas, and the paper is not well polished.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"not a well polished paper\"}",
"{\"title\": \"No substantial contribution, and large portions of the paper are unnecessary\", \"review\": \"Summary:\\n\\nThis paper considers the task of trail navigation task recently explored by Giusti et al. and Smolyanskiy et al. The authors describe their setup for physical experiments with a drone, and compare three neural network architectures for trail navigation on the IDSIA dataset. Experiments in a simulator are also reported.\", \"good_aspects_of_the_paper\": \"The pairing of simulation with trail navigation is an interesting idea, though it is not explored much in this paper.\", \"bad_aspects_of_the_paper\": \"Although the presence of physical experiments is suggested by pages 3 and 4, there are no physical experiments actually reported in the paper. In Section 5, this is revealed to be due to a hardware bug. The authors should not include these descriptions if they are not tied to reported experiments.\\n\\nOne of the main contributions of the paper is stated to be the comparison between neural network architectures. The two architectures compared to the TrailNet model from Smolyanskiy et al. are selected for their performance on the ImageNet classification task, and are shown to outperform TrailNet on salient metrics. However, comparing only three architectures is a very small comparison, and is not much of a contribution to the research problem.\\n\\nThis paper does not introduce new methods for approaching the problem of trail navigation. In its current form, it is a small comparison of existing classification architectures on the IDSIA dataset.\\n\\nThe paper also contains a number of minor errors. For instance, in Table 2 there is a footnote that leads nowhere, \\u201cintroduced in Sif\\u201d is cited incorrectly, \\u201cin recent times jet (2014)\\u201d is cited incorrectly, and the figures are grainy (this isn\\u2019t really an error, but do try to make figures crisp in the future, e.g. with pdf images).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting comparison between different SOTA CNN for UAV trail guidance, but seems weak in clarifying novelty.\", \"review\": \"The paper initiates a comparison between different SOTA convolutional neural networks for UAV trail guidance with the goal of finding a better motion control for drones. They use a simulator (but not a physical UAV) to perform their experiments, which consisted on evaluating tuned versions of Inception-Resnet and MobileNet models using the IDSIA dataset, achieving good results in the path generated.\\n\\nI think that the authors have perform an interesting evaluation framework, although not novel enough according to the literature. It is also great that the authors have included an explicit enumeration of all the dimensions relevant for their analysis (which are sometimes neglected), namely, computational cost, power consumption, inference time and robustness, apart from accuracy. \\n\\nHowever, I think the paper is not very well polished: there are quite a lot of grammatical, typing and aesthetic errors. Furthermore, the analysis performed is an A+B approach from previous works (Giusti et al.2016, and Smolyanskiy et al, 2017) and, thus, it is hard to find the novelty here, since similar comparisons have been already performed. Therefore, the paper needs major improvements in terms of clarity regarding the motivations in the introduction.\\n\\nAlso, one third of the paper is devoted to the software and hardware architecture used in the study, which I think it would be better fitted in an appendix section as it is of no added scientific value. Another weakpoint is that the authors were unable to run their DNN models on a physical drone in real time due to a hardware bug... I think the paper would benefit from a more robust (real) experimentation since, as they are, the presented results and experiments are far from conclusive.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Unclear contribution (did not implement in hardware, cited paper already did similar comparison of architectures)\", \"review\": \"The main context for this paper is two recent publications: Giusti et al.\\u2019s \\\"A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots\\u201d (2016) and Smolyanskiy et al.\\u2019s \\\"Toward Low-Flying Autonomous MAV Trail Navigation using Deep Neural Networks for Environmental Awareness\\u201d (2017).\\n\\nGiusti introduced a dataset of trail images (later called the \\u201cIDSIA dataset\\u201d) acquired by having a hiker wear three head-mounted cameras. The forward facing image is associated with a label \\u201cgo straight\\u201d, whereas the two side images are associated with labels for \\u201cgo left\\u201d and \\u201cgo right\\u201d. Giusti then trained a convolutional neural network to predict these labels and used the network to guide a \\\"quadrotor micro aerial vehicle\\u201d. \\n\\nSmolyanskiy improves on Giusti\\u2019s work by (1) gathering additional trail image data using three cameras mounted to face forward but with lateral offsets and (2) using this additional data to train a 6 output neural network (\\u201cTrailnet\\u201d) which predicts both view orientation and lateral offset. In addition, they also combined predicted pose relative to the trail with predictions of localized objects and a depth map for potential obstacles. They compared several neural network architectures for predicting the view angle on the IDSIA data as well as the closed-loop performance of each network in avoiding collisions while operating within a UAV on a previously unseen trail. Though Trailnet did not achieve the highest accuracy (84% vs. the max 92% achieved by ResNet-18), it was the only network that achieved 100% collision avoidance on their UAV test course. \\n\\nThis paper, \\\"A CASE STUDY ON OPTIMAL DEEP LEARNING MODEL FOR UAVS\\u201d, attempts to evaluate two potentially better convolutional neural networks for UAV trail guidance. They fine tune pertained Inception-Resnet and MobileNet models to predict the IDSIA dataset. These then both achieve better accuracy on the IDSIA test set and were analyzed for inference time and power consumption. These two models are then run through a single simulated path, where both seem to perform adequately across 2 turns in the path. \\n\\nThis paper has a variety of essential flaws.\\n\\n1. A large portion of the text is devoted to their hardware and UAV control but they were not able to actually run models on a physical UAV \\\"due to a hardware bug we were facing with the FCU\\u201d. \\n2. The paper claims to \\\"introduce to the best of our knowledge, a very first comparative study of three algorithms in order to find a better motion control of a drone for detecting a trail\\u201d. This is a confusing claim since a comparison of neural network architectures is a central part of the evaluation in the Smolyanskiy paper. \\n3. The higher accuracy on view orientation does not seem relevant since it was also achieved by Smolyanskiy et al. with networks that they then showed performed worse when combined with object detection, obstacle depth inference and combined controller.\\n4. 
The sentence \\\"An important goal of our method is to demonstrate the effectiveness of low cost systems for the complex task of flying an autonomous drone\\u201d appears to have been plagiarized from \\u201cLearning to Fly by Crashing\\u201d (2017) which contains \\\"A important goal of our method is to demonstrate the effectiveness of low cost systems for the complex task of flying in an indoor environment\\u201d.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
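The UAV reviews above describe fine-tuning ImageNet-pretrained Inception-Resnet and MobileNet models on the three IDSIA trail classes (left / straight / right). Below is a hedged PyTorch sketch of such a setup; the frozen backbone and the hyperparameters are illustrative choices, not the authors' reported configuration:

    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.mobilenet_v2(pretrained=True)            # ImageNet weights
    model.classifier[1] = nn.Linear(model.last_channel, 3)  # 3 trail directions

    # Freeze the backbone and train only the new head, a common fine-tuning choice.
    for p in model.features.parameters():
        p.requires_grad = False
    optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

Training then proceeds as ordinary supervised classification over the three direction labels; unfreezing the last backbone blocks at a lower learning rate is a common follow-up step.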
Ske5r3AqK7 | Poincare Glove: Hyperbolic Word Embeddings | [
"Alexandru Tifrea*",
"Gary Becigneul*",
"Octavian-Eugen Ganea*"
] | Words are not created equal. In fact, they form an aristocratic graph with a latent hierarchical structure that the next generation of unsupervised learned word embeddings should reveal. In this paper, justified by the notion of delta-hyperbolicity or tree-likeliness of a space, we propose to embed words in a Cartesian product of hyperbolic spaces which we theoretically connect to the Gaussian word embeddings and their Fisher geometry. This connection allows us to introduce a novel principled hypernymy score for word embeddings. Moreover, we adapt the well-known Glove algorithm to learn unsupervised word embeddings in this type of Riemannian manifolds. We further explain how to solve the analogy task using the Riemannian parallel transport that generalizes vector arithmetics to this new type of geometry. Empirically, based on extensive experiments, we prove that our embeddings, trained unsupervised, are the first to simultaneously outperform strong and popular baselines on the tasks of similarity, analogy and hypernymy detection. In particular, for word hypernymy, we obtain new state-of-the-art on fully unsupervised WBLESS classification accuracy. | [
"word embeddings",
"hyperbolic spaces",
"poincare ball",
"hypernymy",
"analogy",
"similarity",
"gaussian embeddings"
] | https://openreview.net/pdf?id=Ske5r3AqK7 | https://openreview.net/forum?id=Ske5r3AqK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HygksWKBeV",
"H1lrpjrmkN",
"BJxjvncPpX",
"rkxUJnqPT7",
"BJl8XsqPaQ",
"rklr8qcvpX",
"HyehC-uA2m",
"SygdlyBS2Q",
"SklyIE7ZhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545077143338,
1543883709022,
1542069347496,
1542069214056,
1542069022120,
1542068813059,
1541468628124,
1540865775758,
1540596806853
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1565/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1565/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1565/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1565/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1565/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1565/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Word vectors are well studied but this paper adds yet another interesting dimension to the field.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting submission\"}",
"{\"title\": \"Reply after revision\", \"comment\": \"Thank you very much for the revision.\\n\\nI am willing to improve my score to weak acceptance based on (1) improved presentation; (2) extensive experimental study. Overall this contribution is very experimental and practical (rather than being theoretical). I have overlooked the empirical results as I am not an expert in word embeddings from the practical perspective.\\n\\nIf there are specific points that need to be discussed. Please follow up here.\"}",
"{\"title\": \"Our unsupervised model is the first to competitively tackle all 3 tasks of word hypernymy (SOTA on unsupervised WBLESS), similarity and analogy\", \"comment\": \"Thank you for your positive feedback.\", \"experiments_section\": \"We rephrased the Experiments section 9 to better describe our empirical results (see below).\", \"hypernymy_experiments\": \"Our fully *unsupervised* method (unsupervised trained embeddings + unsupervised hypernymy score) obtains state-of-the-art (SOTA) on WBLESS and matches previous SOTA on Hyperlex tasks - see Tables 6 and 7.\\n\\nWe also propose to use WordNet to progressively incorporate weak supervision into the hypernymy scoring function, but not into the word embedding training phase. This likely results in lower scores compared to methods that use hypernymy supervision for training embeddings. However, our models of type \\u201cunsupervised trained embeddings + weakly-supervised hypernymy score\\u201d outperform the vast majority of methods that use supervision at training time, which is very encouraging. And our only \\u201cweak supervision\\u201d comes from 400+400=800 *word levels* of the WordNet hierarchy, without using any hypernymy relations per se. \\n\\nThe separation between these 3 types of hypernymy detection methods was not clear in the original version of our paper, but should be in our updated version - please see Tables 6 and 7.\", \"results_on_similarity_and_analogy\": \"We did not compare against published results because state-of-the-art is currently held by GloVe trained on Wikipedia 2014 + Gigaword 5. We trained only on Wikipedia 2014, because we did not have access to Gigaword 5 due to its prohibitive cost. The size of the dataset makes a significant difference for GloVe, since this algorithm gathers co-occurrences, which are relatively noisy statistics. In future work, we might acquire this dataset and re-run experiments. For now, we believe that our baseline is fair, since both the Euclidean and hyperbolic methods are trained on the same dataset. Moreover, upon acceptance, we would make our code fully available, including evaluation scripts, which should facilitate further research on this topic.\", \"questions\": \"\", \"poincar_ball\": \"we chose this model because it was used by [1] and [2], but in future work it would be interesting to investigate whether other models would lead to better optimization. In particular, we plan to investigate using the Lorentz and half-plane models.\", \"the_gyr_operator\": \"it is the rotational component of the parallel transport along geodesics, inherited from the curvature of the space. It casts the holonomy of the manifold into an linear map. It captures the default of commutativity of Mobius translations: a \\\\oplus b = gyr[a,b](b\\\\oplus a), for all a,b in D^n. Although it is defined in the ball, it can be naturally extended to the ambient Euclidean space, which yields an isometry [3, remark 1.2 and Eq.(1.32)]. We provide pointers to the interested reader in the appendix.\", \"downstream_task\": \"this is a very nice suggestion. We leave it as future work.\\n\\n\\n[1] Poincar\\u00e9 embeddings for learning hierarchical representations, Nickel & Kiela, NIPS 2017\\n[2] Hyperbolic neural networks, Ganea et al., NIPS 2018\\n[3] A gyrovector space approach to hyperbolic geometry, Ungar A.\"}",
"{\"title\": \"Several contributions: one model strong in several tasks, a novel entailment score, state of the art unsupervised hypernymy results.\", \"comment\": \"Thank you for your valuable comments. We understand that our initial presentation of experiments was suboptimal. We have updated this section. We are the first method to show competitive or state-of-the-art results simultaneously on the 3 tasks of word similarity, analogy and hypernymy detection (see also our reply for Reviewer1).\\n\\nAfter your comments, we also improved the presentation of our novel entailment score by updating section 7, in particular by introducing a pseudo-code description (see Algorithm 1). \\n\\n\\n\\u201cSome presented mathematical notions are not novel\\u201d.\\nIndeed, the definition of delta-hyperbolicity, the Fisher geometry of Gaussians being hyperbolic, and the definition of gyro-translation are not of our own. However, the combination of these notions into a new machine learning model for word embeddings and their usage for the construction of a completely new unsupervised hypernymy score allowed us to achieve high performance on different tasks with the same model, as well as state of the art on unsupervised word hypernymy detection (WBLESS results). We believe that achieving these results with the *same* model constitutes a valuable contribution. \\n\\n\\nAnalogy with gyro-translations. \\nIndeed, the use of parallel/gyro-translations to solve the analogy task can be thought of as natural. However, as explained in section 6.1, because the space has non-zero curvature, there are two solutions to the analogy problem, which poses an unexpected difficulty. Our proposed solution described in sections 6, 9 and appendix A.2 is to select a point on the geodesic between these two solutions using a 2-fold cross-validation method. Lastly, let us note that the use of Euclidean translation is prohibited, as these operations belong to the ambient space, and their use would violate the hyperbolic structure.\\n\\n\\nDelta-hyperbolicity. \\nWe gave the definition in the appendix. We chose not to include the definition in the main text, because we thought it would not improve the comprehension of the reader. Indeed, the intuition behind Gromov\\u2019s definition of the delta-hyperbolicity of a quadruple is relatively difficult to grasp. We would also like to draw your attention on the fact that we expanded this appendix with further experimental results, to better understand how hyperbolicity affects similarity results.\\n\\n\\nRelated work.\\nNickel & Kiela\\u2019s Poincar\\u00e9 Embeddings [1] is using word embeddings trained *supervised* using is-a relations, while ours is based on word embeddings trained *unsupervised* using raw text corpora. [1] evaluates on graph-reconstruction and link-prediction and hence only targets Word-hypernymy, and is not trained to perform well on Word-analogy or Word-similarity, which are tasks traditionally used to evaluate word embedding methods trained on raw text corpora. \\n\\n\\nOptimization of Gaussian embeddings. \\nAs explained at the end of section 5, this connection allows us to use Riemannian Adagrad, which performs adaptivity across Poincare balls in the cartesian product. This optimization method is intrinsic to the statistical manifold of Gaussian distributions w.r.t. their Fisher geometry, and is hence both practically powerful and mathematically principled. 
\\n\\n\\nLength of the paper.\\nAs suggested by Reviewer 3, in our updated version, we decided to use the full authorized length of 10 pages. The main reason for this is that we think our initial submission lacked clarity in certain places, especially in the way we presented our experimental results. We also wanted to incorporate the modifications suggested by all reviewers. We hope that you will find this new form more convincing and a better fit to the conference. \\n\\n\\n[1] Poincar\\u00e9 embeddings for learning hierarchical representations, Nickel & Kiela, NIPS 2017\"}",
"{\"title\": \"Our experimental results and improvement of presentation/discussion\", \"comment\": \"First, we would like to warmly thank all three reviewers for their valuable comments, and for the time and effort they invested in understanding our work.\\n\\nWe took into consideration all your comments, and updated our submission accordingly and significantly (especially sections 7 and 9) to better emphasize your comments, our contributions and our empirical results.\\n\\nIn terms of experiments, our method achieves state-of-the-art (SOTA) on hypernymy detection on the WBLESS dataset for the class of fully end-to-end unsupervised methods, and matches SOTA on Hyperlex, at the same time simultaneously outperforming vanilla Glove on word similarity and analogy. If the reader is interested into on single model \\u201cgood for all\\u201d, we analyze at the end of Section 9 the model \\u201c50x2D, with h(x)=x^2 and initialization trick\\u201d, which is competitive on all 3 tasks.\", \"general_paper_modifications\": \"\", \"main_text\": \"-Rewriting of the entire section 7 to better explain the computation of the word entailment score. \\n-Pseudo-code algorithm to compute the entailment score (Algorithm 1).\\n-We rewrote the experiments section 9. \\n-Updated hypernymy tables (6,7) for better classification of SOTA baseline methods in various settings and better emphasis of our results as the unsupervised hypernymy SOTA.\\n-Updated similarity and analogy tables (2,4).\\n-Explanations and thorough discussions of results for the three tasks.\\n-New plots of Hyperlex performance w.r.t. the amount of WordNet supervision we incorporate for evaluation (figure 4).\", \"appendix\": \"-Four tables of extensive similarity and analogy results (tables 8,9,12,13).\\n-Plots of 20x2D embeddings (figures 5,6,7,8).\\n-Explanation of the midpoint selection procedure for solving analogies (table 15).\\n-Section on delta-hyperbolicity expanded with new similarity results (table 17).\\n\\nMore detailed responses are provided below each review.\"}",
"{\"title\": \"Significantly improved how the paper is written\", \"comment\": \"First, let us mention that we are happy to hear that our writing style was appreciated.\\n\\nWe made significant improvements to the submission, which are listed in a general message in the thread. We tried to reply more specifically to your concerns below:\\n\\n\\nClarity of experimental results.\\nYou made a valid point saying that our experimental results needed better descriptions and interpretations. We updated this section accordingly. We hope that you will find the new version much clearer. \\n\\n\\nPseudo-code algorithm for computing entailment score.\\nThank you for this suggestion. We incorporated it, please see Algorithm 1 in section 7.\\n\\n\\nWN-Poincar\\u00e9 is 0.512.\\nThis method is using word embeddings trained *supervised* using is-a relations, while ours is based on word embeddings trained *unsupervised* using raw text corpora. We only incorporated supervised baselines in the table to show that our unsupervised method manages to also outperform almost all supervised ones, which is surprising. Thank you for pointing out that this was unclear. We hope that you will find our updated tables and experiment section clearer.\\n\\n\\n50x2D.\\nOur new similarity and analogy tables have been updated by adding the \\\"initialization trick\\\" to the 50x2D model. As can be seen, the \\\"init trick\\\" significantly improves performance for this model on similarity and analogy. Moreover, this model achieves state-of-the-art results on unsupervised hypernymy detection (tables 6 and 7). So, the 50x2D model is competitive on all three tasks. Other reasons to preserve this model: (i) better interpretability, since once can visualize embeddings in each 2D space of the product; (ii) theoretically, 100D hyperbolic corresponds to 99D Gaussian with spherical variance (i.e. sigma^2 I), while 50x2D hyperbolic corresponds to a 50D Gaussian with a diagonal covariance, i.e. 50 variance parameters. \\nThese models allocate parameters in a different manner, and hence possess different strengths. This should be clearer in our revised version.\\n\\n\\nDropping dependence in mu. \\nHeuristically, how general a concept is - when embedded as a Gaussian - should be encoded in the magnitude of its variance. Although the mean might also contain relevant information, discarding it makes the model simpler. Our empirical analysis shows that this model was sufficient to obtain state-of-the-art results among unsupervised methods on word-hypernymy. We leave further exploration of more complex models as future work.\\n\\n\\nRadagrad. \\nThe Radagrad update is easy to implement, as described in Eq.(8) of [1]. We believe this should not compromise reproducibility. Moreover, upon acceptance, we would make our own implementation of Radagrad fully available, which should facilitate further research on this topic.\\n\\n\\nLength of paper.\\nWe took into account your suggestion and used the full authorized length of 10 pages, to make the paper clearer. We hope that you will find this new write-up more comprehensible. \\n\\n\\n[1] Riemannian adaptive optimization methods, B\\u00e9cigneul & Ganea, arxiv.org/abs/1810.00760\"}",
"{\"title\": \"Poincare Glove: Hyperbolic Word Embeddings\", \"review\": \"Summary:\\nWords have implicit hierarchy among themselves in a text. Hyperbolic geometry due to the negative curvature and the delta-hyperbolicity is more suitable for representing hierarchical data in the continuous space. As a result it is natural to learn word representations/embeddings in the hyperbolic space. This paper proposes a promising approach that extends the approach presented in [1] to implement a GLOVE based hyperbolic word embedding model. The embeddings are optimized by using the Riemannian Optimization methods presented in [2]. Authors provide results on word-similarity and word-analogy tasks.\", \"questions\": \"What are the reasons for choosing a Poincare Ball model of the hyperbolic space instead of hyperboloid or other models of the hyperbolic space?\\nCan you expand on the role of gyr[.,.] in Equations 6 and 7.\\nBesides the tasks that are presented in this paper including word-analogy and the word-similarity tasks. Have you considered using the embeddings learned in hyperbolic space in a down-stream task such as NLI?\", \"pros\": \"The paper is very well-written, the motivation and the goals are quite clear.\\nThe relationship between the Gaussian embeddings and the product spaces is interesting and neat. The paper is theoretically strong and consistent.\\nThe idea of learning word-embeddings in hyperbolic space with the proposed approach is novel and relevant.\", \"cons\": \"The weakest point of this paper is the experiments. Unfortunately the results reported are underwhelming on WBLESS and the Hyperlex tasks compared to other published results. The paper presents convincing results on Word-analogy and Word-similarity tasks. However they do not compare against the published results on those tasks.\\n\\n[1] Ganea, O. E., B\\u00e9cigneul, G., & Hofmann, T. (2018). Hyperbolic Neural Networks. arXiv preprint arXiv:1805.09112.\\n[2] B\\u00e9cigneul, Gary, and Octavian-Eugen Ganea. \\\"Riemannian Adaptive Optimization Methods.\\\" arXiv preprint arXiv:1810.00760 (2018).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Adapting Glove word embedding to the Poincare half-plane: interesting but incremental\", \"review\": \"This paper adapts the Glove word embedding (Pennington et al 2014) to a hyperbolic space given by the Poincare half-plane model. The embedding objective function is given by equation (3), where h=cosh^2 so that it corresponds to a hyperbolic geometry. The author(s) showed that their hyperbolic version of Glove is better than the original Glove. Besides that, based on (Costa et al 2015), the author provided theoretical insights on the connection between hyperbolic embeddings with Gaussian word embeddings. Besides, the author(s) proposed a measure called \\\"delta-hyperbolicity\\\", that is based on (Gromov 1987) to study the model selection problem of using hyperbolic embeddings vs. traditional Euclidean embeddings.\\n\\nOverall, I find the contributions are interesting but incremental. Therefore it may not be significant enough to be published in ICLR. Moreover, the experimental evaluation is insufficient to show the advantages of the proposed Poincare Glove model.\\n\\nAn interesting theoretical insight is that there exists an isometry between the Fisher-geodesic distance of diagonal Gaussians and a product of Poincare half-planes. This is interesting as it revealed a connection between hyperbolic embeddings with Gaussian embeddings, which is not widely known. However, this is not an original contribution. This connection is not related to the optimization of the proposed embedding, as Gaussian word embeddings are optimized based on KL divergence etc. that are easy to compute.\\n\\nThe computation of analogy based on isometric transformations is interesting but straightforward by applying translation operations in the Poincare ball. The novel contribution is minor and mainly on related empirical results.\\n\\nThe definition of the delta-hyperbolicity is missing. The explicit form of the definition should be clearly given in section 7. Again, this is not a novel contribution but an application of previous definitions (Gromov 1987).\\n\\nIn the word similarity and analogy experiments, the baseline is the vanilla Glove, this is not sufficient as it is widely known that hyperbolic embeddings can improve over Euclidean embeddings on certain datasets. The authors are therefore suggested to include another hyperbolic word embedding (e.g. Nickel and Kiela 2017) into the baselines and discuss the advantages and disadvantages of the proposed method.\\n\\nThere are no novel and well-abstracted theoretical results (theorems) given in the submission.\\n\\nThe length of the paper is longer than the recommended length (9 pages of main text).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Quality in many respects.\", \"review\": \"The English, grammar and writing style is very good, as are the citations.\\nThe technical quality appears to me to be very good (I am not an expert in Poincare spaces).\\nThe authors demonstrate a good knowledge of the mathematical theory with the constructions made in Section 6.\\nThe experimental write-up has been abbreviated. The lexical entailment results Tables 6 and 7 are just sitting there without discussion, as far as I can see, as are the qualitative results Tables 4 and 5. The entailment results are quite complex and really need supporting interpretation. For instance, for Hyperlex, WN-Poincare is 0.512, above yours.\\nFor your entailment score you say \\\"For simplicity, we propose dropping the dependence in \\u03bc\\\". This needs more justification and discussion as it is counter-intuitive for those not expert in Poincare spaces.\\nSection 6.2 presents the entailment score. Note Nickel etal. give us a nice single formula. You however, provide 4 paragraphs of construction from which an astute reader would then have to work on to extract your actual method. I would prefer to see a summary algorithm given somewhere. Perhaps you need another appendix.\\nRADAGRAD is discussed in Section 5, but I'd have preferred to see it discussed again in Section 8 and discussed to highlight what was indded done and the differences. It certainly makes the paper non-reproducible.\\nA significant part of the theory in earlier sections is about the 50x2D method, but in experiments this doesn't seem to work as well. Can you justify this some other how: its much faster, its more interpretable? Otherwise, I'm left thinking, why not delete this stuff?\\nThe paper justifies its method with a substantial and winning comparison against vanilla GloVe. That by itself is a substantial contribution.\\nBut now, one is then hit with a raft of questions. Embedding methods are popping up like daisies all over the fields of academia. Indeed, word similarity and lexical entailment tasks themselves are proliferating too. To me, its really unclear what one needs to achieve in the empirical section of a paper. To make it worse, some folks use 500D, some 100D, some 50D, so results aren't always comparible. Demonstrating one's work is state-of-the-art against all comers is a massive implementation effort. I notice some papers now just compare against one other (e.g., Klami etal. ECML-PKDD, 2018).\\n\\nMy overall feeling is that this paper tries to compress too much into a small space (8 pages).\\nI think it really needs to be longer to present what is shown. Moreover, I would want to see the inclusion of the work on 50x2D justified. So my criticisms are about the way the paper is written, not about the quality of the work. \\nMoroever, though, one needs to consider comparisons against models other than GloVe.\", \"addendum\": \"You know, what I really love about ICLR is the effort authors make to refresh their paper and respond to reviewers. You guys did a great job. Really impressed. 50x2D now clarified and some of the hasty/unexplained bits fixed.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
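
The gyr[.,.] operator and the gyrocommutative law a \oplus b = gyr[a,b](b \oplus a), discussed in the authors' reply in the record above, can be checked numerically. The sketch below is illustrative only and is not the authors' code; it assumes the standard Mobius addition formula on the Poincare ball of curvature -1 and the gyrator identity gyr[a,b]v = ominus(a \oplus b) \oplus (a \oplus (b \oplus v)) from Ungar's gyrovector-space formalism (reference [3] in the reply).

```python
import numpy as np

def mobius_add(a, b):
    # Standard Mobius addition on the Poincare ball of curvature -1.
    ab, na2, nb2 = np.dot(a, b), np.dot(a, a), np.dot(b, b)
    return ((1 + 2 * ab + nb2) * a + (1 - na2) * b) / (1 + 2 * ab + na2 * nb2)

def gyr(a, b, v):
    # Gyrator identity: gyr[a,b]v = -(a (+) b) (+) (a (+) (b (+) v)).
    return mobius_add(-mobius_add(a, b), mobius_add(a, mobius_add(b, v)))

rng = np.random.default_rng(0)
a, b = rng.standard_normal(2), rng.standard_normal(2)
a, b = a / (1 + np.linalg.norm(a)), b / (1 + np.linalg.norm(b))  # map strictly inside the unit ball

print(np.allclose(mobius_add(a, b), mobius_add(b, a)))             # False for generic a, b: not commutative
print(np.allclose(mobius_add(a, b), gyr(a, b, mobius_add(b, a))))  # True: gyrocommutative law holds
```

Running this shows that Mobius addition fails plain commutativity but satisfies the gyrocommutative law to machine precision, which is the "failure of commutativity" the authors refer to.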
|
B1lKS2AqtX | Eidetic 3D LSTM: A Model for Video Prediction and Beyond | [
"Yunbo Wang",
"Lu Jiang",
"Ming-Hsuan Yang",
"Li-Jia Li",
"Mingsheng Long",
"Li Fei-Fei"
] | Spatiotemporal predictive learning, though long considered to be a promising self-supervised feature learning method, seldom shows its effectiveness beyond future video prediction. The reason is that it is difficult to learn good representations for both short-term frame dependency and long-term high-level relations. We present a new model, Eidetic 3D LSTM (E3D-LSTM), that integrates 3D convolutions into RNNs. The encapsulated 3D-Conv makes local perceptrons of RNNs motion-aware and enables the memory cell to store better short-term features. For long-term relations, we make the present memory state interact with its historical records via a gate-controlled self-attention module. We describe this memory transition mechanism as eidetic, as it is able to effectively recall the stored memories across multiple time stamps even after long periods of disturbance. We first evaluate the E3D-LSTM network on widely-used future video prediction datasets and achieve state-of-the-art performance. Then we show that the E3D-LSTM network also performs well on early activity recognition, inferring what is happening or what will happen after observing only limited frames of video. This task aligns well with video prediction in modeling action intentions and tendency. | [
"eidetic",
"lstm",
"video prediction",
"model",
"relations",
"rnns",
"network",
"spatiotemporal predictive learning",
"promising",
"feature"
] | Accept (Poster) | https://openreview.net/pdf?id=B1lKS2AqtX | https://openreview.net/forum?id=B1lKS2AqtX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HygdBreglN",
"rJl_P_YA0m",
"SyxImDCiA7",
"ByVd51AFRm",
"ryeBsAat0Q",
"HyxkqaatRm",
"r1e-K3aF0Q",
"HJeJHrJc27",
"BJgyWXsuh7",
"S1gVVTLVhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544713536472,
1543571552469,
1543395102490,
1543262096399,
1543261853179,
1543261575293,
1543261304794,
1541170487358,
1541087991009,
1540807980214
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1564/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1564/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1564/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1564/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1564/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1564/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1564/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths: Strong results on future frame video prediction using a 3D convolutional network. Use of future video prediction to jointly learn auxiliary tasks shown to to increase performance. Good ablation study.\", \"weaknesses\": \"Comparisons with older action recognition methods. Some concerns about novelty, the main contribution is the E3D-LSTM architecture, which R1 characterized as an LSTM with an extra gate and attention mechanism.\", \"contention\": \"Authors point to novelty in 3D convolutions inside the RNN.\", \"consensus\": \"All reviewers give a final score of 7- well done experiments helped address concerns around novelty. Easy to recommend acceptance given the agreement.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Well executed exploration of a 3D CNN LSTM method\"}",
"{\"title\": \"comment to authors\", \"comment\": \"Q3:\\nAs R1 said, there are works integrating 2d convolution and RNNs, like \\\"VideoLSTM convolves, attends and flows for action recognition\\\". still, novelty is not convincing.\", \"q4\": \"A typical video classification model which can see full-length videos may make decisions mainly depending on the scene information.\\nI understand this paper aims to predict the future. however, \\\"Zhou et. al, Temporal Relational Reasoning in Videos\\\" show that for recognizing actions in something-something dataset, scene clues are not enough and modeling temporal dependencies are important. so a classical classification problem on this dataset makes sense. \\n\\nThough novelty is still not fully convincing, the paper can shed insights into the topic.\"}",
"{\"title\": \"Comment to authors\", \"comment\": \"Q7 (novelty)\\n1) It may be the first work using 3d convolutions in RNNs, however there is already a previous work using 2d convolution in RNNs: \\\"Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting, NIPS 2015.\\\". \\n\\nQ8 E) Please add this info to the paper.\\n\\nQ9 C) It would have been interesting to see an experiment with one of these pre-trained models, because the used 2-layer network might be not be able to learn good features for the task. \\n\\nOverall novelty is still not fully convincing, however the results support the incremental ideas proposed by the authors.\"}",
"{\"title\": \"Responses to AnonReviewer2\", \"comment\": \"\", \"q7_novelty\": \"the proposed model is a small extension of a previous work [Wang et al., 2017]\\n>> Please see the answer to Q3 above (the novelty of the paper)\", \"q8_clarity_and_motivation\": \"A) Page 2...How long-term relations are learned given Eq. 1? \\n>> Different from standard LSTMs, we are motivated by modeling the long-term relations across frames. The long-term relations are learned by the RECALL function in Eq. 1, whose inputs are the historical memory states C_{t-\\\\tau:t-1}^k (in particular, we use C_{1:t-1}^k for most experiments in this work). The RECALL function queries useful information from C_{t-\\\\tau:t-1}^k using R_t. We have clarified this point in the paper (Page 4).\\n\\nB) Page 5...Not clear why Recall(.) should model long-term relations.\\n>> The RECALL function enables an adaptive learning of short-term and long-term modeling. More specifically, in Eq. 1, C_{t-1}^k is added to C_t^k via a short-cut connection controlled by the forget gate. Intuitively, it conveys short-term information, thus allowing the RECALL function to focus on long-term relations. Empirically, the COPY task verifies that our model could make use of information from the distant memory states when future predictions are severely dependent on the distant past.\\n\\nC) Eq 1: why layer norm is required when defining C_t^k is not clear.\\n>> We use the layer normalization technique to mitigate the covariant shift and stabilize the training process, as it has been commonly used in RNNs. We have made it clear in the paper. \\n\\nD) What if the Recall is instead modeled as attention?\\n>> Making the RECALL function solely based on memory states C will make the relations between C_{t-1}^k and itself (or the relations between very short-term memory states) dominate the result of RECALL(.). Thus, we encode X_t and H_{t-1}^k into R_t, and use it as the query of the attentive RECALL function. \\n\\nE) Page 5 \\u201cto minimize the l1 + l2 loss over every pixel in the frame\\u201d is not clear.\\n>> We use different objective functions for different tasks:\\n1. Video prediction: L1 + L2 loss.\\n2. Early activity recognition: Eq. 3 in the revised paper.\", \"q9_experiments\": \"A) Page 7...Why are Seq. 1 and Seq. 2 irrelevant?\\n>> We have rephrased this part in Page 6. Basically, the COPY task is to evaluate whether our model could recall useful information from the distant memory states. A well-performed predictive model should make precise predictions regarding Seq 2, as it has seen all frames of this sequence before. But this task is difficult for previous LSTM networks. Because the Seq 1 is totally irrelevant, making predictions of Seq 1 will erase its memory of Seq 2.\\n\\nB) Sec. 4.2, \\u201cDataset and setup\\u201d: which architecture has been used here?\\n>> We have made it clear that the architecture for KTH is exactly the same as that for the Moving MNIST.\\n\\nC) Sec. 4.3...the something-something dataset is more realising than the other two \\u201ctoy\\u201d dataset. Why did the authors choose to train a 2 layers 3D-CNN encoders, instead of using existing pretrained 3D CNNs?\\n>> In this paper, our goal is to explore a generic method that can infer the action tendency and intentions from sequential video frames. 
We show that in a fair setting (the same training set and similar #learnable parameters), the improvements of our work come from a better model to capture and predict low-level video data trends, along with a better understanding of high-level actions.\\nAlthough using the 3D-CNN model pre-trained on video datasets may improve the results, it also makes fair comparisons among all methods very tricky. First, suppose a model improves the results; it is less clear whether it is because the model learns a better representation on the pretrained data, or it is actually better in modeling the target dataset. Second, due to the domain difference, it is hard to select which pretrained models (e.g. Sports1M or Kinetics) to use on which dataset, and the pretrained model works on one dataset (e.g. something-something) may not work well on another dataset(e.g. KTH). These issues can result in a lengthy and unclear experimental section.\"}",
"{\"title\": \"Responses to AnonReviewer1\", \"comment\": \"\", \"q3\": \"Concern about the novelty: my only big concern is about the limited novelty of the method. E3D-LSTM is the core of the novelty, which is basically an LSTM with extra gate, and attention mechanism.\\n\\nThe concern on limited novelty is mainly due to the seeming similarity to the prior work [Wang et al., 2017]. Below we clarify the differences to the prior work:\\n\\n1. Our paper is one of, if not the first, work to systematically explore 3D convolutions **inside** the RNN. More importantly, it is the first to show a carefully designed method achieves the state-of-the-art results on several public benchmarks. The improvements are otherwise not shown for any known combination of 3D convolutions and RNN. \\n\\n2. Our technical difference to the existing work includes: \\na. we study where to apply the 3D info. For example, combine 2D or 3D inputs (see Figure 1), inflate the LSTM cell to 3D (see Figure 2b), or separate the 3D convolutions in the input and LSTM cell (see Table 4). \\nb. we propose how to effectively embed the 3D convolution inside the LSTM (i.e. we introduce a new recall gate in Equation 1 for the 3D-memory transition inside the LSTM).\\n\\nAmong the recent advances in deep learning, many great models appear to be similar to prior work (e.g. ResNet and Highway Network, ConvLSTM and LSTM, C3D/I3D CNN and 2D CNN). However, it is not true as the devil is in the important details. Similarly, we build upon prior work, make only necessary, yet important, model designs, and validate the necessities with ablation studies to demonstrate their merits. Our designs are driven by a clear motivation, innovative thinkings, and validated by extensive experiments (as agreed by all reviewers). \\nWe hope this can resolve the concern on novelty.\", \"q4\": \"As the method by essence is a spatiotemporal learning model, why the method is not evaluated on full-length videos of the something-something dataset?\\n\\nThe main reason is that predicting on the full-length video may not align well with our topic. A typical video classification model which can see full-length videos may make decisions mainly depending on the scene information. As shown in Fig. 5, suppose the tasks is to predict a category \\u201cPoking a stack of [Something], so the stack collapses\\u201d. The problem would be very simple as long as the model sees the last frame which shows the outcome of the action. \\n\\nIn contrast, the early activity recognition task makes the model have no other choices but to depend on an inference of the action intentions when making decisions. It aligns well with the video prediction task, in which the sequential tendency and causality are important.\\n\\nWe notice that it would be more accurate to claim our model as a spatiotemporal predictive learning model, rather than a broad \\u201cspatiotemporal learning model\\u201d. We have revised that in the paper.\", \"q5\": \"Show the benefit on online action recognition task.\\n\\nAs suggested, we have added online early activity recognition by making the classifier only depend on a concatenation of the recurrent outputs regarding the last 5 timestamps. As such, the historical recurrent states are only kept for 5 timestamps and then discarded. In particular, we apply a sliding window of limited length to the inputs of the Recall gate, using $C_{t-5:t-1}$ instead of $C_{1:t-1}$ in Eq. 1. Experimental results are shown in Table 7. 
Despite the slightly decreased accuracy, applying the sliding window on the Recall gate improves the scalability of E3D-LSTM.\", \"q6\": \"How was the process of selecting 41 classes out of the something-something dataset?\\n\\nIn the original paper [Goyal et al. 2017] of the Something-Something dataset, the 41 classes (in Table 7) are listed as a standard and official dataset setting. This split contains 56k video clips for training and 7.5k for validation and is large enough and meanwhile computational convenient to compare a variety of baseline methods. We have clarified this point in the paper (Page 9).\"}",
"{\"title\": \"Responses to AnonReviewer3.\", \"comment\": \"\", \"q1\": \"In the introduction, the authors state that they account for uncertainty by better modeling the temporal sequence...\\n\\nWe have rephrased this expression for clarity in the revised paper (Page 2). Below is a copy: Future prediction errors of an imperfect model can be categorized by two factors: a) the \\u201csystematic errors\\u201d caused by a lack of modeling ability to the deterministic variations; b) the stochastic, inherent uncertainty of the future. We aim to minimize the first factor in this work.\", \"q2\": \"Analyze the work in more complex settings.\\n\\nWe have experimented with the Something-Something dataset for video prediction, but the generated frames are not satisfying even when integrated with adversarial training and variational methods. The results are not surprising as the number of training samples is too limited to capture the diverse scenes of real-world videos (due to the illumination, occlusion, camera motion, to name a few). This makes future prediction considerably difficult for all existing methods, including ours. Exploring very complex datasets will be an interesting future research direction for this task. \\n\\nHowever, as R3 suggested, we further evaluate our method on a real-world dataset for traffic flow prediction, i.e., TaxtBJ. In this dataset, traffic flows (in consecutive heat maps) are collected from the chaotic real-world environment. Predicting urban traffic conditions is a complex setting, as the heat maps are very noisy and we do not have any corresponding, underlying, additional information. Implementation details and empirical results can be found in Appendix B. We train the networks to predict 4 frames (the next 2 hours) from 4 observations and report MSE at every timestamp. As shown, our method achieves the state-of-the-art result in Table 8 and generates the most accurate predictions in Fig. 6.\"}",
"{\"title\": \"We thank reviewers for the valuable comments.\", \"comment\": \"We thank reviewers for the valuable comments. Based on the reviews, we make the following changes (we mark these changes in blue in the revised paper):\\n\\n1. As suggested by R1, we enable our method to perform the online recognition tasks and compare our online model with and without the frame-prediction loss in Table 7.\\n\\n2. As suggested by R3, we add an additional real-world dataset on traffic flow prediction and evaluate our method under this complex setting. The results are presented in Appendix B.\\n\\n3. We rephrase/clarify all of the points raised by the reviewers.\\n\\nWe will address all questions in the individual replies.\"}",
"{\"title\": \"The authors propose a futurue video prediction model based on recurrent 3D-CNNs and propose a novel memory mechanism (Eidetic Memory) to capture long term relationships inside the recurrent layer itself. They obtain surpass the state of the art on two commonly used, (relatively) simple benchmark video prediction datasets. They further apply their model to early action recognition, performing an ablation study to evaluate the strengths of each model building block.\", \"review\": \"AFTER REBUTTAL:\\n\\nThis is an overall good work, and I do think proves its point. The results on the TaxiBJ dataset (not TatxtBJ, please correct the name in the paper) are compelling, and the concerns regarding some of the text explainations have been corrected.\\n\\n-----\\n\\nThe proposed model uses a 3D-CNN with a new kind of 3D-conv. recurrent layer named E3D-LSTM, an extension of 3D-RCNN layers where the recall mechanism is extended by using an attentional mechanism, allowing it to update the recurrent state not only based on the previous state, but on a mixture of previous states from all previous time steps.\", \"pros\": \"The new approach displays outstanding results for future video prediction. Firstly, it obtains better results in short term predictions thanks to the 3D-Convolutional topology. Secondly, the recall mechanism is shown to be more stable over time: The prediction accuracy is sustained over longer preiods of time (longer prediction sequences) with a much smaller degradation. Regarding early action recognition, the use of future video prediction as a jointly learned auxiliary task is shown to significantly increase the prediction accuracy. The ablation study is compelling.\", \"cons\": \"The model does not compare against other methods regarding early action recognition. Since this is a novel field of study in computer vision, and not too much work exists on the subject, it is understandable. Also, it is not the main focus of the work.\\n\\nIn the introduction, the authors state that they account for uncertainty by better modelling the temporal sequence. Please, remove or rephrase this part. Uncertainty in video prediction is not due to the lack of modelling ability, but due to the inherent uncertainty of the task. In real world scenarios (eg. the KTH dataset used here) there is a continuous space of possible futures. In the case of variational models, this is captured as a distribution from which to sample. Adversarial models collapse this space into a single future in order to create more realistic-looking predictions. I don't believe your approach should necessarily model that space (after all, the novelty is on better modelling the sequence itself, not the possible futures, and the model can be easily extended to do so, either through GANs or VAEs), but it is important to not mislead the reader.\\n\\nIt would have been interesting to analyse the work on more complex settings, such as UCF101. While KTH is already a real-world dataset, its variability is very limited: A small set of backgrounds and actions, performed by a small group of individuals.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"well-written, well-experimented paper with limited novelty\", \"review\": \"The paper proposes a spatiotemporal modeling of videos based on two currently available spatiotemporal modeling paradigms: RNNs and 3D convolutions. The main idea of this paper is to get the best world of both in a unified way. The method first encodes a sequence of frames using 3D-conv to capture short-term motion patterns, passes it to a specific type of LSTM (E3D-LSTM) which accepts spatiotemporal feature maps as input. E3D-LSTM captures long-term dependencies using an attention mechanism. Finally, there are 3D-conv based decoders which receive the output of E3D-LSTM and generate future frames. The message of the paper, I believe, is that 3D-conv and RNNs can be integrated to perform short and long predictions. They show in the experiments how the model can remember far past for reasoning and prediction.\\nThe nice point of the method is that it is heavily investigated through experiments. It's evaluated on two datasets, with ablation studies on both. Moreover, the paper is well-written and clear. technically, the paper seems correct.\\nHowever, my only big concern is about the limited novelty of the method. E3D-LSTM is the core of the novelty, which is basically an LSTM with extra gate, and attention mechanism.\", \"other_comments\": [\"As the method by essence is a spatiotemporal learning model, why the method is not evaluated on full-length videos of the something-something dataset for classical action classification task, in order to compare it with the full architecture of I3D, or S3D?\", \"While the paper discusses self-supervised learning, I would suggest showing its benefit on online action recognition task. One without frame-prediction loss and one with.\", \"the something-something dataset has 174 classes, how was the process of selecting 41 classes out of it?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice experiments although it lacks a bit of novelty\", \"review\": \"# 1. Summary\\nThis paper presents a model for future video prediction, which integrates 3D convolutions into RNNs. The internal operations of the RNN are modified by adding historical records controlled via a gate-controlled self-attention module. The authors show that the model is effective also for other tasks such as early activity recognition.\", \"strengths\": [\"Nice extensive experimentation on video prediction and early activity recognition tasks and comparison with recent papers\", \"Each choice in the model definition are motivated, although some clarity is still missing (see below)\"], \"weaknesses\": \"* Novelty: the proposed model is a small extension of a previous work (Wang et al., 2017) \\n\\n\\n# 2. Clarity and Motivation\\nIn general, the paper is clear and general motivation makes sense, however some points need to be improved with further discussion and motivation:\\n\\nA) Page 2 \\u201cUnlike the conventional memory transition function, it learns the size of temporal interactions. For longer sequences, this allows attending to distant states containing salient information\\u201d: This is not obvious. Can the authors add more details and motivate these two sentences? How is long-term relations are learned given Eq. 1? \\nB) Page 5 \\u201cThese two terms are respectively designed for short-term and long-term video modeling\\u201d: How do you make sure that Recall(.) does not focus on the short-term modeling instead? Not clear why this should model long-term relations.\\nC) Page 5 and Eq 1: motivation why layer norm is required when defining C_t^k is not clear\\nD) What if the Recall is instead modeled as attention? The idea is to consider only C_{1:t-1}^k (not consider R_t) and have an attentional model that learn what to recall based only on C. Also, why does Recall need to depend on R_t?\\nE) Page 5 \\u201cto minimize the l1 + l2 loss over every pixel in the frame\\u201d: this sentence is not clear. How does it relate to Eq. 2?\\n\\n\\n# 3. Novelty\\nNovelty is the major concern of this paper. Although the introduced new concepts and ideas are interesting, the work seems to be an extension of ST-LSTM and PredRNN where Eq 1 is slightly modified by introducing Recall. \\nIn addition the existing relation between the proposed model and ST-LSTM is not clearly state. Page 2, first paragraph: here the authors should state that model is and extension of ST-LSTM and highlight what are the difference and advantage of the new model.\\n\\n\\n# 4. Significance of the work\\nThis paper deals with an interesting and challenging topic (video prediction) as well as it shows some results on the early activity recognition task. These are definitively nice problem which are far to be solved. From the application perspective this work is significant, however from the methodological perspective it lacks a bit of significance because of the novelty issues highlighted above.\\n\\n\\n# 5. Experimentation\\nThe experiments are robust with nice comparisons with recent methods and ablation study motivating the different components of the model (Table 1 and 2). Some suggested improvements:\\n\\nA) Page 7 \\u201cSeq 1 and Seq 2 are completely irrelevant, and ahead of them, another sub-sequence called prior context is given as the input, which is exactly the same as Seq 2\\u201d: The COPY task is a bit unclear and need to be better explained. Why are Seq. 1 and 2 irrelevant? 
I would suggest to rephrase this part.\\nB) Sec. 4.2, \\u201cDataset and setup\\u201d: which architecture has been used here?\\nC) Sec. 4.3, \\u201cHyper-parameters and Baselines\\u201c: the something-something dataset is more realising that the other two \\u201ctoy\\u201d dataset. Why did the authors choose to train a 2 layers 3D-CNN encoders, instead of using existing pretrained 3D CNNs? I would suspect that the results can improve quite a bit.\\n\\n\\n# 6. Others\\n* The term \\u201cself-supervised auxiliary learning\\u201d is introduced in the abstract, but at this point it\\u2019s meaning is not clear. I\\u2019d suggest to either remove it or explain its meaning.\\n* Figure 1(a): inconsistent notation with 2b. Also add citation (Wang et al., 2017) since it ie the same model of that paper\\n\\n-------\\n# Post-discussion\", \"i_increased_my_rating\": \"even if novelty is not high, the results support the incremental ideas proposed by the authors.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
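
The Eq. 1 debate in the record above centers on the E3D-LSTM recall gate: the query R_t attends over the stacked historical memory states C_{1:t-1}^k, so the cell can retrieve distant memories directly. The paper's exact equation is not reproduced in this thread, so the following is only a schematic sketch of that attentive-recall pattern, with the 3D tensors flattened to vectors and the surrounding gates, 3D convolutions, and layer normalization omitted.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def recall(r_t, c_hist):
    # Attentive recall: the query r_t (shape (d,)) scores every historical
    # memory state in c_hist (shape (tau, d)); the output is their convex
    # combination, so information can flow directly from distant time stamps.
    return softmax(c_hist @ r_t) @ c_hist

# Toy check: a query matching the *earliest* memory still retrieves it,
# no matter how many later, irrelevant states were written in between.
rng = np.random.default_rng(0)
tau, d = 20, 8
c_hist = 0.1 * rng.standard_normal((tau, d))      # weak, noisy memories
c_hist[0] = 2.0 * rng.standard_normal(d)          # one salient state at t = 1
r_t = c_hist[0] + 0.01 * rng.standard_normal(d)   # query correlated with it

weights = softmax(c_hist @ r_t)
print(weights[0])                                  # ~1.0: the distant memory dominates
print(np.allclose(recall(r_t, c_hist), c_hist[0], atol=0.05))  # True
```

This is the behavior probed by the paper's COPY task discussed above: a gated shortcut from C_{t-1} carries short-term information, while the attention term lets the model recall Seq 2 even after the distraction of Seq 1.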
|
HkxKH2AcFm | Towards GAN Benchmarks Which Require Generalization | [
"Ishaan Gulrajani",
"Colin Raffel",
"Luke Metz"
] | For many evaluation metrics commonly used as benchmarks for unconditional image generation, trivially memorizing the training set attains a better score than models which are considered state-of-the-art; we consider this problematic. We clarify a necessary condition for an evaluation metric not to behave this way: estimating the function must require a large sample from the model. In search of such a metric, we turn to neural network divergences (NNDs), which are defined in terms of a neural network trained to distinguish between distributions. The resulting benchmarks cannot be ``won'' by training set memorization, while still being perceptually correlated and computable only from samples. We survey past work on using NNDs for evaluation, implement an example black-box metric based on these ideas, and validate experimentally that it can measure a notion of generalization. | [
"evaluation",
"generative adversarial networks",
"adversarial divergences"
] | Accept (Poster) | https://openreview.net/pdf?id=HkxKH2AcFm | https://openreview.net/forum?id=HkxKH2AcFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SyxUeT2bxV",
"SJxruT-rCm",
"HkgTQTbBAm",
"HJlDkTZHCQ",
"H1gq7h-BAm",
"Bkx0g3Yn27",
"BkxqKQQ527",
"HJl4hUgtnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544830190334,
1542950252548,
1542950180872,
1542950111101,
1542949921998,
1541344245936,
1541186434158,
1541109420247
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1563/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1563/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1563/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1563/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1563/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1563/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1563/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1563/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper argues for a GAN evaluation metric that needs sufficiently large number of generated samples to evaluate. Authors propose a metric based on existing set of divergences computed with neural net representations. R2 and R3 appreciate the motivation behind the proposed method and the discussion in the paper to that end. The proposed NND based metric has some limitations as pointed out by R2/R3 and also acknowledged by the authors -- being biased towards GANs learned with the same NND metric; challenge in choosing the capacity of the metric neural network; being computationally expensive, etc. However, these points are discussed well in the paper, and R2 and R3 are in favor of accepting the paper (with R3 bumping their score up after the author response).\\nR1's main concern is the lack of rigorous theoretical analysis of the proposed metric, which the AC agrees with, but is willing to overlook, given that it is nontrivial and most existing evaluation metrics in the literature also lack this. \\nOverall, this is a borderline paper but falling on the accept side according to the AC.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Tackles an important problem with arguable success\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you very much for the review. We'd like to respond as follows:\\n\\n> I personally feel that if sample generation is the only goal, then this trivial algorithm is perfectly fine because, statistically, the empirical distribution is in many, though not all, ways, a good estimator of the underlying true probability measure (this is the idea that is used in the statistical technique of Bootstrap for example). \\n\\nWe absolutely agree! We write in the final paragraph \\\"In our work we assume that our final task is not usefully solved by memorizing the training set, but for many tasks such memorization is a completely valid solution. If so, the evaluation should reflect this...\\\"\\n\\n> However the underlying goal in unsupervised learning problems where GANs are used is hardly sample generation. The GANs also output a whole function in the form of a generative network which converts random samples into samples from the underlying generating distribution. This generative network is arguably more important and more useful than just the samples that it generates. An evaluation scheme for GANs should focus on the generative network directly rather than on a set of its generating samples. \\n\\nWe agree that learning a generative network with a specific structure is a very important task in unsupervised learning. The argument that GAN research should be steered away from sample generation is certainly interesting. However without taking an opinion on that argument, we observe that a significant number of strong papers have been oriented at the final task of unconditional sample generation (e.g. https://arxiv.org/abs/1710.10196, ICLR 2017 oral). Since presumably this trend will continue, we believe that it\\u2019s valuable to work towards proper benchmarks for this task. And developing proper benchmarks requires a definition of the task which is nontrivial, i.e. for which training set memorization isn\\u2019t a perfect solution.\\n\\n\\n> A measure D_CNN is proposed as a benchmark. It must be remarked that D_CNN is not even properly defined (for example, there is a function \\\\Delta in its definition but it is never explained what this function is).\\n\\nWe give a detailed specification of D_CNN in Appendix D, and we\\u2019re releasing code along with this paper which will serve as a canonical reference. However, we think of our D_CNN as an example instantiation of the idea of NNDs -- as such, we don\\u2019t think the specifics are relevant to most of our experiments or conclusions.\\n\\n> D_CNN is a variant of the existing notion of Neural Network Divergences. Only a numerical study (with no theory) is done to illustrate the utility of D_CNN for evaluating samples generated by GANs. The entire paper is very anecdotal with very little rigorous theory.\\n\\nWe see sections 2-4 of our paper as a unification and expansion of existing theory from the particular point of view of whether an evaluation metric requires a large sample to be evaluated and whether neural network divergences satisfy this property. We believe this is a useful contribution which stands apart from the empirical results we present in Section 5.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for taking the time to write this review. We'd like to respond to your points as follows:\\n\\n> (...) experimented with small datasets, both are not necessarily within scope but a welcome addition\\n\\nWe'd like to clarify why we consider the small-test-set experiment to be a crucial contribution. We've updated the paper (sections 3.3 and 4.1) to explain that a small test set might be hazardous specifically for NNDs, which are designed to require a large sample to estimate. Without evidence that the small-sample estimates correlate very well with the large-sample estimates, we wouldn't effectively be able to use NNDs for evaluation except in settings where our test set is much larger than our training set.\\n\\n> if someone wants to compare the generalisation and diversity of samples between GANs, they would need to train the exact same critic CNN to be able to make a comparison. (...) In general, given evaluating the metric requires training a network from scratch, it will be very difficult to make this consistent.\\n\\nYou\\u2019re absolutely right that it\\u2019s very difficult to reproduce network training identically across implementations and hardware. We\\u2019ve added a discussion of this problem in a section titled \\u201cDrawbacks of NNDs for Evaluation\\u201d. In short, NND-based evaluation will likely require standardized open-source hardware-independent implementations. In general, we don\\u2019t claim to have complete solutions for these problems - instead, we present a framework and a path forward for evaluating generative models based on samples alone. However, we do note that for our specific metric, CNN Divergence, the variance across multiple training runs of the critic network is quite small, as outlined in Appendix E.\\n\\n> However the authors do not provide any principled way to determine the right size of the \\\"critic\\\" network.\\n\\nUltimately, the best critic size will depend on the downstream application of the generative model. Since this downstream task is usually not well-defined theoretically, determining the \\u201cright\\u201d critic size by theory is a very difficult task and it\\u2019s perhaps best left as an empirical choice. More generally, we avoid attempting to prescribe hyperparameters or define a specific evaluation procedure in this work.\\n\\n> In table 3 we indeed see that this is the case, however the authors argue that perhaps the GAN is simply the better model. \\n\\nThis is a very important point; thanks for raising it. To clarify, we don't mean to suggest that the GAN is the \\\"best\\\" model in any aboslute sense. Instead, it simply is the model that performs best in terms of the CNN divergence. We believe the CNN divergence is more sensitive to certain properties of the learned distribution than, for example, log likelihood. Whether this means the GAN is better or worse will depend on the intended use of the generative model. 
We discuss this in a few places in the paper:\\n\\nIn Section 4.1, \\u201cTraining Against The Metric\\u201d, we argue that the NND\\u2019s tendency to \\u201cunfairly\\u201d favor models trained against it appears to be mild compared to metrics like the Inception Score, which very greatly favors models trained against it, even though those models produce samples which resemble pure noise.\\nIn Appendix A (newly added), we summarize and highlight new evidence for past arguments against any universal notion of a \\u201cbest\\u201d metric or model: in short, different metrics always tend to prefer different models.\\n\\nWe note that some studies (e.g. https://arxiv.org/abs/1705.05263, https://arxiv.org/abs/1705.08868) have considered the performance of models trained against an NND in terms of log-likelihood by using a flow-based (invertible) generator and found that GAN training performs very poorly in terms of likelihood. This is a similar point to the one we make here - a model trained against one class of objective (e.g. via maximum likelihood) might not be expected to perform well against another class of objectives.\\n\\n> I am worried by the fact that both PixelCNN++ and IAF-VAE perform worse than the training set on this benchmark.\\n\\nAny useful metric will exhibit some trade-off between, for example, sample quality and diversity. In terms of why PixelCNN++ and IAF-VAE perform worse than the training set under CNN divergence, CNN divergence likely \\\"prefers\\\" sample quality to diversity to the extent that it prefers a small, perfectly realistic sample (i.e. the training set). We note that while the PixelCNN++ and IAF-VAE are certainly effective generative models, samples from those models are clearly distinguishable from the training set. We\\u2019ve updated the paper with a detailed discussion of this topic in Appendix A.\\n\\n> Nits\\n\\nThanks for catching this! We\\u2019ve fixed it in an update.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for the thoughtful review! We'd like to respond to one point in particular:\\n\\n> On the down side, I think the proposed DNN metric is not exactly useful. It would be a subset of the metric that an MMD would give and it would focus only in some properties of the images but not on the whole distribution. So, if this metric does not capture the relevant aspects of the problem the GAN is trying to imitate, it will fail to provide that metric that we are looking for.\", \"we_agree_completely_that_nnds_have_inductive_biases_which_cause_them_to_ignore_certain_properties_of_the_distribution\": \"for example, our \\u201cCNN divergence\\u201d is likely to ignore small spatial shifts in its inputs. However, we actually see this as an advantage: NNDs let us design metrics which are sensitive only to the properties that are important for the final task, and invariant to the rest. We think this point is best made by Theis et al. (https://arxiv.org/abs/1511.01844), who argue that evaluation metrics should reflect the downstream task, and Huang et al. (https://arxiv.org/abs/1708.02511), who argue theoretically and empirically that NNDs in particular are good losses for generative modeling *because* of their inductive biases. We addressed this briefly in our section titled \\u201cPerceptual Correlation\\u201d, but we think it definitely deserves a longer discussion -- so we\\u2019ve updated the paper with a separate section (Appendix A) which clarifies this point in detail with examples and references.\\n\\nConcerning the MMD in particular, we review past work on its use for model evaluation in section 4. We\\u2019ve updated that paragraph to add an important point: the MMD with a generic kernel tends not to be very discriminative in high dimensions. Reddi et al. (https://arxiv.org/abs/1406.2083) show that the power of a two-sample test based on the MMD decreases polynomially in high dimensions, for many types of distributions. NNDs, on the other hand, leverage the inductive biases of neural networks in order to produce a discriminative metric even in high dimensions.\"}",
"{\"title\": \"Thank you for the reviews! Paper updated.\", \"comment\": [\"We\\u2019d like to thank all the reviewers for your thoughtful comments. We\\u2019ve made the following significant updates to our paper based on your feedback:\", \"Clarified throughout that our goal is to present a promising approach and motivate future work, rather than directly to propose a benchmark. To that end, added section 4.1, \\\"Drawbacks of NNDs for Evaluation\\\".\", \"Added a detailed discussion of the need for evaluation metrics tailored to a specific task in Appendix A, \\\"The Importance of Tradeoffs in Evaluation Metrics\\\".\", \"Clarified the situation with bias from a small test set in sections 3.3 and 4.1\", \"Additionally, we've responded to your comments individually below.\"]}",
"{\"title\": \"Good beginning. Algorithm is not that interesting\", \"review\": \"This paper is quite interesting as it tries to find a new metric for evaluating GANs. IS is a terrible metric, as memorization would achieve high score and test log-likelihood cannot be evaluated. I like the long discussion at the beginning of the paper about what a metric for evaluating implicit generative models would need to be a valid and useful metric. This problem is of great importance for GANs as proving that GANs solve the density estimation problem would be extremely hard and even more so, making sure we are close to a good solution with any finite sample even more so (I am talking to non-trivial examples in high dimensions). It is clear that in order to make GANs, in particular, or implicit models, in general, useful, we need to find metrics that would allow us to achieve progress. This paper is a direction in what it needed. In this sense I think the paper can be a good starting point for the discussion that we are not having right now, because we are too focused on making sure they converge, but not how they can be useful.\\n\\nOn the down side, I think the proposed DNN metric is not exactly useful. It would be a subset of the metric that an MMD would give and it would focus only in some properties of the images but not on the whole distribution. So, if this metric does not capture the relevant aspects of the problem the GAN is trying to imitate, it will fail to provide that metric that we are looking for. \\n\\nI would see this paper as a great workshop paper, in the sense of old-fashion NIPS workshops in which new ideas were tested and discussed. But it clearly would like the polished papers that we see in conferences these days. Bernhard Schoelkopf told me once, after receiving the ICML reviews, \\u201cPeople now focus more on reasons to reject a paper than in reason for accepting a paper.\\u201d (note that I am quoting from memory, the bad use of English in mine not his). There are many reasons to reject this paper, but also some reason to accept the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well written overview of GAN benchmarks\", \"review\": \"The paper is well written, with a clear description of the properties a good benchmark should have, an analysis of the current solutions and their shortcomings and an extensive experimental evaluation of the CNN divergence metric. The authors also compared with non GAN methods and experimented with small datasets, both are not necessarily within scope but a welcome addition. The authors also open source their code.\\n\\nIn the section \\u201cOutperforming Memorization\\u201d, the authors mention a way to tune capacity of the \\u201ccritic\\u201d network and influence its ability to overfit on the sample. This means that if someone wants to compare the generalisation and diversity of samples between GANs, they would need to train the exact same critic CNN to be able to make a comparison. However the authors do not provide any principled way to determine the right size of the \\\"critic\\\" network. In general, given evaluating the metric requires training a network from scratch, it will be very difficult to make this consistent. This makes the proposed benchmark more impractical to use than its alternatives.\\n\\nIn the section \\u201ctraining against the metric\\u201d, the authors mention that a main criticism is the fact that a GAN directly optimises for the NND loss. In table 3 we indeed see that this is the case, however the authors argue that perhaps the GAN is simply the better model. I am worried by the fact that both PixelCNN++ and IAF-VAE perform worse than the training set on this benchmark. It seems like this particular benchmark would then work well specifically for GANs, but would (still) not allow us to compare with models trained using maximum likelihood.\\n\\nIn conclusion, I think the paper is well written and the authors clearly make progress towards a dependable benchmark for GANs. The paper does not introduce any new method, but instead has a thorough analysis and discussion of current methods which is worthwhile by itself.\", \"the_authors_show_a_range_of_results_using_a_cnn_based_divergence\": \"on PixelCNN++, GANs, overfitted GANs, WGAN-GP and conclude that it\\u2019s a better metric than IS/FID at the expense of requiring much more computation to evaluate. They also perform a test with limited compute and show that the results correlate well with a bigger dataset, but show some bias.\", \"nits\": \"Page 7, second paragraph, fifth line, spurious \\u201cq\\u201d\\n\\n########\\nRevision\\n\\nI would like to thank the authors for a thoughtful revision and response. I have updated my score to a 7 and think this paper is a worthy contribution to ICLR. The new drawback section is well written and informative.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Not a thorough paper\", \"review\": \"The paper aims to come up with a criterion for evaluating the quality of samples produced by a Generative Adversarial Network. The main goal is that the criterion should not reward trivial sample generation algorithms such as the one which generates samples uniformly at random from the samples in the training set. I personally feel that if sample generation is the only goal, then this trivial algorithm is perfectly fine because, statistically, the empirical distribution is in many, though not all, ways, a good estimator of the underlying true probability measure (this is the idea that is used in the statistical technique of Bootstrap for example). However the underlying goal in unsupervised learning problems where GANs are used is hardly sample generation. The GANs also output a whole function in the form of a generative network which converts random samples into samples from the underlying generating distribution. This generative network is arguably more important and more useful than just the samples that it generates. An evaluation scheme for GANs should focus on the generative network directly rather than on a set of its generating samples.\\n\\nEven if one were to regard the premise of the paper as valuable, the paper still does a poor job meeting its objective. A measure D_CNN is proposed as a benchmark. It must be remarked that D_CNN is not even properly defined (for example, there is a function \\\\Delta in its definition but it is never explained what this function is). D_CNN is a variant of the existing notion of Neural Network Divergences. Only a numerical study (with no theory) is done to illustrate the utility of D_CNN for evaluating samples generated by GANs. The entire paper is very anecdotal with very little rigorous theory.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HygtHnR5tQ | Generative Adversarial Networks for Extreme Learned Image Compression | [
"Eirikur Agustsson",
"Michael Tschannen",
"Fabian Mentzer",
"Radu Timofte",
"Luc van Gool"
] | We propose a framework for extreme learned image compression based on Generative Adversarial Networks (GANs), obtaining visually pleasing images at significantly lower bitrates than previous methods. This is made possible through our GAN formulation of learned compression combined with a generator/decoder which operates on the full-resolution image and is trained in combination with a multi-scale discriminator. Additionally, if a semantic label map of the original image is available, our method can fully synthesize unimportant regions in the decoded image such as streets and trees from the label map, therefore only requiring the storage of the preserved region and the semantic label map. A user study confirms that for low bitrates, our approach is preferred to state-of-the-art methods, even when they use more than double the bits. | [
"Learned compression",
"generative adversarial networks",
"extreme compression"
] | https://openreview.net/pdf?id=HygtHnR5tQ | https://openreview.net/forum?id=HygtHnR5tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xn5CiC14",
"HylAU71xyN",
"HJlImthqRX",
"rkeIvu250Q",
"ByxQGdh5AX",
"BJlLGD3c07",
"SJlxoKZWpQ",
"rke92n2h3X",
"ryee31V52m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544629908316,
1543660374485,
1543321886303,
1543321693895,
1543321611404,
1543321358467,
1541638551898,
1541356721679,
1541189544255
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1562/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1562/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1562/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1562/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1562/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1562/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1562/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1562/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1562/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a GAN-based framework for image compression.\\n\\nThe reviewers and AC note a critical limitation on novelty of the paper i.e., such a conditional GAN framework is now standard. The authors mentioned that they apply GAN for extreme compression for the first time in the literature, but this is not enough to justify the novelty issue.\\n\\nAC thinks the proposed method has potential and is interesting, but decided that the authors need new ideas to publish the work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited novelty\"}",
"{\"title\": \"Questions remaining\", \"comment\": \"We hope the reviewers saw our rebuttal and we would be happy to answer any remaining questions.\"}",
"{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"We thank the reviewer for the feedback.\\n\\nRegarding novelty, we refer to the top comment ( https://openreview.net/forum?id=HygtHnR5tQ¬eId=BJlLGD3c07 ). We will also add additional references on prior use of GANs for reducing blurriness as suggested.\", \"regarding_the_interplay_between_losses\": \"We note that the apparent conflict between the entropy term and MSE loss is motivated by rate-distortion theory (see Cover & Thomas, 2012, Chapter 13) and it is the standard approach to train compression networks in the current lossy compression literature (e.g. Balle et al., 2017, Theis et al., 2017, Minnen et al. (2018)). Note that in our approach, the entropy term is implicit in the dimensionality of the bottleneck and thereby only acts as an upper bound (see Eq. 5 in paper), implemented as a hard constraint rather than a regularizer (see below). It therefore does not conflict with the GAN loss, but rather provides the generator/decoder with conditional information. Regarding the interaction between the GAN loss and the MSE loss, we observe that the MSE loss stabilizes the training as it penalizes collapse of the GAN (see Fig 25, Appendix F.8, p. 29), and observe that the distortion (measured in PSNR) varies as expected when we vary the entropy constraint and turn off the GAN/distortion losses.\", \"convergence_plots\": \"we have added convergence plots in Figures 26&27 in Appendix F.8, p. 30-31. We note that the loss fluctuates heavily across iterations due to the small batch size (one), but the smoothed losses are stable. For all our experiments, both on Cityscapes and OpenImages, we kept the weights of the losses and ratio between discriminator/generator iterations constant and at point did our (GC and SC) models collapse during training for either dataset.\", \"regarding_gradients_through_the_discrete_bottleneck\": \"We use the differentiable relaxation of the quantizer as proposed in (Mentzer et al., 2018, see p4), for which we omitted the details for brevity. Essentially, the \\u201chard\\u201d (actual) quantization function argmax is replaced (for the backward pass only) with a \\u201csoft\\u201d quantization implemented with a softmax. For the forward pass, we use argmax, s.t. the decoder always receives quantized values. Other approaches from the learned compression literature use an approximation based on adding noise (Balle et al., 2016a, b) or using rounding for the forward pass and identity for the backward pass (Theis et al., 2017). It seems that all of these methods work reasonably well in the context of learned compression.\\n\\nWe do visualize the learned discrete representation by sampling uniformly from the bottleneck and generating bottlenecks learned via WGAN-GP in Section 6.1. It can be seen in Figure 5 that uniform bottlenecks yield \\u201csoups of visual words\\u201d, but global coherence is lost. When the bottlenecks are generated via WGAN-GP, the global coherence becomes much better. This means that the quantized representation captures image content well beyond the pixel level.\", \"regarding_the_regularizer\": \"It seems that the reviewer has misread the statement about regularization: the reconstruction loss, not the noise acts as a regularizer.\"}",
"{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"We thank the reviewer for the feedback.\\n\\nIf you define the \\\"image quality\\\" as being SSIM/PSNR, obviously there is no benefit since that is not what we optimize for. However, we stress the contrast between the results in Figure 2 in the paper: while the PSNR optimized approaches have a higher \\\"image quality\\\" in terms of PSNR, they look much blurrier and have more artifacts.\\nMotivated by this, we performed an extensive user study to confirm that our system results of higher visual quality, detailed in Section 6.1.\\n\\nRegarding novelty of GAN, we refer to the top comment ( https://openreview.net/forum?id=HygtHnR5tQ¬eId=BJlLGD3c07).¬eId=BJlLGD3c07 ). We will add further references for multiple G-Ds and local/global discriminators.\", \"regarding_ablation_study\": \"we refer to Figure 2. for a comparison between using GAN or MSE loss, as well as the user study on CityScapes (Fig. 4). The user study furthermore shows that as the entropy constraint is varied (by increasing $C$), the visual quality improves. Additionally, in Table 2 (Appendix F.8, p. 28) we consider the effect of varying the entropy constraint and the GAN/distortion losses for the PSNR on the Cityscapes test set (we stress again though that the PSNR on its own is not an indicator for the visual quality as seen in our user study, where we outperform state-of-the-art methods which have superior PSNR).\"}",
"{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"We thank the reviewer for the feedback.\\n\\nPlease see the top level comment ( https://openreview.net/forum?id=HygtHnR5tQ¬eId=BJlLGD3c07 ) for general comments.\", \"regarding_the_quality_metrics\": \"the MS-SSIM and PSNR are worse because of the GAN loss. When the network synthesizes texture, these details do not align with the original texture in the image, causing a higher PSNR than corresponding blurry regions from MS-SSIM/PSNR optimized models. For this reason, we conducted a thorough user study to assess the quality of our results in comparison with other methods, based on human perception (Sec 6.1 and Fig. 4).\\n\\nWe stress the contrast in Figure 2 between our model trained with GAN and the MSE baseline using the same architecture. While the PSNR for the MSE baseline is more than 2dB higher than ours, the perceptual quality is clearly worse. In the additional results provided in Table 2 (Appendix F.8, p. 28) we show how the PSNR on Cityscapes vary as we vary the entropy constraint and the distortion/GAN losses.\\n\\n\\\\lambda in Eq. (6) simply weights the distortion loss. If \\\\lambda is very large, it dominates the total loss and the network will behave as a standard learned compression system. If it zero, then only the GAN loss and the entropy loss/constraint remain. Similar to the challenges faced when training a standard GAN for high resolution images (the difference here is that we have access to quantized features from the encoder), we observe a collapse when turning off the distortion losses (Fig 25, Appendix F.8, p. 29). Without the distortion losses, the only signal for the generator/decoder is the \\\"view of the discriminator\\\" of the realism of the image, without any reference to the encoded input image. In contrast, when the distortion loss is added, the encoder and the decoder/generator get direct gradient information on how to to improve the output in reference to the input image.\"}",
"{\"title\": \"General Comment to all Reviewers\", \"comment\": [\"All three reviewers are concerned with the novelty of using GANs for compression, so we address this here. We agree with the reviewers that multi-scale discriminators and the use of GANs to prevent blur have been considered before, and we do not claim that these on their own are novel contributions of ours.\", \"However, we stress the following:\", \"Our approach for combining GANs with extreme compression has not been explored before. We focus on a new direction of image compression, where the algorithm focuses on realistic instead of faithful reconstructions w.r.t. PSNR/MS-SSIM, thereby obtaining mostly artifact-free reconstructions at extremely low bitrates. Notice how in Figure 1, all other approaches produce some sort of blocking artifacts or extreme blurring, while our approach synthesizes realistic tree textures. We believe that this is a valuable novel direction for the compression literature.\", \"No previous works thoroughly studied GANs in the context of full resolution image compression. We showed its effectiveness by setting new state-of-the-art in visual quality based on a user study (with dramatic bitrate savings), ablated on the differences between optimizing with a GAN loss vs MSE only (both visually (Fig. 3) and in a user study (Fig. 4)) and show that the compressed representations learned are more meaningful than for MSE models when decoded (Fig. 20).\", \"No previous learned compression works explored these very low bitrates before for full resolution images. With our approach, we are the first to produce visually pleasing results at those bitrates.\"]}",
"{\"title\": \"Impressive results, but some details unclear\", \"review\": \"This paper proposed GAN-based framework for image compression and show improved results over several baseline methods. Although the approach is not very novel by itself, the adaption and combination of existing methods for the proposed solution is interesting. Although the bpp are consistently lower, the quality metrics used for comparison seem unclear.\", \"pros\": [\"The reported compression results with a GAN-based framework for large images are impressive\", \"Comprehensive set of results with Kodak, RAISE1K and Cityscapes datasets\", \"The paper is well written with the core results and idea being well articulated\"], \"cons\": [\"Primary concern: The quality metrics are unclear esp. for GC models, since traditional metrics such MS-SSIM and PSNR are noted to worse and primarily visual inspection is used for comparison, making it less concrete. Would also to help include these metrics for comparison\", \"Eqn6: \\\\lamda balancing the distortion between GAN loss and entropy terms - can the authors elaborate on this ? Furthermore, the ensuing statement that the addition of the distortion term, results in acting like a regularizer - seems like only a conjecture, can the authors additionally comment on this as well.\"], \"minor_issues\": [\"The comparison of improvement in compression is reported using relative percentage numbers in some places as the improvement and others as lack of therein. It would help to use a common reporting notation throughout the text, this helps readability/understandability\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"official review for \\\"Generative Adversarial Networks for Extreme Learned Image Compression\\\"\", \"review\": \"This paper proposed an interesting method using GANs for image compression. The experimental results on several benchmarks demonstrated the proposed method can significantly outperform baselines.\", \"there_are_a_few_questions_for_the_authors\": \"1.The actually benefit from GAN loss: the adversarial part usually can benefit the visual quality but is not necessary related to image quality (e.g. SSIM, PSNR). \\n\\n2.The novelty of the model: GAN models with multiple G-Ds or local/global discriminators is not novel (see the references).\\n\\n3.Do you have ablation study on the effects of conditional GAN and compression part to the model?\", \"references\": \"a. Xi et al. Pedestrian-Synthesis-GAN: Generating Pedestrian Data in Real Scene and Beyond \\nb. Yixiao et al. FD-GAN: Pose-guided Feature Distilling GAN for Robust Person Re-identification\", \"revision\": \"the rebuttal can not address my concerns, especially the image quality assessment and the novelty of the paper parts. I will keep my original score but not make strong recommendation to accept the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper proposes to use GAN to address the image compression problem. It is shown to achieve superior results over the past work in two different settings (GC and SC).\", \"novelty\": \"It has been well discovered in the literature of GANs that they can resolve the problem of blurriness in generation, compared to the traditional MSE loss. This paper proposes to combine a GAN loss with MSE, together with an entropy loss. However similar approaches were used such as video prediction [1] from 2016. The paper lacks a few references like this.\", \"major_questions\": [\"How do the different loss terms play against each other? The entropy term and the MSE apparently conflict with each other. And how would this affect L_gan? I would like to request some more analysis of this or ablation study on different terms.\", \"How well does the GAN converge? A plot of G and D loss is often presented with GAN approaches.\", \"Discrete latent variable is in itself an interesting problem [2]. I see the image compression as a task to discover a discrete latent variable with minimal storage. Perhaps one most important problem is how to estimate the gradient through the discrete bottleneck. But the paper doesn't provide much insights or experiments on this.\", \"I'm not fully convinced by the claim of the noise that this paper uses to combine the code can act as a regularizer. Adding the noise makes the decoder output stochastic, but the compression problem seems to be deterministic by nature, unlike many other generation problems.\", \"[1] https://arxiv.org/abs/1511.05440\", \"[2] https://arxiv.org/abs/1711.00937\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rylKB3A9Fm | Assessing Generalization in Deep Reinforcement Learning | [
"Charles Packer*",
"Katelyn Gao*",
"Jernej Kos",
"Philipp Krahenbuhl",
"Vladlen Koltun",
"Dawn Song"
] | Deep reinforcement learning (RL) has achieved breakthrough results on many tasks, but has been shown to be sensitive to system changes at test time. As a result, building deep RL agents that generalize has become an active research area. Our aim is to catalyze and streamline community-wide progress on this problem by providing the first benchmark and a common experimental protocol for investigating generalization in RL. Our benchmark contains a diverse set of environments and our evaluation methodology covers both in-distribution and out-of-distribution generalization. To provide a set of baselines for future research, we conduct a systematic evaluation of state-of-the-art algorithms, including those that specifically tackle the problem of generalization. The experimental results indicate that in-distribution generalization may be within the capacity of current algorithms, while out-of-distribution generalization is an exciting challenge for future work. | [
"reinforcement learning",
"generalization",
"benchmark"
] | https://openreview.net/pdf?id=rylKB3A9Fm | https://openreview.net/forum?id=rylKB3A9Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xk21fgeN",
"HyecLpxACQ",
"BJlEPEq56X",
"HJg2zNq5aQ",
"HygylNq5aQ",
"rkGd7996m",
"ryloTLBC2m",
"B1lNZjECnQ",
"B1gkuMGYnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544720294824,
1543535954507,
1542263899701,
1542263828258,
1542263782675,
1542263657516,
1541457602550,
1541454587696,
1541116519161
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1561/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1561/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1561/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1561/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1561/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The manuscript proposes benchmarks for studying generalization in reinforcement learning, primarily through the alteration of the environment parameters of standard tasks such as Mountain Car and Half Cheetah. In contrast with methodological innovations where a numerical argument can often be made for the new method's performance on well-understood tasks, a paper introducing a new benchmark must be held to a high standard in terms of the usefulness of the benchmark in studying the phenomenon under consideration.\\n\\nReviewers commended the quality of writing and considered the experiments given the set of tasks to be thorough, but there were serious concerns from several reviewers regarding how well-motivated this benchmark is and restrictions viewed as artificial (no training at test-time), concerns which the updated manuscript has failed to address. I therefore recommend rejection at this stage, and urge the authors to carefully consider the desiderata for a generalization benchmark and why their current proposed set of tasks satisfies (or doesn't satisfy) those desiderata.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Important topic, poorly motivated benchmark\"}",
"{\"title\": \"Addition to our previous comment\", \"comment\": \"The results of the OpenAI Retro contest are consistent with our conclusion that vanilla deep RL algorithms usually generalize better than EPOpt and RL^2. As a recap, the OpenAI Retro contest was a transfer learning challenge on Sonic the Hedgehog games. Given a set of training levels, teams were tasked with training a policy and a fast learner that could be used to fine-tune the policy given a million time steps at test time. Apart from the test-time fine-tuning, this corresponds to training on R and testing on E in our framework. The winning team\\u2019s strategy was to train a single policy using PPO on all the training levels, with each level weighted equally (i.e. vanilla PPO in our paper), and then fine-tune it using PPO at test time. A blog post with details about the contest results is found here: https://blog.openai.com/first-retro-contest-retrospective/.\"}",
"{\"title\": \"Replying to AnonReviewer4's comments\", \"comment\": \"Thank you very much for your feedback.\\n\\nIn our revision we will make it clearer that we focus on generalization to changes in the environment dynamics. Other works consider generalization in the same context, i.e., \\u201ctesting performance degradation in the presence of systematic physical differences between training and test domains\\u201d (Rajeswaran et al. 2017). Whiteson et al. (2017) and Zhang et al. (2018) also consider this type of generalization. We believe that state-of-the-art algorithms should be able to solve these simpler generalization tasks (e.g., not overfitting to training domain in simulator) before addressing more complex ones such as the combinatorial generalization discussed in \\u201cRelational inductive biases, deep learning, and graph networks\\u201d by Battaglia et al. (2018).\\n\\nWe chose these set of tasks for several reasons. They are classic tasks in RL that are implemented in the widely-used OpenAI Gym and used in previous literature on generalization in deep RL. Varying their parameters such as length and mass for Pendulum is a simple way to create environments with \\u201csystematic physical differences\\u201d. It also enables us to differentiate between interpolation and extrapolation in a way that reflects the real world; environment version R can be thought of as a distribution of normal situations and version E can be thought of as a distribution of edge cases, which are unusual in some sense. These tasks are often considered simple, but we believe that this view is because the classic RL setup considers one fixed environment configuration and that considering variations in the environment presents new challenges.\\n\\nThe distribution of parameters for each environment version was carefully chosen by watching video footage of agents trained on environment D to determine realistic ranges for possible success (which were used to construct R), and non-realistic ranges (which were used to construct E). The binary success metric is admittedly subjective, but is also chosen carefully to correlate with what a user would consider the learning objective in a given simulator (for example, if you learned to walk, you should be able to get to 20 meters on the track). We believe that this type of metric should be supported in Gym environments because it separates policy performance evaluation from the reward shaping used for training (which may vary between different software implementations of the same environment).\\n\\nWe agree that there is a gray area associated with our choice to assess only approaches to generalization that do not allow policy updates at test time. Our choice at the beginning of the project was motivated by a desire to do a thorough evaluation of a few methods and we decided to not include algorithms that make gradient updates to the model at test time, such as MAML. \\n\\nWe have rewritten the introduction and conclusion to emphasize the main takeaways of our baseline evaluations. On average, the vanilla deep RL algorithms, despite their reputation for brittleness (Henderson et al. 2017), interpolate and extrapolate as well or better than EPOpt and RL^2, which are specifically designed for generalization. In other words, simply training a policy that is oblivious to environment changes on random perturbations of the default environment configuration can be very effective. 
The only exception is PPO with the FF architecture, where EPOpt generalizes a bit better than the vanilla algorithm. \\n\\nThe effectiveness of EPOpt and RL^2 is highly dependent on the base algorithm (A2C or PPO) and the environment, although intuitively they should be general-purpose approaches. For instance, in most environments EPOpt appears to be effective only when combined with PPO under the FF architecture. Exploring why this occurs would be an interesting avenue for future work. We found that the training of RL^2 is less stable than that of the vanilla deep RL algorithms, possibly due to the fact that the RC policy takes as input the trajectories of multiple episodes instead of one episode; an example is shown in Figure 4. This partially explains its poorer generalization performance, but more investigation is needed to ascertain the true cause. It is important to note that while the EPOpt paper evaluates on Hopper and HalfCheetah, the RL^2 paper does not evaluate on any of the six tasks we consider. \\n\\nThank you for the reference to Nair et al. (2016) We have included it in Section 2 as an early example of evaluating generalization in RL. We have expanded the description of the RC architecture to clarify the policy inputs for the two RL^2 algorithms versus the other algorithms.\"}",
"{\"title\": \"Replying to AnonReviewer3's comments\", \"comment\": \"Thank you very much for your feedback.\\n\\nWe did a pretty thorough hyperparameter search; there are no additional hyperparameters for RL^2 compared to the PPO/A2C equivalents (apart from the added KL divergence coefficient for RL^2-PPO, and the choice of episodes-per-trial). It may be the case that RL^2 is relatively sample inefficient, however we also noticed that RL^2 is relatively volatile during training. EPOpt has two additional hyperparameters - the number of \\u201cnormal\\u201d iterations before beginning to use the worst-epsilon trajectories, and epsilon. We use the corresponding values reported in the EPOpt paper (their experiments are on Hopper and HalfCheetah).\\n\\nIn our revision we have redefined the generalization summary numbers. Our previous definition of Interpolation (geometric mean of RR and EE) and extrapolation (geometric mean of DR, DE, and RE) led to some confusing results where Interpolation=0 but Extrapolation>0 because the algorithms found it harder to train on E. We have removed EE from Interpolation and the results are now more sensible.\\n\\nSection 4 in the PPO paper (Schulman et al. 2017) describes the using KL divergence as a penalty in the loss function. Its coefficient is currently set to zero in the OpenAI Baselines PPO implementation, but we found that a nonzero coefficient improved training stability in RL^2-PPO, which is relatively volatile otherwise. The KL divergence coefficient becomes an additional hyperparameter in our grid search (the range does include zero which removes the penalty from the loss function).\"}",
"{\"title\": \"Replying to AnonReviewer2's comments\", \"comment\": \"Thank you very much for your feedback.\\n\\nWe apologize if this was not clear, but the works cited in the third paragraph of Section 1 are not benchmarks or empirical studies; they propose algorithms designed to build agents that generalize. However, they have widely varying experimental setups, both in terms of environments (e.g. MuJoCo) and their variations to which the trained agents are supposed to generalize (e.g. a heavier robot torso). They also do not use common metrics for generalization performance. Therefore, it is difficult to fairly compare them and to determine which perform best in what situations. Furthermore, many consider interpolation but not extrapolation. This was the motivation for our work.\\n\\nWhiteson et al. (2011) and Duan et al. (2016) cited in Section 2 are more similar to our work in that they focus on how to evaluate RL algorithms. Whiteson et al. (2011) propose a similar experimental protocol to us, differentiating interpolation and extrapolation, but consider simple tasks and tabular learning. Duan et al. (2016) appear to consider only interpolation on simple tasks (no locomotion). Neither evaluate methods specifically designed for generalization. OpenAI Retro is a benchmark for transfer learning in RL, considering Sonic the Hedgehog games, but gives information about test environment configurations during training and does not differentiate between interpolation and extrapolation.\\n\\nWe believe that the binary success metric has several advantages. First, it makes the results much more interpretable across tasks, environment conditions, and different software implementations by separating the reported performance level from reward shaping. For example, the reward structure for HalfCheetah is different from Roboschool to RLLab and it is not at all clear how it differs for HalfCheetah-v0 and HalfCheetah-v1 in Gym. Second, it is in line with the original spirit of RL as \\u2018goal seeking\\u2019 discussed in Sutton and Barto (2017).\"}",
"{\"title\": \"Note regarding latest revision (11/14/2018)\", \"comment\": [\"Updates to results:\", \"We now report results (mean and standard deviation) over five complete runs of the hyperparameter grid search.\", \"We have redefined the generalization summary numbers. Our previous definition of Interpolation (geometric mean of RR and EE) and extrapolation (geometric mean of DR, DE, and RE) led to some confusing results where Interpolation=0 but Extrapolation>0 because the algorithms found it harder to train on E. We have removed EE from Interpolation (now just RR) and the results are now more sensible.\", \"Section 7 (results and discussion) has been updated to match the new numbers (and the new definition of Interpolation), however the overall conclusions / themes did not change.\"], \"updates_to_appendix\": [\"Section D has been added which explains some unintuitive results from MountainCar.\", \"Section E has been added, which investigates the effect of EPOpt and RL^2 on training. We observe that training appears to be stabilized by increased randomness in environments (R and E, vs D).\", \"Section F has been added, which investigates the effect of environment difficulty on learned policies. We illustrate how training on a range of environment configurations (instead of a fixed/deterministic environment) may encourage policies more robust to changes in system dynamics at test time.\", \"The complete set of training curves and evaluation videos discussed in Section E and F are available at the following (anonymous) Google drive link: https://drive.google.com/drive/folders/1H5aBv-Lex6WQzKI-a_LCgJUER-UQzKF4\"]}",
"{\"title\": \"Review\", \"review\": \"This paper presents a new benchmark for studying generalization in deep RL along with a set of benchmark results. The benchmark consists of several standard RL tasks like Mountain Car along with several Mujoco continuous control tasks. Generalization is measured with respect to changes in environment parameters like force magnitude and pole length. Both interpolation and extrapolation are considered.\\n\\nThe problem considered in this paper is important and I agree with the authors that a good set of benchmarks for studying generalization is needed. However, a paper proposing a new benchmark should have a good argument for why the set of problems considered is interesting. Similarly, the types of generalization considered should be well motivated. This paper doesn\\u2019t do a good job of motivating these choices.\\n\\nFor example, why is Mountain Car a good task for studying generalization in deep RL? Mountain Car is a classic problem with a two-dimensional state space. This is hardly the kind of problem where deep RL shines or is even needed at all. Similarly, why should we care whether an agent trained on the Cart Pole task can generalize to a pole length between 2x and 10x shorter than the one it was trained on without being allowed to update its policy? Both the set of tasks and the distributions of parameters over which generalization is measured seem somewhat arbitrary.\\n\\nSimilarly, the restriction to methods that do not update its policy at test time also seems arbitrary since this is somewhat of a gray area. RL^2, which is one of the baselines in the paper, uses memory to adapt its policy to the current environment at test time. How different is this from an agent that updates its weights at test time? Why allow one but not the other?\\n\\nIn addition to these issues with the proposed benchmark, the baseline results don\\u2019t provide any new insights. The main conclusion is that extrapolation is more difficult than interpolation, which is in turn more difficult than training and testing on the same task. Beyond that, the results are very confusing. Two methods for improving generalization (EPOpt and RL^2) are evaluated and both of them seem to mostly decrease generalization performance. I find the poor performance of RL^2-A2C especially worrisome. Isn\\u2019t it essentially recurrent A2C where the reward and action are fed in as inputs? Why should the performance drop by 20-40%?\\n\\nOverall, I don\\u2019t see the proposed tasks becoming a widely used benchmark for evaluating generalization in deep RL. There are just too many seemingly arbitrary choices in the design of this benchmark and the lack of interesting findings in the baseline experiments highlights these issues.\", \"other_comments\": [\"\\u201cMassively Parallel Methods for Deep Reinforcement Learning\\u201d by Nair et al. introduced the human starts evaluation condition for Atari games in order to measure generalization to potentially unseen states. This should probably be discussed in related work.\", \"It would be good to include the exact architecture details since it\\u2019s not clear how rewards and actions are given to the RL^2 agents.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"This paper proposes a benchmark for for reinforcement learning to study generalization in stationary and changing environments. A combination of several existing env. from OpenAi gym is taken and several ways to set this parameters is proposed. Paper provides a relatively thorough study of popular methodologies on this benchmark.\\n\\nOverall, I am not sure there is a pressing need for this benchmark and paper does not provide an argument why there is an urgent need for one.\\n\\nFor instance, paragraph 3 on page 1 details a number of previous studies. Why those benchmarks are in-adequate?\\nOn page at the end of second paragraph a number of benchmarks from transfer learning literature is mentioned. Why not just use those and disallow model updates?\\nIn the same way, it is not clear why new metric is introduced? How does it correlate with standard reward metrics?\\n\\nOverall, as empirical study, I think this work is interesting but I think paper should justify why we need this new benchmark.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting topic and solid experiments\", \"review\": \"Update: Lower the confidence and score after reading other comments.\\n===\\n\\nIn this paper, the authors benchmark several RL algorithms on their abilities of generalization. The experiments show interpolation is somehow manageable but extrapolation is difficult to achieve. \\n\\nThe writing quality is rather good. The authors make it very clear on how their experiments run and how to interpret their results. The experiments are also solid. It's interesting to see that both EPOpt and RL^2, which claim to generalize better, generalize worse than the vanilla counterparts. Since the success rates are sometimes higher with more exploration, could it be possible that the hyperparameters of EPOpt and RL^2 are non-optimal? \\n\\nFor interpolation/extrapolation tasks, all 5 numbers (RR, EE, DR, DE, RE) are expected since the geometric mean is always 0 once any of the numbers is 0. \\n\\nWhat does ``\\\"KL divergence coefficient\\\" in RL^2-PPO mean? OpenAI's Baselines' implementation includes an entropy term as in A2C.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
r1xYr3C5t7 | Neural Message Passing for Multi-Label Classification | [
"Jack Lanchantin",
"Arshdeep Sekhon",
"Yanjun Qi"
] | Multi-label classification (MLC) is the task of assigning a set of target labels for a given sample. Modeling the combinatorial label interactions in MLC has been a long-haul challenge. Recurrent neural network (RNN) based encoder-decoder models have shown state-of-the-art performance for solving MLC. However, the sequential nature of modeling label dependencies through an RNN limits its ability in parallel computation, predicting dense labels, and providing interpretable results. In this paper, we propose Message Passing Encoder-Decoder (MPED) Networks, aiming to provide fast, accurate, and interpretable MLC. MPED networks model the joint prediction of labels by replacing all RNNs in the encoder-decoder architecture with message passing mechanisms and dispense with autoregressive inference entirely. The proposed models are simple, fast, accurate, interpretable, and structure-agnostic (can be used on known or unknown structured data). Experiments on seven real-world MLC datasets show the proposed models outperform autoregressive RNN models across five different metrics with a significant speedup during training and testing time. | [
"Multi-label Classification",
"Graph Neural Networks",
"Attention",
"Graph Attention"
] | https://openreview.net/pdf?id=r1xYr3C5t7 | https://openreview.net/forum?id=r1xYr3C5t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1lKkzYbx4",
"B1e7a3fmC7",
"r1lSDhf707",
"BJeNI3GmAm",
"HJeVrnzmR7",
"Hylf42fm0X",
"rkghLczXCQ",
"H1gKXqMmRQ",
"S1gib5GQ0m",
"B1eexqMXCX",
"BkecRtfmCX",
"BygWhYGQC7",
"rkgQSKGQRQ",
"rJxlVtfmAQ",
"ryxPMFfmC7",
"rygMZtfmR7",
"rylTyYG7AX",
"HkgBCuGmRm",
"B1lmRDA037",
"ByeckYZ937",
"Sylw5GMBnm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544815073255,
1542823098889,
1542823005053,
1542822987919,
1542822971700,
1542822954359,
1542822483615,
1542822433093,
1542822403156,
1542822376406,
1542822354006,
1542822313508,
1542822203155,
1542822183844,
1542822159253,
1542822137721,
1542822117042,
1542822092708,
1541494731276,
1541179617987,
1540854414804
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1560/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1560/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1560/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1560/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers highlighted aspects of the work that were interesting, particularly on the chosen topic of multi-label output of graph neural networks. However, no reviewer was willing to champion the paper, and in aggregate all reviewers trend towards rejection.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Not a clear acceptance\"}",
"{\"title\": \"Summary of Changes\", \"comment\": [\"We would like to thank our reviewers for providing valuable comments and questions. Please see our revised manuscript, which we have updated to reflect the following changes.\", \"We have changed our title from \\u201cGraph2Graph Networks for Multi-Label Classification\\u201d to \\u201cNeural Message Passing for Multi-Label Classification\\u201d\", \"We have changed our method name from Graph2Graph Networks to Message Passing Encoder-Decoder (MPED) Networks for MLC\", \"We have revised the presentation of our approach from the graph angle to neural message passing. We feel the current writing is more intuitive, concise, and easier to follow. We have revised the model figure (Figure 1) into a more intuitive way to show the modular components for message passing between inputs, from inputs to labels, and between labels. In addition, we have revised the writing to reflect this modular component approach of our method. We want to emphasize that our model has not changed at all.\", \"We have addressed the helpful reviews about not using known input graph structure by adding a new dataset, SIDER, which is an MLC dataset for predicting multiple side effects using the molecule structure (graph) of a drug. Now our experiments contain seven real world MLC datasets which cover a wide spectrum of input data types, including: raw English text (sequential form), bag-of-words (tabular form), and drug molecules (graph form).\", \"We also addressed the concerns about not using known output graphs by expanding our experiments and considering the known graph structure among labels for six datasets. For Reuters, Bibtex, Delicious, and Bookmarks we have extracted label graphs using label similarity from WordNet. For RCV1, we use the known topological graph from the RCV1 dataset. For TFBS, we use String-DB to obtain the label graph representing TF-TF protein interactions. This new set of results are added using MPED Prior G_DEC in Table 2.\", \"We have addressed each reviewer's individual points in separate comments.\"]}",
"{\"title\": \"Novelty of our paper\", \"comment\": \"We thank reviewer 3 for the comments. We have updated our manuscript to provide a clearer explanation of our motivation and method. However, we have the following confusions.\\n1. We would like to ask for a clarification on the graph-coarsening idea, since our method does not use graph coarsening. \\n2. We don\\u2019t believe our paper is related to Ying et. al who develop a pooling method for graph classification. Our paper is not for graph classification at all.\\n3. Mrowca et al. introduce a hierarchical representation of node states for state prediction. We believe this is a perpendicular research direction and the Mrowca et al. paper is not about multi-label classification. Although their task is different than ours, a similar hierarchical representation could be added our model in future work. We have added this our related work section. \\n4. Most importantly, we do not claim novelty in our graphical neural network methodology. Rather, our method is a novel approach for MLC using neural message passing through an encoder-decoder architecture.\"}",
"{\"title\": \"Baselines\", \"comment\": \"We thank reviewer 3 for bringing up an important point about baselines.\\n1. The novelty of our approach is about applying message passing neural networks for MLC. In our experiments, we use state-of-the art MLC baselines including Seq2Seq MLC, SPEN MLC, and Binary Relevance MLC. \\n2. Our message passing method is fundamentally similar to Velickovic et al. 2017 who used attention message passing for graph classification. Our contribution is not about a novel graph neural network, but rather it is a novel approach for MLC. Our current choice of graph attention based neural message passing is able to specify different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation such as inversion (Kipf & Welling, 2016) or depending on knowing the graph structure a priori. Adding variations of graph neural networks will certainly enrich our paper, which is one of our future, for instance a hierarchical graph representation for our decoder.\"}",
"{\"title\": \"Writing is fairly dense and could be explained in under 8.5 pages\", \"comment\": \"We thank reviewer 3 for pointing out this drawback of our explanation. We have since updated the manuscript to reflect such our approach using an encapsulated representation of message passing modules and it now fits in under 8 pages.\"}",
"{\"title\": \"Can the same core model be useful for things beyond MLC as well?\", \"comment\": \"We thank reviewer 3 for asking this important question. In general, our message passing encoder-decoder method is a generic framework for any encoder-decoder model where the input and outputs can be represented as graphs. As explained in the updated version of our paper, this could be applied to other input or outputs such as drug molecule inputs (which we have since added). However, we want to show that message passing is a good method for MLC. There are many applications of similar models, for example a parrallel submission in ICLR 2019, the Graph Transformer (https://openreview.net/forum?id=HJei-2RcK7¬eId=HJei-2RcK7).\"}",
"{\"title\": \"How does our model compare to SPEN?\", \"comment\": \"We thank reviewer 2 for bringing up the important differences between our method and SPEN models. Indeed, we do not explicitly model the output structure as done in SPEN models. In contrast to SPEN and related CRF methods which use an iterative refinement of the output label predictions, our method is a simpler feedforward block to make predictions in one step. However, we plan to expand on our method by adding a SPEN output in future work.\"}",
"{\"title\": \"Is this a fully connected feed-forward network?\", \"comment\": \"We would like to thank reviewer 2 for bringing to light an important question about clarity in our explanation, which he have thus revised in the manuscript. We have revised the model figure (Figure 1) into a more intuitive way to show message passing between inputs, from inputs to labels, and between labels.\\n\\nIn Equations 1 and 2 we introduce a generic form of neural message passing, and we show how we implement neural message passing using graph attention in Equations 3-9. This is largely different from how we introduced graph attention in the previous version of our draft, which we think is now more intuitive and straightforward to understand. In summary, \\n+ Each node is represented as a weighted summation of the messages passed from all of its neighbors. \\n+The edges in our Figure 1 are not representing fully connected MLP edges. \\n+ We have revised Figure 1 to connect better to the equations. For example, in the decoder, the edge weights e_12 from node 2 to node 1 are calculated using Equations 3 and 4. Equation 5 further normalizes e_12, e_13, e_14, e_15 into a_12, a_13, a_14, a_15. a_12 is then used in Equation 6 to weight the messages from node 2 to node 1. Equation 7 aggregates all neighbors\\u2019 messages to node 1 by summing over messages from node 2 to 1, node 3 to 1, node 4 to 1, and node 5 to 1. Finally, the aggregated messages from the neighbors of node 1 are used to update the state of node 1 in Equations 8 and 9.\\n+ The above process (Equations 3-9) is certainly not a fully connected feed-forward network. It can be viewed as 1-dimensional convolution with kernel and stride sizes of 1. This is a key aspect of message passing neural networks, where feature dependencies are learned in a order-invariant manner. It is important to note that the W matrices are shared across node embeddings. In other words the W matrices are not fully connected across all labels and inputs. \\n\\nWe explain how we use attention message passing for MLC using our encoder-decoder approach in Equations 10-16.\"}",
"{\"title\": \"Can this work on graph-structured data?\", \"comment\": \"We agree that our original explanation was unclear and hard to read. The original model name Graph2Graph was a stretch since we didn\\u2019t use explicit graphs in the inputs or outputs. We have done a major revision of our draft summarized as follows:\\n1. We have changed our method name from \\u201cGraph2Graph Networks\\u201d to \\u201cMessage Passing Encoder-Decoder (MPED) Networks\\u201d. We want to emphasize that our model has not changed at all. We simply revised the presentation of our approach from the graph angle to neural message passing. We feel the current writing is more intuitive, concise, and easier to follow.\\n2. We have added new experiments using graph-structured inputs with a drug side effect dataset which uses drug molecule (graph) inputs. \\n3. In addition, we have added known structure of graph labels, which shows our method can in fact work on known graphs, as explained in the method and experiments section.\"}",
"{\"title\": \"Is the model interpretable?\", \"comment\": \"We would like to thank reviewer 2 for asking an important question about the importance of the attention mechanisms. There are a few reasons behind our claim:\\n1. We make this claim as an extension of previous works using attention (Bahdanau et al. 2015, Vaswani et al. 2017, Velickovic et al. 2017) which all show the advantage of interpretability of the attention weights on natural language and graph networks. \\n2. There exists a large literature (Simonyan et al., 2014, Bach et al. 2015, Tulio Ribeiro et al. 2016) about visualizing deep neural networks, and there are different ways to categorize these types of visualizations. Roughly, these lines of work fall into one of the following categories. First, feature attribution methods are concerned with how features of a sample contribute to a model output. Second, interaction attribution methods are concerned with how non-additive effects between features influence an outcome variable. Third, the locally interpretable model-agnostic explanations (LIME - Tulio Ribeiro et al. 2016) approximate model predictions in the local vicinity of a data sample.\\n\\nOur model provides three different levels of attention weights. The first is our input-to-input attention weights which are concerned with how these non-additive interactions among components contribute the representation of the inputs. In the second level, the input-to-label attention weights are about how the input components differently contribute to the representation of the output labels. In the third level, the label-to-label attention weights are concerned with how non-additive interactions among labels contribute the final representation of the labels before label classification. We agree that our attention weights are not about how features or feature interactions contribute directly to the outcome. We plan to combine our attention weights with approaches such as LIME to provide such end-to-end explanations.\"}",
"{\"title\": \"Can the model explanation be encapsulated and made more clear? And should the encoder be explained first?\", \"comment\": \"We thank reviewer 2 for recommending this way to make our paper more concise easier to understand. We agree that this is a much better way to explain our method. We have since updated the manuscript to reflect such an encapsulated approach, as well as incorporated the recommendation to introduce the encoder first.\"}",
"{\"title\": \"Are there errors in the bold font of the table indicating best performing methods?\", \"comment\": \"We would like to thank reviewer 2 for pointing out this critical typo. This has since been updated.\"}",
"{\"title\": \"Title and model name were misleading\", \"comment\": \"We would like to thank reviewer 1 for clarifying this important aspect of our paper. Accordingly, we have thus revised the manuscript with the following changes:\\n1. We have changed our title from \\u201cGraph2Graph Networks for Multi-Label Classification\\u201d to \\u201cNeural Message Passing for Multi-Label Classification\\u201d\\n2. We have changed our method name from Graph2Graph Networks to Message Passing Encoder-Decoder (MPED) Networks for MLC\\n3. We have revised the model figure (Figure 1.) into a more intuitive way to show message passing between inputs, from inputs to labels, and between labels. \\n4. We have added new experiments using explicit graph representations of labels, as explained in the method section. We found that modelling the labels using fully connected graphs produces better results in most cases.\"}",
"{\"title\": \"Unclear notations of weight matrices W\", \"comment\": \"We thank reviewer 1 for pointing out unclear notations. We have updated the paper to reflect the important point that weights are not shared between the different encoding and decoding modules of our framework. Specifically: weights for input-to-input are represented by W_xx (Equations 10 and 11), weights for input-to-label are represented by W_xy (Equations 12 and 13), weights for input-to-label are represented by W_yy (Equations 14 and 15).\"}",
"{\"title\": \"Representing sentences using fully connected graphs is not natural\", \"comment\": \"We agree that our wording on representing sentences as a fully connected graph was misleading, which we have since changed. Using a fully connected graph to model the interactions between input components allows us to (1) capture non-local feature interactions, and (2) speed up training and testing time by parallelization.\\n\\nIn addition, our framework is able to model more complex input representations such as chemical molecule samples. We have added one more real world MLC dataset which uses drug molecule inputs (results added in Table 2). In this case, the encoder now works on a known graph instead of a fully connected graph. On the equation level, we only need to change the summation indices (in Equation 10) to reflect the known neighbors of a node.\"}",
"{\"title\": \"Computational burdens of fully connected graphs are a drawback\", \"comment\": \"We would like to thank reviewer 1 for the detailed analysis of our approach in regard to computational burdens. We agree that this is a drawback that the parameters of our fully connected graphs scale quadratically with the number of output labels. This is one of the future work directions we are working on now. A few possible solutions include:\\n1. Using some sort of hierarchical representation over the label graph. \\n2. Message passing could be restricted to considering only k neighbors instead of all of the neighbors. In this case, some sorting algorithm would need to be used to rank neighbors.\\n\\nWe are unclear about the question regarding Figures 3 and 4. Can you please expand on the incongruent results between the input-to-label weights vs the label-to-label weights?\"}",
"{\"title\": \"Significance tests\", \"comment\": \"We thank reviewer 1 for mentioning statistical significance tests, which were missing from our experiments previously. We used the Nemenyi test, as used in Read et al., 2009. We have added the results to the Appendix Table 4.\"}",
"{\"title\": \"Minor issues\", \"comment\": \"We thank reviewer 1 for providing a detailed analysis of minor, yet important issues. There are not separate matrices W for each word/label. They are shared across words or labels, and we have updated the manuscript to reflect this. We have also fixed the mentioned typos and errors.\"}",
"{\"title\": \"Interesting but weaker novelty/experiments/writing\", \"review\": \"The paper describes an approach for using graph neural networks (GNN) to perform multi-label classification (MLC). The main idea is to use attentional pooling to project an input graph into a \\\"label graph\\\", whose nodes correspond to labels on some MLC problem. Multiple rounds of self-attention/message-passing hops can be performed on the input graph and label graph. Each output label is binary-valued, and is predicted from its corresponding node in the label graph. They evaluate on 6 multi-label sequence classification datasets, and report strong perform over baselines.\\n\\nThough interesting, I recommend rejection for several reasons:\\n\\n1) The technical contribution has limited novelty. One (very recent) reference this paper misses is \\\"Hierarchical Graph Representation Learning with Differentiable Pooling\\\" by Ying et al. (2018), which uses a very similar mechanism. The field is moving quickly, so references get missed sometimes, however from what I can tell, the graph-coarsening idea presented here isn't that technically distinct from Ying el al.'s. The Mrowca et al. (2018) \\\"Flexible Neural Representation for Physics Prediction\\\" is also fairly similar and should probably at least be cited.\\n\\n2) There aren't strong baselines. This approach is based on GNNs, and the Graph2MLP results, which is similar to previous GNN graph-level classification methods, are fairly strong too. My suspicion is that with some more tuning and tweaking, the results here would be similar to those of Ying et al., Velickovic et al. (2017)'s Graph Attention Nets, and other models which use what Gilmer et al. (2017) terms the \\\"readout\\\" function for MLC. Without testing some of these other approaches, how can readers be sure this is approach has value over other approaches? The reviews by Gilmer et al. (2017) and Battaglia et al. (2018) summarize a bunch of alternatives that could be tried, some of which use similar encoder/decoder setups (not with the attentional pooling, however, as far as I know).\\n\\n3) The writing is fairly dense for what is a fairly straightforward idea. And the paper is over 8.5 pages, with key details in the Appendix. \\n\\nI believe this approach could be quite powerful, and there was clearly a lot of excellent work that went into this project. But because the GNN area is very active, the bar is high. With a little more innovation on the model side (can the same core model be useful for things beyond MLC as well? I'm guessing it could), better baselines, better scholarship, and condensing the writing, I think this paper can be an important step forward.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A paper, with a misleading title, presenting experimental results for which statistical significance is not reported.\", \"review\": \"As a reviewer I am expert in learning in structured data domains. Because of that I completely disagree that the proposed title of the paper is not misleading. In fact, both the input and the output of the proposed system are not graphs. Moreover, the intermediate representations are always complete graphs, so there is no graph to graph transformation here. It is the internal topology of the encoder and decoder that corresponds to a complete graph and not the nature of the processed data.\\nThe main intended contribution of the paper is to define a system able to capture the dependencies among input features as well as output labels, so to improve the multi-label classification task addressed by the system. This is obtained by defining a recurrent model with a complete graph topology to both encode the input and decode the output. The decoding part starts from the assumption of independence among the output labels and then, via interaction with the encoded representation of the input, eventually turns to an output where relevant statistical dependences among output labels emerge with decoding. Since both encoding and decoding are recurrent models (with no enforced guarantee to have stable points), the paper proposes to unfold the recursion for a fixed predefined number of time steps.\\nPresentation of the proposal is generally good, although there are some issues that are not clear. For example, the same weights indices are used for matrices belonging to the encoding and decoding, making the reader to believe that such matrices are shared. In addition, the sentence about model parameters at page 5 is a bit ambiguous and it is not sufficient to resolve the presentation problem. \\nThe discussion at the end of page 4 on the fact that a sequential representation for the input components is not natural is actually out of place for the specific application task selected for presentation. In fact, words in a sentence have an order. The fact that such order is lost with the bag-of-word representation is a problem of preprocessing, not of the nature of the data. In general, however, it is true that forcing an order is not natural. \\nGoing in the merit of the proposal, the number of parameters for the decoder scales quadratically with the number of output labels (fully connected graph). In domains with a large numbers of labels (e.g. thousands) there may be concerns on two different aspects: i) computational burden may grow significantly even if the average number of labels per item is small; ii) proper propagation of information on dependencies among labels may require to use a large value for T (graph hops), i.e. there is a dependency between size of label graph and \\\"useful\\\" value for T. On this issue, by the way, figures 3 and 4 seem to report incongruent results since, because of symmetries in the model topology, equal and reciprocal influences between input components (and output labels) would have been expected, but these are not observed in the figures. \\nAnalogous considerations could be done for the encoder when the size of the input is large.\\nConcerning experimental results, no statistical significance test is performed, so it is not clear to me if the shown improvements are actually significant. 
Speed-up in training and testing seem at least to give some advantage with respect to other competing approaches, however the scaling problem described above for the decoder (and encoder) may lead to much worst performances in those special cases.\\nThe addressed problem is covered by a large literature, involving many different approaches. It would have been nice to report, for the selected datasets, the best performance (and computation times) obtained by, for example, probabilistic graphical models or SVM-based models.\\nThe paper seems to refer most of the relevant recent neural-based approaches.\\nI think the paper is relevant for ICLR (although there is no explicit analysis of the obtained hidden representations) and of interest for a good portion of attendees.\", \"minor_issues\": [\"two rows before Section 2.2.1: \\\\mathbb{h}_*^2 should be \\\\mathbb{h}_*^1\", \"equations 4, 5, 9, 10, 14: matrices W are indexed in such a way to assume that each input word/label is associated to a different matrix (i.e., set of parameters). Is this really the case ? How is then managed the fact that different inputs may have a different number of components ? how is a specific matrix assigned to a specific word ? I guess this is a presentation mistake, otherwise there are relevant issues that are completely not addressed by the presentation.\", \"equation (10): since the output should be interpreted as a probability, why not using a softmax? sigmoidal units by themselves do not guarantee that the outputs sum to 1. I guess you do not have this problem because you adopt batch normalisation. This however is conceptually not nice since there is no uniformity across the dataset. Moreover, the softmax function has a nice probabilistic interpretation in the family of the exponential distributions.\", \"\\\"[...] we use add a positional encoding...\\\"\", \"Multi-head Attention: apart for the not so clear description, the equation involving the softmax is missing.\", \"\\\"[..] the the attention and feedforward layers.\\\"\", \"\\\"[..] the the sum of the total true...\\\"\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Graph2Graph without any graph structured inputs or outputs.\", \"review\": [\"This paper proposes an encoder-decoder model based on the graph representation of inputs and outputs to solve the multi-label classification problem. The proposed model considers the output labels as a fully connected graph where the pair-wise interaction between labels can be modelled.\", \"Overall, although the proposed approach seems interesting, the representation of the paper needs to be improved. Below I listed some comments and suggestions about the paper.\", \"The proposed model did not actually use any graph structure of input and output, which can potentially mislead the readers of the paper. For instance, the encoder is just a fully connected feed-forward network with an additional attention mechanism. In the same sense, the decoder is also just a fully connected feed-forward network. Furthermore, the inputs and outputs used throughout the paper do not have any graph structure or did not use any inferred graph structure from data. I recommend using any graph-structured data to show that the proposed model can actually work with the graph-structured data (with proper graph notations) or revise the manuscript without graph2graph representation.\", \"I personally do not agree with the statement that the proposed model is interpretable because it can visualise the relation between labels through the attention. NN is hard to interpret because the weight structure cannot be intuitively interpretable. In the same sense, the proposed model cannot avoid the problem with the nature of black-box mechanism. Especially, multiple weight matrices are shared across the different layers, which makes it more difficult to interpret. Although the attention weights can be visualised, how can we visualise the decision process of the model from end-to-end? The question should be answered to claim that the model is interpretable.\", \"2.2.1, 2.2.2, 2.3 shares the similar network layer construction, which can be represented as a new layer of NN with different inputs (or at least 2.2.2 and 2.3 have the same layer structure). It would be better to encapsulate these explanations into a new NN module which can be reused multiple parts of the manuscript for a concise explanation.\", \"Although the network claims to model the interactions between labels, the final prediction of labels are conditionally independent to each other, whereas the energy based models such as SPEN models the structure of output directly. In that sense, the model does not take into account the structure of output when the prediction is made although the underlying structure seems to model the 'pair-wise' interaction between labels.\", \"In Table1, if the bold-face is used to emphasise the best outcome, I found it is inconsistent with the result (see the output of delicious and tfbs datasets).\", \"Is it more natural to explain the encoder first followed by the decoder?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
rkgKBhA5Y7 | There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average | [
"Ben Athiwaratkun",
"Marc Finzi",
"Pavel Izmailov",
"Andrew Gordon Wilson"
] | Presently the most successful approaches to semi-supervised learning are based on consistency regularization, whereby a model is trained to be robust to small perturbations of its inputs and parameters. To understand consistency regularization, we conceptually explore how loss geometry interacts with training procedures. The consistency loss dramatically improves generalization performance over supervised-only training; however, we show that SGD struggles to converge on the consistency loss and continues to make large steps that lead to changes in predictions on the test data. Motivated by these observations, we propose to train consistency-based methods with Stochastic Weight Averaging (SWA), a recent approach which averages weights along the trajectory of SGD with a modified learning rate schedule. We also propose fast-SWA, which further accelerates convergence by averaging multiple points within each cycle of a cyclical learning rate schedule. With weight averaging, we achieve the best known semi-supervised results on CIFAR-10 and CIFAR-100, over many different quantities of labeled training data. For example, we achieve 5.0% error on CIFAR-10 with only 4000 labels, compared to the previous best result in the literature of 6.3%. | [
"semi-supervised learning",
"computer vision",
"classification",
"consistency regularization",
"flatness",
"weight averaging",
"stochastic weight averaging"
] | https://openreview.net/pdf?id=rkgKBhA5Y7 | https://openreview.net/forum?id=rkgKBhA5Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJx-YnPEIN",
"HyemfGLdg4",
"HylUNeePeE",
"ryenTr_IxN",
"SJgj_rd8eN",
"S1xxKLOQlE",
"B1e9StjNCQ",
"HygKOhalA7",
"SyeJI36lCX",
"B1lZLspeAm",
"B1g4NjaeAX",
"HJgMPuQNTm",
"rklWfa233Q",
"S1gzzDv5hQ",
"HJlan4ZqnX",
"rkgNXKDCqX",
"SkgwKUS6cX"
],
"note_type": [
"official_comment",
"official_comment",
"official_comment",
"comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1551297656784,
1545261579456,
1545170990465,
1545139652151,
1545139571399,
1544943223905,
1542924609870,
1542671472732,
1542671430948,
1542671176689,
1542671147851,
1541843033741,
1541356808881,
1541203721987,
1541178549109,
1539369243839,
1539294846753
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1559/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1559/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1559/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1559/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1559/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1559/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1559/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1559/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1559/Authors"
],
[
"~Olivier_Grisel1"
],
[
"ICLR.cc/2019/Conference/Paper1559/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1559/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1559/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1559/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"title\": \"Response\", \"comment\": \"Thank you, we updated Table 1 to include the results from [2].\"}",
"{\"title\": \"Clarification (duplicated from below)\", \"comment\": \"Hi, thank you for your comment. In our paper we exactly replicate the experimental setup of [1], which uses 5000 validation images, in order to directly compare our approach with the most relevant existing literature. We note [2] uses a larger validation set of 10000 images, and [3] does not discuss validation. We are aware of the work [4], but note that their paper, with respect to validation data, only argues that small validation sets make model comparison difficult (section 4.6). In their experiments (everywhere except for section 4.6) they still use a full validation set of 5000 images, and thus replicating the setup in [4] would not change our results. Moreover, code for the evaluation framework [4] was released after the submission deadline for ICLR. Additionally, in our experiments we reuse the hyper-parameters of the Mean Teacher method [1] and only tune the learning rate schedule for fast-SWA on the validation set. And in section A.3 we demonstrate that the performance is not sensitive to the choice of this learning rate schedule.\\n\\n[1] Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results; Antti Tarvainen, Harri Valpola\\n[2] Virtual adversarial training: a regularization method for supervised and semi-supervised learning; Miyato, Takeru, Maeda, Shin-ichi, Koyama, Masanori, and Ishii, Shin\\n[3] Temporal ensembling for semi-supervised learning; Laine, Samuli and Aila, Timo\\n[4] Realistic Evaluation of Deep Semi-Supervised Learning Algorithms; Avital Oliver, Augustus Odena, Colin Raffel, Ekin D. Cubuk, Ian J. Goodfellow\"}",
"{\"title\": \"Clarification\", \"comment\": \"Hi, thank you for your comment. In our paper we exactly replicate the experimental setup of [1], which uses 5000 validation images, in order to directly compare our approach with the most relevant existing literature. We note [2] uses a larger validation set of 10000 images, and [3] does not discuss validation. We are aware of the work [4], but note that their paper, with respect to validation data, only argues that small validation sets make model comparison difficult (section 4.6). In their experiments (everywhere except for section 4.6) they still use a full validation set of 5000 images, and thus replicating the setup in [4] would not change our results. Moreover, code for the evaluation framework [4] was released after the submission deadline for ICLR. Additionally, in our experiments we reuse the hyper-parameters of the Mean Teacher method [1] and only tune the learning rate schedule for fast-SWA on the validation set. And in section A.3 we demonstrate that the performance is not sensitive to the choice of this learning rate schedule.\\n\\n[1] Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results; Antti Tarvainen, Harri Valpola\\n[2] Virtual adversarial training: a regularization method for supervised and semi-supervised learning; Miyato, Takeru, Maeda, Shin-ichi, Koyama, Masanori, and Ishii, Shin\\n[3] Temporal ensembling for semi-supervised learning; Laine, Samuli and Aila, Timo\\n[4] Realistic Evaluation of Deep Semi-Supervised Learning Algorithms; Avital Oliver, Augustus Odena, Colin Raffel, Ekin D. Cubuk, Ian J. Goodfellow\"}",
"{\"comment\": \"Hi,\\nI noticed that your holdout set size is 5000. This means that in practice you are using 9000 labeled examples rather than the reported 4000. As shown recently in Oliver et al (https://arxiv.org/pdf/1804.09170.pdf) in a --proper-- evaluation where only 4000 labeled examples are used, the accuracy of SSL algorithms drops considerably. \\nThus, it is not clear why you report results for the 9K case as those obtained using 4K labeled examples. \\nCan you please comment, and also say what your results are when using only 4K examples (as in Oliver et al evaluation scheme).\\nThanks\", \"title\": \"Holdout set\"}",
"{\"comment\": \"Hi,\\nI noticed that your holdout set size is 5000. This means that in practice you are using 9000 labeled examples rather than the reported 4000. As shown recently in Oliver et al (https://arxiv.org/pdf/1804.09170.pdf) in a --proper-- evaluation where only 4000 labeled examples are used, the accuracy of SSL algorithms drops considerably. \\nThus, it is not clear why you report results for the 9K case as those obtained using 4K labeled examples. \\nCan you please comment, and also say what your results are when using only 4K examples (as in Oliver et al evaluation scheme).\\nThanks\", \"title\": \"Holdout size\"}",
"{\"metareview\": \"All reviewers appreciate the empirical analysis and insights provided in the paper. The paper also reports impressive results on SSL. It will be a good addition to the ICLR program.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting analysis and insights into SWA for semisupervised learning\"}",
"{\"title\": \"Response\", \"comment\": \"Dear Olivier,\\n\\nThank you for your thoughtful comments. \\n\\nThe consistency loss encourages noise stability, which is related to compressibility in [1]. In section 3.1 we argue that the consistency loss penalizes Jacobian norm of the network on the unlabeled data. We believe that the interlayer cushion is a related but different measure of noise stability. We agree that [1] is related to the discussion in section 3.1, and we will cite it in an updated version of the paper. We note, however, that consistency regularization provides little improvement when applied to labeled training data alone -- the less constrained unlabelled data is crucial to the performance of the method. Also, targeted perturbations like data augmentation have a much larger impact on performance than more isotropic perturbations like Gaussian noise and dropout. We believe that understanding these behaviours better theoretically in relationship to [1] warrants further investigation. Specifically, there may be mileage to be gained considering non-isotropic noise stability given that the inputs lie in a much lower dimensional space than the full space of pixels.\\nWe have looked at fast-SWA in other domains, such as RL, with good preliminary results. For the reasons given in the paper, however, semi-supervised learning is particularly compelling for these approaches, and vision benchmarks provide a clean way to thoroughly explore and evaluate these benefits. As you mention, auto-regressive methods are not as likely to benefit from consistency regularization because the predictions are already constrained by self-supervision. We conjecture, however, that sequence labeling tasks with extra unlabelled sequences may benefit from consistency regularization and fast-SWA, which is an interesting direction for future work.\\n\\n\\n[1] Stronger Generalization Bounds for Deep Nets via a Compression Approach\\nSanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang; 2018.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your support and thoughtful questions. Below we address your questions:\\n\\n1. We have not yet been able to reproduce the baseline (without SWA) results for the MT model. We have been in touch with the authors of [1] to replicate these results. ImageNet experiments have also been difficult to run due to limited computational resources. Semi-supervised learning is more computationally intense than standard supervised training, which amplifies the computational difficulties and expense in running ImageNet experiments.\\n\\n2. All the errors reported in the paper are top-1 errors. We will make this more explicit in the updated draft.\\n\\n3. The epochs 170-180 are the last 10 epochs of training. We select these epochs as we are interested in the regime when the training has converged to the neighbourhood of the optimum, rather than the behavior during early iterations. We argue that in this regime SGD explores the set of possible solutions instead of converging to a single solution.\\n\\n4. The typical setup uses perturbations from the data augmentations (random translation and flipping) and from dropout. The space of images is highly structured, and as such we believe that the more targeted perturbations of translation and flipping are more efficient at enforcing meaningful consistencies between teacher and student. In the 3072-dimensional input space, random perturbations will have a low projection onto these more meaningful directions. \\n\\n5. The difference between the results of ensembling and averaging weights is sufficiently minor that the ordering could be different had we used a different dataset or architecture. We focus on weight averaging since ensembling N models results in N-fold increase in the number of computations at test time. Note that Izmailov et al. (Averaging Weights Leads to Wider Optima and Better Generalization, 2018, section 3.5) provides an argument that weight averaging approximates ensembling given that the averaged models are close in the weight space.\\n\\n6. In Figure 2b we found that different methods can achieve different gains from averaging. Our empirical analysis in section 3 is focused on the Pi model and Mean Teacher, as the training trajectories for VAT are different, due to adversarial perturbations. Since random perturbations in Pi and MT lead to heterogeneous solution spaces, the gains from averaging could be greater in these model classes due to capturing a greater diversity of models.\\n\\n[1] Tarvainen and Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. NIPS, 2017\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We appreciate your supportive and thoughtful review. Our responses are below:\\n\\n1. Novelty can have many forms, and in this paper the main novelty, which we believe to be very significant, and also rare, is a thorough conceptual exploration of how loss geometry interacts with training procedures, particularly for semi-supervised learning, leading to several meaningful insights. Since many settings of neural network weights lead to essentially no loss, it is of foundational importance to understand how the geometric properties of a solution affect generalization. While conceptual papers can be more difficult to assess, they are often highly impactful, such as the ICLR paper by Zhang et. al (2016) on rethinking generalization [1].\\n\\nIn addition to these conceptual advances, we do also propose simple algorithms, combined with very strong results on many thorough experiments. In this context, we view simplicity as a strength. There is sometimes a temptation to propose complicated approaches that can appear highly novel, but are not adopted because similar results can be achieved by simpler alternatives. It is our contention that the extensive strong results in our paper combined with a simple algorithm, and a novel conceptual understanding (which is rare), are a real service to the community. \\n\\nAs you note we also make novel theoretical contributions, some of which are in the appendix. We will highlight some of this material more in the main text. In addition to the novel theoretical and methodological contributions, there is also a novel empirical analyses in sections 3.2-3.3.\\n\\n\\nIn the simplified \\\\Pi model, we consider small additive input perturbations to the inputs whereas in the full \\\\Pi model we use random translations and horizontal flips of the inputs, and dropout perturbations on the weights. Tarvainen & Valpola (2017) showed that dropout could be removed without much degradation in performance. We view the random translations to be more targeted perturbations that lie along directions of the image manifold. This case is referred to in a footnote of the appendix section A.5. We mentioned the main results from the theoretical analysis in the main text but keep the proof details in the appendix due to space limitation. We will bring forward key parts to the main text for clarity. \\n\\n2. Yes, (fast-)SWA can be used on purely supervised problems. SWA was used for supervised problems on CIFAR-10 and ImageNet (Izmailov, 2018) and achieved improved performance over SGD. Our paper, however, shows that the gains from weight averaging in consistency-based models are much larger than in semi-supervised learning than in supervised learning due to the geometric properties of the training trajectories and solutions discussed in Section 3. \\n\\nWe really appreciate your strong support, and we hope that you can consider our comments on overall novelty -- across methods, experiments, theory, and conceptual understanding -- combined with strong results, in your final assessment. We are happy to answer any further questions.\\n\\n[1]: Zhang et. al. Understanding deep learning requires rethinking generalization. ICLR 2017.\"}",
"{\"title\": \"Response to Reviewer 1 (continued)\", \"comment\": \"5. Regarding the SGD-SGD ray analysis:\\n\\nSee 1.\\n\\n6. Regarding Mandt\\u2019s paper:\\nThank you for the suggestion, we will include an argument explaining the behavior in Figure 2d based on Mandt\\u2019s paper [3].\\n\\n7. Regarding fast-SWA for supervised learning:\\nIn the paper we show that the exploration done by SGD late in training in semi-supervised learning is more aggressive than in supervised learning, and leading to greater benefits from averaging. Fast-SWA, which averages weights more frequently than SWA, is designed to make use of this exploration. We also obtained preliminary results suggesting that fast-SWA can significantly improve performance in a domain adaptation model that uses the consistency term (see Section 5.5). We leave a thorough analysis of fast-SWA in supervised learning and other applications, such as domain adaptation, for future work.\\n\\n\\n8. Regarding Table 1:\\nTable 1 summarizes the results of our approach and the best previous results reported in the literature across different settings. \\u201cPrevious Best CNN\\u201d and \\u201cOurs CNN\\u201d show the results of our proposed method and the best previously reported result for the 13-layer CNN architecture, which is commonly used in the literature (see section A8 for the architecture description). \\u201cPrevious Best\\u201d and \\u201cOurs\\u201d show the results for the ResNet architectures, which are the best results reported in the literature overall. In both cases the comparisons are fair, as the methods are using the same architecture. Note that we also present a direct comparison between our approach and the alternatives *with everything else kept equal* in the Figure 4 and Tables 2-5.\\n\\n\\n[1]: Arora et. al, Stronger generalization bounds for deep nets via a compression approach, 2018.\\n[2]: Anonymous, Gradient Descent Happens in a Tiny Subspace, 2019.\\n[3]: Mandt et al., Stochastic Gradient Descent as Approximate Bayesian Inference, 2017.\\n[4]: Dinh et al., Sharp Minima Can Generalize for Deep Nets\\n[5]: Sagun et al., Empirical Analysis of the Hessian of Over-Parametrized Neural Networks, 2017.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your thoughtful and supportive feedback. We address your questions below.\\n\\n1. Regarding the argument in section 3.2: \\n\\nIn section 3.2 we discuss the behavior of train and test error along different types of rays: SGD-SGD, random, and adversarial rays. We analyze the error along SGD-SGD rays for two reasons. Firstly, in fast-SWA we are averaging solutions traversed by SGD so the rays connecting SGD iterates serve as a proxy for the space we average over. Secondly, we are interested in evaluating the width of the solutions that we explore during training which we expect will be improved by the consistency training as discussed in section 3.1 and A.6. We do not expect width along random rays to be very meaningful because there are many directions in the parameter space that do not change the network outputs (see e.g. [2, 4, 5]). However, by evaluating SGD-SGD rays, we can expect that these directions corresponds to meaningful changes to our model because individual SGD updates correspond to directions that change the predictions on the training set. Furthermore, we observe that different SGD iterates produce significantly different predictions on the test data.\\n\\nIn section 3.2 we observe that along SGD-SGD directions the Pi and MT solutions are much wider than supervised solutions. On the other hand, we observe that along random and adversarial directions the difference in flatness is less pronounced. Neural networks in general are known to be resilient to noise, explaining why both MT / Pi and Supervised models are flat along random directions [1]. At the same time neural networks are susceptible to targeted perturbations (such as adversarial attacks). We hypothesize that we do not observe improved flatness for semi-supervised methods along adversarial rays because we do not choose our input or weight perturbations adversarially, but rather they are sampled from a predefined set of transformations.\\n\\n\\n2. Regarding the choice of epochs 170, 180 for SGD-SGD ray analysis:\\nWe consider epochs 170 and 180 (the last 10 epochs of training) for the SGD-SGD rays, as we are interested in the regime when the training has converged to the neighbourhood of the optimum, rather than the behavior during early iterations. We argue that in this regime SGD explores the set of possible solutions instead of converging to a single solution.\\n\\nBased on your suggestion, we computed the cosine similarity between the SGD-SGD rays for epoch pairs 170&175 and 175&180 using the Pi model. We measured a value of -0.065, which corresponds to an angle of 93 degrees. Thus, the path traversed by SGD late in training is rather far from linear as the weight updates between epochs 170 and 175 and between epochs 175 and 180 are almost orthogonal. \\n\\n\\n3. Regarding the similarity between SGD-SGD rays and adversarial rays for supervised training:\\n\\nSGD-SGD directions and adversarial rays are indeed related. The adversarial ray for train loss at the given point is aligned with the gradient of the train loss at this point. The directions between SGD solutions from different epochs are also obtained by combining multiple gradient steps. 
In particular, if we use the full dataset as our mini-batch, the ray connecting SGD solutions at epochs 170 and 171 would be aligned with the adversarial ray computed at the SGD solution for epoch 170 (but pointing in the opposite direction).\\n\\nSince the adversarial ray is constructed using only the derivative of the train loss at a given point -- this local derivative information says that if we perturb the weights with an infinitesimal step, the error goes up the fastest along this adversarial direction -- it is not guaranteed that along the adversarial ray, for any given distance, the error would be as large as possible. In Figure 2 (d) we observe that locally the train and test error go up more sharply along adversarial rays, but for larger distances SGD-SGD rays exhibit similar behavior. \\n\\n4. Regarding training longer:\\nYes, the model continues to explore even if we train longer, if we don\\u2019t anneal the learning rate to zero. In particular note that for the results in section 3.3 we extend the training time. We run training for a total of 330 epochs using a cyclical learning rate schedule (see section A9 for the details). Further, note that in combination with SWA or fast-SWA running longer consistently leads to improved performance. For example on CIFAR-10 with 4k labeled data using MT+fast-SWA we get 10.7% test error after 180 epochs, 10.34% after 240 epochs, 9.86% after 480 epochs, and 9.05% after 1200 epochs (see Tables 2-5 in the appendix for detailed results). The fact that running fast-SWA longer improves the results suggests that SGD continues to explore diverse solutions and is demonstrated by the diversity plots in figures 2 and 7.\"}",
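For completeness, a small sketch of the two measurements discussed in this response: evaluating error along the ray between two SGD checkpoints, and the cosine similarity between consecutive SGD-SGD rays. The flattened weight vectors here are random placeholders; in practice each interpolated point would be loaded into the network and evaluated on train/test data.

```python
import numpy as np

def ray_points(theta_a, theta_b, n_points=11):
    # Weights interpolated along the SGD-SGD ray from checkpoint a to b;
    # evaluating the network at each point traces out curves as in Fig. 2.
    direction = theta_b - theta_a
    return [theta_a + t * direction for t in np.linspace(0.0, 1.0, n_points)]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(2)
w170, w175, w180 = (rng.normal(size=1000) for _ in range(3))  # toy iterates
print(len(ray_points(w170, w180)))        # 11 points along the ray
print(cosine(w175 - w170, w180 - w175))   # near 0 if updates are orthogonal
```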
"{\"comment\": \"Very interesting empirical study and nice results. I have two remarks / questions.\\n\\n- It seems that the consistency regularization loss is directly optimizing the interlayer noise cushioning terms from the generalization bound given in:\\n\\nStronger Generalization Bounds for Deep Nets via a Compression Approach\\nSanjeev Arora, Rong Ge, Behnam Neyshabur, Yi Zhang ;\\nProceedings of the 35th International Conference on Machine Learning, PMLR 80:254-263, 2018.\", \"http\": \"//proceedings.mlr.press/v80/arora18b.html\\n\\nMaybe you should discuss the relation to this theoretical work in your manuscript.\\n\\n\\n- Have you tried to apply this to non-image classification problems? In particular the combination of stochastic regularization + weight averaging seemed to be important to get SOTA performance on recurrent language models:\", \"https\": \"//github.com/salesforce/awd-lstm-lm\\n\\nI am wondering if fast-SWA with its consistency loss term could improve upon the Averaged SGD + stochastic regularization combination.\\n\\nArguably, auto-regressive language modeling cannot benefit from the semi-supervised setting as it's already a self-supervised task.\", \"title\": \"Consistency regularization vs noise cushioning\"}",
"{\"title\": \"Review\", \"review\": \"This paper proposes to apply Stochastic Weight Averaging to the semi-supervised learning context. It makes an interesting argument that the semi-supervised MT/Pi models are especially amenable to SWA since they are empirically observed to traverse a large flat region of the weight space during the later stages of training. To speed up training, the authors propose fast-SWA.\\n\\nSecition 3.2 is a little confusing. \\n- If a random direction is, with high probability, not penalized, then why is it so flat along a random direction? Or is this simply an argument for why it is not guaranteed to be penalized, and therefore adversarial rays exist? I think the claim needs to be more precise (though it remains unclear how accurate the claim would be).\\n- I also think that there is maybe something special about measuring the SGD-SGD ray at epochs 170/180. It coincides with the regime of training where the signal is dominated by the consistency loss. Is it possible this somehow induces a near-linear path in the parameter space? I would be interested in seeing projections of other epoch\\u2019s SGD-SGD (e.g. 170/17x) vectors onto the 170/180 SGD-SGD ray and the extend to which they are co-linear. \\n- It is also striking that traversing the SGD-SGD ray causes an error rate so similar to the adversarial ray for the supervised model; can the authors explain this phenomenon? \\n- All this being said, I find the diversity argument compelling---though what would happen if we train the model even longer? Does it keep exploring?\\n- Overall, I am not sure how comfortable we should be with interpreting the SGD-SGD ray results. It is important that the authors provide a convincing argument for the interpretability of the SGD-SGD ray results, as this appears to be the key to the \\u201clarge flat region\\u201d claim.\\n\\nI think Mandt\\u2019s paper should be cited in-text, since this is what motivates Figure 2d.\\n\\nIs the benefit of Fast-SWA\\u2019s fast convergence (to a competitive/better solution than SWA) unique to semi-supervised learning? Or can it be demonstrated by fully-supervised learning too? Given the focus on the semi-supervised regime, I would prefer if what the authors are proposing is, in some sense, special to the semi-supervised regime.\\n\\nTable 1 is confusing to read. I just want to see a comparison between with and without using fast-SWA, *with all else kept equal*. Is the intention to compare \\u201cPrevious Best CNN\\u201d and \\u201cOurs CNN\\u201d? Is this a fair comparison?\", \"pros\": [\"Interesting story\", \"Good empirical performance\"], \"cons\": \"- Unclear whether the story is entirely correct\\n\\nIf the authors can provide a convincing case for the interpretability of the SGD-SGD results, I am happy to raise my score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very thorough analysis but limited novel contribution\", \"review\": \"OVERVIEW:\\nThe paper looks at the problem of self-supervised learning using consistency-enforcing approaches. Their main contributions are two-fold:\\n1. Analysis to understand current state-of-the-art methods for self-supervised learning, namely the Mean Teacher model (MT) by Tarvainen and Valpola (2017) and the \\\\Pi model (Laine and Aila, 2017). They show a theoretical analysis (Sec.3.1) of a simplified version of the \\\\Pi model and show that it reaches flatter minima leading to good generalization. They show an analysis of the SDG trajectories (Sec. 3.2) that shows how these self-supervised models achieve flatter and lower minima compared to a fully supervised approach. They also provide an intuitive explanation to explore more solutions along the SGD trajectory. Finally, in Sec.3.3, they also discuss how ensembling and weight averaging help get better solutions.\\n2. Fast-SWA, which is a tweak to the SWA procedure (Izmailov et al, 2018) that averages models in the weight space along the SGD trajectory with a cyclical learning rate.\\nThey show good performance on CIFAR-10 and CIFAR-100 with their proposed Fast-SWA.\", \"pros\": \"1. The paper contains a lot of empirical analysis explaining the behavior of these models and providing intuition about the optimization leading to their proposed solution. The problem and experiments are very organized and explained very well.\\n2. Exhaustive experiments, plots and tables showing very good performance on the standardized benchmark.\", \"cons\": \"1. The novel contribution (as I see it) is in the theoretical analysis of Sec. 3.1 & A.5 and the Fast-SWA procedure. The Fast-SWA is a minor tweak to the regular SWA. The theoretical analysis is the main novelty and it is hidden away in the appendix ! Also, the results seems to be derived on the basis of Avron and Toledo and the authors' contribution relative to that is not clear. Also, what is the difference between the regular \\\\Pi model and simplified \\\\Pi model and how big a difference does this make in your theory ?\\n2. Can the Fast SWA be used directly say while supervised training of ImageNet ? Or is it applicable only to self-supervised problems ? Comments on the generalizability of this contribution might help increase novelty.\", \"overall\": \"I like the thorough analysis and good results of the paper. The novelty being a little weak results in the final rating of 7.5 (rounded up to 8, subject to change depending on other reviewers).\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice read + questions\", \"review\": \"The paper is nice thread, easy to follow.\\n\\nThe paper proposed to apply SWA (Stochastic Weight Averaging) Izmailov et al. 2018 to the semi-supervised approached based on consistency regularization. The paper first describes the related work nicely and offers a succinct explanation of two semi-supervised approaches they study. The paper then present an analysis on SGD trajectories of these 2 approaches, drawing comparisons with the supervised training and then building a case of why SWA is a valid idea to apply. The analysis section is very well described, the theoretical explanations are easy to follow and Figure 1, Figure 2 are really helpful to understand this analysis. \\n\\nOverall, the paper offers a useful insight into semi-supervised model trainings and offers recipe of converging to supervised results which is a valid contribution.\", \"i_have_following_questions_to_the_authors\": \"1. Did the authors do the analysis and apply SWA on ImageNet training besides Cifar-10 and Cifar-100\\n2. The accuracy number reported in abstract (5.0% error) is top-1 error or top-5 error? I think it's top-5 but explicit mention would be great.\\n3. In section 3.2, authors offer an analysis by chosing epoch 170, 180. How are these epochs chosen?\\n4. In section 3.1, authors consider a simple model version where only small additive perturbations to student inputs are applied. Is this a practical setup i.e. is this ever the case in actual model training?\\n5. In section 3.3, pg 6, do authors have intuition into why weight averaging has better improvement (1.18) vs ensembling (0.94)?\\n6. In section 5.2, page 8 , can authors provide their intuition behind the results: \\\"We found that the improvement on VAT is not drastic \\u2013 our base implementation obtains 11.26% error where fast-SWA reduces it to 10.97%\\\" - why did fast-SWA not improve much?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Clarification\", \"comment\": \"Hello,\\nThank you for your comment. In the introduction we mention the improvement over the best overall result on CIFAR-10 with 4k unlabeled data points, which is achieved using ResNet with Shake-Shake regularization and which belonged to [3]. We improve their result from 93.7% accuracy to 95% accuracy. Note that in the experiments section we also provide results for the 13-layer CNN used by [2] (the paper you mentioned). For that architecture, the best results previously reported in the literature were to the best of our knowledge achieved by [1] (90.8% as opposed to 90% for the paper [2] you mentioned). We also further improve the results from [1] on that architecture.\\n\\n[1] Adversarial Dropout for Supervised and Semi-Supervised Learning. Sungrae Park, Jun-Keon Park, Su-Jin Shin, Il-Chul MoonSungrae Park, Jun-Keon Park, Su-Jin Shin, Il-Chul Moon\\n[2] Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect. Wei, X., Gong, B., Liu, Z., Lu, W. and Wang, L.\\n[3] Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Antti Tarvainen, Harri Valpola\"}",
"{\"comment\": \"\\u201c... improving the best result reported in the literature (Tarvainen and Valpola, 2017) by 1.3%.\\u201d --- appeared in the introduction.\\n\\nFYI, the statement above seems like outdated because the results reported in (Tarvainen and Valpola, 2017) have been surpassed by (Wei et al., 2018) for the same underlying network architecture. It is unclear how well the WGAN+consistency method of (Wei et al., 2018) could work for the Shake-Shake architecture. \\n\\nWei, X., Gong, B., Liu, Z., Lu, W. and Wang, L., 2018. Improving the Improved Training of Wasserstein GANs: A Consistency Term and Its Dual Effect. arXiv preprint arXiv:1803.01541.\", \"title\": \"About \\\"the best result reported in the literature\\\"\"}"
]
} |
|
ryeOSnAqYm | Synthetic Datasets for Neural Program Synthesis | [
"Richard Shin",
"Neel Kant",
"Kavi Gupta",
"Chris Bender",
"Brandon Trabucco",
"Rishabh Singh",
"Dawn Song"
] | The goal of program synthesis is to automatically generate programs in a particular language from corresponding specifications, e.g. input-output behavior.
Many current approaches achieve impressive results after training on randomly generated I/O examples in limited domain-specific languages (DSLs), as with string transformations in RobustFill.
However, we empirically discover that applying test input generation techniques for languages with control flow and rich input space causes deep networks to generalize poorly to certain data distributions;
to correct this, we propose a new methodology for controlling and evaluating the bias of synthetic data distributions over both programs and specifications.
We demonstrate, using the Karel DSL and a small Calculator DSL, that training deep networks on these distributions leads to improved cross-distribution generalization performance. | [
"programs",
"specifications",
"languages",
"deep networks",
"synthetic datasets",
"neural program synthesis",
"goal",
"program synthesis",
"particular language"
] | https://openreview.net/pdf?id=ryeOSnAqYm | https://openreview.net/forum?id=ryeOSnAqYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SylZ2o48g4",
"r1g7QIKGJN",
"SJeY4OtP0m",
"rJeYYBtwR7",
"BJeUtNtPRQ",
"rkxEB4tvA7",
"Bkl7uGKvAX",
"Syl7lJbO67",
"S1xqDuXihm",
"B1lM1QEc2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545124777166,
1543833115349,
1543112753332,
1543112065514,
1543111806353,
1543111740466,
1543111274689,
1542094570932,
1541253218443,
1541190361756
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1558/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1558/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1558/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1558/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1558/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1558/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1558/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1558/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1558/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1558/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper analyzes existing approaches to program induction from I/O pairs, and demonstrates that naively generating I/O pairs results in a non-uniform sampling of salient variables, leading to poor performance. The paper convincingly shows, via strong evaluation, that uniform sampling of these variables can much result in much better models, both for explicit DSL and implicit, neural models. The reviewers feel the observation is an important one, and the paper does a good job providing sufficiently convincing evidence for it.\", \"the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"(1) the paper does not propose a new model, but instead a different data generation strategy, somewhat limiting the novelty, (2) Salient variables that need to be uniformly sampled are still user specified, (3) there were a number of notation and clarity issues that make it difficult to understand the details of the approach, and finally, (4) there are concerns with the use of rejection sampling.\\n\\nThe authors provided major revisions that address the clarity issues, including an addition of new proofs, cleaner notation, and removal of unnecessary text. The authors also included additional results, such as KL divergence evaluation to show how uniform the distribution is. The authors also described the need for rejection sampling, especially for Karel dataset, and clarified why the Calculator domain, even though is not \\\"program synthesis\\\", still faces similar challenges. The reviewers agreed that not having a new model is not a chief concern, and that using rejection sampling is a reasonable first step, with more efficient techniques left for others for future work.\\n\\nOverall, the reviewers agreed that the paper should be accepted. As reviewer 1 said it best, this paper \\\"is a timely contribution and I think it is important for future program synthesis papers to take the results and message here to heart\\\".\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Important observation, backed by solid work\"}",
"{\"title\": \"Clarity improved\", \"comment\": \"The authors have done a great job addressing the concerns I had about the clarity. Consequently, I have raised my score, whereas my fairly low confidence still remains.\"}",
"{\"title\": \"Summary of revised paper\", \"comment\": \"Thanks to the reviewers for all the helpful suggestions and comments. We have uploaded a new paper revision with the following key changes:\\n1. Updated the notation in Section 3 (\\u201cOur Data Generation Methodology\\u201d) for further clarity\\n2. Clarified the challenges in real-world Karel tasks\\n3. Reported the result of the Action-Only Augmented model on the original test set (decrease in accuracy by less than 2 percentage points)\\n4. Stated that the Calculator domain is a program induction task\\n5. Detailed the procedure of Section 3 in pseudocode in the appendix, Section 8.2\\n6. Revamped the proof in the appendix, Section 8.3 to account for the fact that we use empirical estimates of the distribution.\\n7. Reported the empirical uniformity of salient variables on the Calculator datasets after applying the procedure from Section 3\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your comment!\\n\\nWe agree that it would be interesting to see how models that use state sequences are affected by similar issues. When we started the project leading to this submission, the methods you cite were yet unavailable/we were unaware of them. While a full empirical analysis will be needed to study the issue, we hypothesize that these methods may be affected similarly; in particular, issues with the distribution of input/output examples in the training data will propagate to the distribution of state sequences, perhaps even amplifying them further (since the state sequences are longer).\"}",
"{\"title\": \"Response to review\", \"comment\": \"We would like to thank the reviewer for their helpful and insightful comments. Our responses to the specific concerns follow.\\n\\n> The the second paragraph two sets of silent variables are introduced X_1,...,X_n and Z_1,...,Z_m but never used again the rest of the paper.\\nWe have removed them from the paper.\\n\\n> In the third and forth paragraph details about the Karel domain are presented without the Karel domain having been introduced.\\nWe intend Section 3 to be a general description of the issues facing synthetic data generation that is independent of any given domain; however, to make the exposition clearer and more concrete, we used some examples from the Karel domain for illustration, for those who are already familiar with the domain from the related work.\\n\\n> It seems you are using rejection sampling to sample from a uniform distribution. Why can you not sample from a uniform distribution directly?\\nWe are attempting to make certain features (salient variables) uniform over the distribution of examples, in a way that cannot be easily handled generatively. For example, sampling Calculator expressions with uniform length requires a much more complicated algorithm than any of the standard algorithms for sampling programs from a CFG. Furthermore, the domain may place complicated restrictions on which subset of examples are valid. Our method allows for the usage of an underlying arbitrary sampling method, as long as all values of a given salient variable are sufficiently represented.\\n\\n> What do you mean with the notation X(s)?\\nIt means \\u201cthe value of the salient variable X within sample s\\u201d. However, we have replaced this notation in our new revision.\\n\\n> What is you proving in Appendix? Would maybe be clearer if you presented it as a theorem/lemma.\\nIn the appendix, we now show a probabilistic bound on the uniformity of the salient variable in the resulting distribution. We present it as a series of theorems and lemmas.\\n\\nWe have revamped the proof in the appendix to account for the fact that we use empirical estimates of the distribution over the salient variable X, rather than the true distribution. We now provide a full description of the algorithm as pseudocode, and have updated the notation throughout the description and the proof for further clarity. While the proof is now entirely new, we have nevertheless tried to ameliorate any past issues regarding notation.\\n\\n> However, I cannot find and strong arguments in the paper why this property should generalize to other problem settings. To me the analysis and experimental results seems to be tailored to the two problems settings used in the paper.\", \"we_believe_that_the_property_should_generalize_to_other_problem_settings_for_the_following_reasons\": \"1. Other domains/methods such as RobustFill [1] also use randomly-generated synthetic training data; the authors state in Section 3.3 that \\u201cIntuitively, it is possible to generalize [...] 
using randomly synthesized training because the model is learning function semantics, rather than a particular data distribution.\\u201d\\nNevertheless, the method\\u2019s reported performance on a manually-curated test set (Figure 4) is significantly lower than on a synthetic validation set drawn from the same distribution as the training data, which shows that the model adapts significantly to the particulars of the training distribution.\\nGiven that this method exhibits similar failures with a similarly generated synthetic training dataset, we expect it to also improve similarly given more carefully generated training data.\\n\\n2. Our Karel and calculator domains have little in common in terms of semantics; one is for inferring a program which controls an agent\\u2019s movements within a gridworld, whereas the other is learning to perform arithmetic. Nevertheless, the same method of defining salient random variables and making them more uniform throughout the training data increased generalization performance for both domains.\\n\\n3. Even though these domains have highly different semantics, they (and indeed, most other program synthesis tasks which involve trees) share some salient random variables such as length, depth of nesting, and number of operations.\\n\\n4. On the calculator domain, all salient variables that we thought of had a positive effect on eventual accuracy, suggesting that the result is not too brittle with respect to the choice of salient variable (though some work better than others) and therefore similar results will be easier to obtain on other domains.\\n\\n[1] RobustFill: Neural Program Learning under Noisy I/O. Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, Pushmeet Kohli. https://arxiv.org/abs/1703.07469\\n\\n> To achieve good generalization performance the important To me it seems that the important \\nWe weren\\u2019t sure about this point in the review; it appears part of the sentence was lost in a copy-and-paste. Could you clarify the question so that we can answer it?\"}",
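The rejection-sampling recipe discussed in this exchange (sample from the underlying DSL sampler, then accept or reject so that a chosen salient variable becomes roughly uniform) can be made concrete with a short sketch. The following Python is only an illustration of the general technique, not the authors' actual Algorithm from Section 3; `sample_from_dsl`, `salient`, and the acceptance rule are hypothetical stand-ins.

```python
import random
from collections import Counter

def uniformize(sample_from_dsl, salient, n_pilot=10000, n_keep=1000):
    """Thin samples from an arbitrary proposal so that the salient
    variable's empirical distribution becomes approximately uniform."""
    # Estimate P_q[X = x] empirically from a pilot batch (the paper's
    # revised proof reportedly accounts for this kind of estimate).
    pilot = [sample_from_dsl() for _ in range(n_pilot)]
    counts = Counter(salient(s) for s in pilot)
    p_hat = {x: c / n_pilot for x, c in counts.items()}
    p_min = min(p_hat.values())

    # Accept a sample with probability p_min / p_hat[X(s)]: frequent
    # values of X are kept rarely, rare values almost always, so the
    # kept samples have a roughly flat marginal over X.  A floor on
    # the acceptance probability (like the epsilon hyperparameter
    # mentioned elsewhere in the thread) would trade uniformity for
    # shorter waiting times.
    kept = []
    while len(kept) < n_keep:
        s = sample_from_dsl()
        x = salient(s)
        if x in p_hat and random.random() < p_min / p_hat[x]:
            kept.append(s)
    return kept
```

For the Calculator domain, `salient` could be as simple as `len`; the kept samples then have an X-marginal proportional to P_q[X=x] * p_min / p_hat[x], which is approximately constant over the support seen in the pilot batch.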
"{\"title\": \"Response to review\", \"comment\": \"We would like to thank the reviewer for their helpful and insightful comments. Our responses to the specific concerns follow.\\n\\n> (1) It requires manual curation of salient random variables. This sort of punts the decision of \\\"what should my sampling procedure be\\\" to \\\"what is my choice of salient variables to make uniform\\\". I agree that this is still an improvement.\\nA key contribution of our paper is the identification of issues with training and evaluation datasets of current neural program synthesis approaches, and a first step towards alleviating them using salient random variables. As the reviewer states, this is still an improvement over the status quo of randomly sampling programs from a DSL, and we leave the automatic discovery and curation of salient random variables to future work. Additionally, we found that all salient variables that we tried on the Calculator task had a positive effect on eventual accuracy, suggesting that the exact choice of salient variables is not significant to improve the overall results, though some work better than others.\\n\\n> (2) The procedure described for generating synthetic examples is essentially a rejection sampling algorithm [...]\\nAs stated in Section 3, the need for training examples to satisfy complex constraints in more complicated domains like Karel makes it very difficult to use other methods for generating random examples, while ensuring that a salient variable follows a particular distribution. Furthermore, a rejection sampling approach is easy to graft onto existing sampling approaches as we have done in this paper. Nevertheless, we can reduce the runtime needed for the procedure described in Section 3 by adjusting the epsilon hyperparameter (at the cost of decreasing the uniformity of the result).\\n\\n> Also, relatedly, I don't follow the description of correctness in section 8.2 at all\\nWe have significantly revised the description of the method in Section 3, and revamped the proof in Section 8.2 (now Section 8.3) to account for the fact that we use empirical estimates of the distribution over the salient variable X, rather than the true distribution. We now provide a full description of the algorithm as pseudocode, and have updated the notation throughout the description and the proof for further clarity. While the proof is now entirely new, we have nevertheless tried to ameliorate any past issues regarding notation.\\n\\n> Also, I believe there need to be conditions on q(s), e.g. such that min_x P_q[X = x] [...].\\nWe now state this condition in the proof.\\n\\n> I don't understand why even the better number (19.4%) is so low; the performance of the uniform model in table 1 tends to be much higher (in the 60% to 70% range). This would suggest that the uniform model perhaps is significantly *underweighting* important parts of the space. What is causing this?\\nThis seems to be the main point of confusion in the review and we have also addressed in the paper revision. 
While the model\\u2019s performance grows significantly from 11.1% to 19.4% on real-world tasks, it is important to note that the lower overall accuracy is not caused by the uniform model \\u201cunderweighting\\u201d important parts of the search space; rather, the following two challenges play a more important role on these problems:\\n- Many of the real-world problems require long programs to solve, which are intrinsically difficult for the model to synthesize correctly even if trained with more uniform data. For example, 86.11% of the programs in the real-world test set contain more than 10 tokens, whereas 75.56% of the synthetic test set does.\\n- In the real-world examples, the specification always contains fewer than 5 I/O pairs; indeed, many only contain 1 I/O pair. However, the training methodology for the model assumes that it is provided with a diverse set of 5 I/O pairs.\\n\\n> Finally, I am not sure I understand how the calculator example fits into this paper. Unless I misunderstand, it is not a program synthesis task, but rather a regression task. Clearly it does still depend on generation of synthetic data [...]\\nWe included the calculator example to show that both the notion of salient variables and making them more uniform while generating synthetic data generalize to different tasks/domains. The calculator example is not a program synthesis task, in that the model\\u2019s output is not a program; however, we would classify it under program induction, which is a closely related task and is often trained using synthetically generated data (for example, in the Learning to Execute paper [1], where one of the tasks was computing the sum of two numbers). In particular, both the Karel programs and the calculator expressions are generated from trees drawn from a context-free grammar.\\n\\nPlease let us know if this helped clarify the questions and concerns, and whether there are any more questions.\\n\\n[1] Learning to Execute. Wojciech Zaremba, Ilya Sutskever. https://arxiv.org/abs/1410.4615\"}",
"{\"title\": \"Response to review\", \"comment\": \"We would like to thank the reviewer for their helpful and insightful comments. Our responses to the specific concerns follow.\\n\\n> (1) No new model\\nIn this paper, we chose to focus on the impact of dataset generation and the training process on the performance of existing program synthesis models, which has largely been ignored in these neural synthesis works. We leave the impact of changes to the model for future work.\\n\\n> (2) The calculator example is relatively too trivial to represent the whole genre of implicit differentiable neural program synthesizer\\nWhile the calculator example is indeed fairly simple, we did not intend it to be a representative of all differentiable neural program synthesizers. Rather, we wanted to show that the results generalize to domains other than Karel. Furthermore, due to its relative simplicity, we were able to perform a wide variety of systematic experiments as reported in the paper.\\n\\n> (3) No statistical tests (such as chi-square test) to support the claim about uniformity (even on chosen salient variables) \\nWe have added a section to the appendix where we report the KL divergence between the uniform distribution and the generated data\\u2019s empirical distribution for the Calculator domain.\\n\\n> (1) What if the distribution of real-world programs are skewed and neural synthesizers are supposed to take advantage of their skewness?\\nWe fully agree that it would be ideal for neural synthesizers to take advantage of any skew present in the distribution of real-world programs, or more generally, the distribution of tasks that real users are want to solve; indeed, especially for program synthesis tasks from input-output examples, there may exist a large number of spurious programs that satisfy the constraints but not the user\\u2019s intent.\\n\\nHowever, as it is difficult and expensive to construct such real-world datasets, learning from synthetic datasets is still useful and the common paradigm currently taken in the research literature. In this paper, we want to point out problems with existing ways that the synthetic datasets are constructed, and suggest improvements to mitigate some of these problems.\\n\\n> (2) Why would you claim the calculator example is not a program synthesis task while intending to use it to represent another genre of program synthesis methods?\\nAs explained in the first paragraph of the introduction, we make a distinction between \\u201cprogram synthesis\\u201d (where the model outputs programs in a DSL) and \\u201cprogram induction\\u201d (where we train a differentiable model end-to-end to represent the behavior of a program). We consider the calculator example to be an example of the latter, which is why we stated it\\u2019s not a program synthesis task. We have updated the paper to clarify this point.\\n\\n> (1) To show that current salient random variables do not make the dataset theoretically uniform but are still approximate enough [...]\\nFor the Calculator domain, we show that making only one of the salient variables uniform still improves performance across multiple distributions (including distributions unrelated to the one used to generate the training data).\\n\\n> (2) In section 8.2 [...]\\nWe have revamped the proof in Section 8.2 (now in Section 8.3) to account for the fact that we use empirical estimates of the distribution over the salient variable X, rather than the true distribution. 
While the proof is now entirely new, we have nevertheless tried to ameliorate any past issues regarding notation.\\n\\nPlease let us know if this helped clarify the questions and concerns, and whether there are any more questions.\"}",
"{\"title\": \"Nice evaluations, empirically sound methodology, but no new model\", \"review\": \"This is a nice paper. It makes novel contributions by investigating (a) the problem of skewed dataset distributions in neural program synthesis, specifically program induction from given I/O pairs, and (b) the extent to which making them uniform would improve model performance.\\n\\nThe paper argues that there are inevitable and artificial sparsities as well as skews in existing datasets (e.g. pruning illegal I/O pairs, naive random sampling tends not to generate complex nested control-flow statements), and the principled way to minimize these sparsities and skews is to make distributions over salient random variables uniform. The authors evaluate their hypothesis empirically on two flavors of neural program synthesis methods: program inductions on explicit DSL represented by Karel, and implicit differentiable neural program synthesizers (such as stack, RAM, GPU as cited in section 2) represented by a Calculator example. In evaluations, they construct few challenging \\u201cnarrower\\u201d datasets and show baseline models perform significantly worse than models trained on datasets with uniform distributions (by 39-66 pp). Along this line, they also show uniform models consistently perform much better than baseline ones on other out-of-distribution test sets. To show how bad a model would perform if it were trained on a skewed training set, they train models on narrower datasets and evaluate them on different narrower sets.\", \"the_strength_of_this_paper_are\": \"(1) It has complete and empirically sound evaluations: both showing how much better uniform models would be and how much worse non-uniform models would be.\\n\\n(2) Although we might doubt the salient random variables are handcrafted and rejection sampling wouldn\\u2019t make the dataset completely uniform, they include evaluations on out-of-distribution datasets (e.g. CS106A dataset in section 5.2) to show that uniform models still perform better and thus their sampling scheme does cover some non-obvious sparsities and skews.\\n\\n(3) Despite the doubt on efficiencies of rejection sampling, they include both a proof and empirical results (section 8.3 and 8.4) to show they need sample O(1/\\u03b5) times before finishing.\", \"weaknesses\": \"(1) No new model. This work has solely using the existing model from Bunel et al. (2018) in the Karel domain and didn\\u2019t propose a new model that illustrates possibly a way to utilize/demonstrate the uniformity of dataset.\\n\\n(2) The calculator example is relatively too trivial to represent the whole genre of implicit differentiable neural program synthesizer (e.g. stack, GPU, RAM). 
\\n\\n(3) No statistical tests (such as a chi-square test) to support the claim about uniformity (even on the chosen salient variables)\", \"questions\": \"(1) What if the distribution of real-world programs is skewed and neural synthesizers are supposed to take advantage of its skewness?\\n\\n(2) Why would you claim the calculator example is not a program synthesis task while intending to use it to represent another genre of program synthesis methods?\", \"suggestions\": \"(1) To show that current salient random variables do not make the dataset theoretically uniform but are still approximate enough, why not construct some distinct held-out salient variables (such as memory/grid/marker query times, execution time) from existing ones, construct narrower test sets accordingly, and hopefully show uniform models still perform significantly better than the baseline?\\n\\n(2) In section 8.2, why not write the proportionality statement in two lines so that people wouldn\\u2019t be confused into thinking Pr[X=x] = 1 while intending to show Pr[X=x] \\u221d 1 (an arbitrary constant) so that Pr[X] is uniform?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice presentation of a serious issue, with some flaws\", \"review\": \"This paper provides a good presentation of a serious problem in evaluating (as well as training!) performance of machine learning models for program synthesis / program induction: considering specifically the problem of learning a program which corresponds to given input/output pairs, since large datasets of \\\"real-world\\\" programs typically do not exist it necessary to construct a synthetic dataset for training and testing; this requires both (a) generating programs, and (b) generating input/output examples for these programs. Enumerating either all possible programs or examples is typically impossible, and so a sampling scheme is used to simulate \\\"reasonable\\\" programs and examples. This may hinder generalization to other data not often produced by the sampling scheme.\\n\\nTo address this, the paper then argues that programs should be synthesized from a distribution which as as uniform as possible over a set of user-specified statistics (the \\\"salient variables\\\") as well as over the input space. Intuitively, this makes sense: maximizing the entropy of the synthetic data should provide good coverage over the entire input space. However, there are a few ways in which the particular approach is unsatisfying:\\n\\n(1) It requires manual curation of salient random variables. This sort of punts the decision of \\\"what should my sampling procedure be\\\" to \\\"what is my choice of salient variables to make uniform\\\". I agree that this is still an improvement.\\n\\n(2) The procedure described for generating synthetic examples is essentially a rejection sampling algorithm, and it will fail to generate examples in a reasonable timeframe if the original proposal distribution is highly non-uniform, or if the salient random variables include values which fall in the tail of the proposal distribution.\\n\\nAlso, relatedly, I don't follow the description of correctness in section 8.2 at all. What is meant by the \\\"= 1\\\" at the end of the line right before \\\"\\u2026 And thus\\u2026\\\"? Clearly P_r[X=x] cannot both equal 1, and equal k. Is the \\\"=1\\\" meant to only mean the summand itself? If so, please fix the notation. Also, I assume that k is meant to be the cardinality of the set {s: X(s) = x}, but this is not defined anywhere. Notational issues aside, unless the mapping X(s) from sample to salient variable is one-to-one, then I'm not clear how the P_q[X = X(s)] would relate to q(s). This should be made more clear. Also, I believe there need to be conditions on q(s), e.g. such that min_x P_q[X = x] must always be greater than zero.\\n\\n\\nThese issues aside, the empirical demonstrations on the Karel the Robot examples are nicely presented and make the point well. My primary question here would be around section 5, the \\\"real-world benchmarks\\\", where it is observed that the baseline model performs less well than re-training on a uniform / homogenized dataset. While it is nice that it performed better, I don't understand why even the better number (19.4%) is so low; the performance of the uniform model in table 1 tends to be much higher (in the 60% to 70% range). This would suggest that the uniform model perhaps is significantly *underweighting* important parts of the space. What is causing this? e.g. what do the salient variables look like for real-world examples?\\n\\n\\nFinally, I am not sure I understand how the calculator example fits into this paper. 
Unless I misunderstand, it is not a program synthesis task, but rather a regression task. Clearly it does still depend on generation of synthetic data, but that is more a different task (as described in section 2). I feel its inclusion somewhat dilutes the paper. Rather, it would be nice to see more discussion or investigation into the failure modes of these trained models; for example, looking deeper at the handling of control flow and recursion, or at whether particular values of salient variables tended to be correlated with success or failure under different train / test regimes.\\n\\n\\n\\n===== after updates =====\\n\\nThanks for the edits \\u2014 I believe the overall paper is more clearly presented now.\", \"i_still_think_it_is_a_stretch_to_consider_the_calculator_domain_is_a_program_induction_problem\": \"it is a regression problem, from an input string to an output integer, or alternatively a classification problem, since it computes the result mod 10. The only way I could understand this as a program induction problem is rather obliquely, if the meaning is that any system which is able to compute the result of the calculator evaluation has implicitly replicated internally, in some capacity, the sequence of instructions which are evaluated. I don't think this is really very clear though; for example, given two calculator programs, one a subprogram of another (e.g., \\\"4*(3+2)\\\" and \\\"3+2\\\"), do the resulting \\\"induced\\\" computations share the same compositional structure? The examples of program induction in section 2 are largely architectures which are explicitly designed to have properties which mimic conventional programming languages (e.g. extra data structures as memories, compositionality, \\u2026). In contrast, the calculator example in this paper simply uses an LSTM.\\n\\nThat said, I think it's still a great example! Learning a fast differentiable model which accurately mimics an existing non-differentiable model has tons of applications, and has exactly the same challenges regarding synthetic data. \\n\\n\\n\\nI have to say I find the new section 8.3 a bit intuitively challenging; e.g. it's not really clear how long a waiting time of 48 log(2|X|/\\\\delta) / (p|X|^2 z^2) is. But, to that end, I appreciate the empirical discussion in 8.4\\u20138.6.\\n\\nI've updated my review to increase my score \\u2014 I lean towards accepting this paper, as it is a timely contribution and I think it is important for future program synthesis papers to take the results here to heart. I've reduced my confidence slightly, as I have not fully reviewed the new proof in 8.3.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Lacking arguments why the proposed method generalizes well to other problem settings\", \"review\": \"The paper presents a methodology for improved program synthesis by generating datasets for program induction and synthetic tasks from uniform distributions. This method is evaluate on two problem settings.\\n\\nThe methodology is presented in section 3. Even though the outline does not seem to be complicated, the presentation in section 3 leaves me puzzled. The the second paragraph two sets of silent variables are introduced X_1,...,X_n and Z_1,...,Z_m but never used again the rest of the paper. In the third and forth paragraph details about the Karel domain are presented without the Karel domain having been introduced. It seems you are using rejection sampling to sample from a uniform distribution. Why can you not sample from a uniform distribution directly? What do you mean with the notation X(s)? What are you proving in Appendix? Would maybe be clearer if you presented it as a theorem/lemma.\\n\\nThe remaining part of the paper evaluates this methodology on two specific problem settings, the Karel domain and Calculator domain. The generalization performance is increased when trained on datasets generated by the method presented in the paper. However, I cannot find and strong arguments in the paper why this property should generalize to other problem settings. To me the analysis and experimental results seems to be tailored to the two problems settings used in the paper.\\n\\n==== After revision ====\\n\\nThe authors have done a great job addressing the concerns I had about the clarity. Consequently, I have raised my score, whereas my fairly low confidence still remains.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
r1xdH3CcKX | Stochastic Prediction of Multi-Agent Interactions from Partial Observations | [
"Chen Sun",
"Per Karlsson",
"Jiajun Wu",
"Joshua B Tenenbaum",
"Kevin Murphy"
] | We present a method which learns to integrate temporal information, from a learned dynamics model, with ambiguous visual information, from a learned vision model, in the context of interacting agents. Our method is based on a graph-structured variational recurrent neural network, which is trained end-to-end to infer the current state of the (partially observed) world, as well as to forecast future states. We show that our method outperforms various baselines on two sports datasets, one based on real basketball trajectories, and one generated by a soccer game engine. | [
"Dynamics modeling",
"partial observations",
"multi-agent interactions",
"predictive models"
] | https://openreview.net/pdf?id=r1xdH3CcKX | https://openreview.net/forum?id=r1xdH3CcKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJe-DEG4xE",
"Byxum4VEkV",
"HJlnDtffyN",
"HkejG6vcA7",
"H1xfvqwqCQ",
"BygrEuP907",
"rygJ5BPqC7",
"ByxsCxyn3m",
"ByeIhLLcnX",
"rJeK9qxY3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544983640830,
1543943199544,
1543805284456,
1543302418721,
1543301722334,
1543301164672,
1543300487170,
1541300434634,
1541199534248,
1541110416734
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1557/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1557/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1557/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1557/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1557/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1557/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1557/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1557/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1557/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1557/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a unified approach for performing state estimation and future forecasting for agents interacting within a multi-agent system. The method relies on a graph-structured recurrent neural network trained on temporal and visual (pixel) information.\\n\\nThe paper is well-written, with a convincing motivation and a set of novel ideas. \\n\\nThe reviewers pointed to a few caveats in the methodology, such as quality of trajectories (AnonReviewer2) and expensive learning of states (AnonReviewer3). However, these issues do not discount much of the papers' quality. Besides, the authors have rebutted satisfactorily some of those comments.\\n\\nMore importantly, all three reviewers were not convinced by the experimental evaluation. AnonReviewer1 believes that the idea has a lot of potential, but is hindered by the insufficient exposition of the experiments. AnonReviewer3 similarly asks for more consistency in the experiments.\\n\\nOverall, all reviewers agree on a score \\\"marginally above the threshold\\\". While this is not a particularly strong score, the AC weighted all opinions that, despite some caveats, indicate that the developed model and considered application fit nicely in a coherent and convincing story. The authors are strongly advised to work further on the experimental section (which they already started doing as is evident from the rebuttal) to further improve their paper.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting for ICLR but can benefit from further evaluation\"}",
"{\"title\": \"Thank you\", \"comment\": \"Dear Reviewer 2,\\n\\nThat\\u2019s great to hear. We\\u2019d like to thank you again for your very constructive comments, which have helped us improve the quality of the paper significantly.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thanks for addressing my comments. The social pooling mechanism improves Indep-RNN as expected, however, as you show, it's not better than your method. This makes the results stronger. Additionally, the plotted trajectories shine light on the behavior of the trajectories. The trajectories are still better than the baselines after this additional information. Given the authors' response, I have increased my score. It will be nice to see this work take a semi-supervised or unsupervised route in the future :)\"}",
"{\"title\": \"Our response\", \"comment\": \"We appreciate your constructive feedback, and very useful references!\\n\\n(1) Forecasting task:\\nWe provide the sampled trajectories in Figure 5 and Appendix. In particular, Figure 5(a) and Figure A2 show the multiple samples generated by Graph-VRNN. We observe the trajectories are relatively stable. For soccer data, since the perception task is more challenging and many players are not observed, we find the belief states to be uncertain for the first several steps (having more observed steps would help in this case). For basketball data, we find that the belief states for players are usually stable, but the ball is more uncertain (bottom right row of Figure 4). We suspect that it\\u2019s due to the movement of ball is much faster (which may be addressed by using a higher FPS).\\n\\n(2) Lack of baselines:\\nAs pointed out by the reviewer, Social-LSTM and Social-GAN work with trajectory data by default. However, their social-pooling mechanism can be used as an alternative to the relation network used in Graph-RNN. We use the pooling mechanism from Social-GAN, which is more recent and has no additional hyper parameters. We find it to perform slightly worse than Graph-RNN (which uses Relation Networks), but better than Indep-RNN. Note that the graph network module is a building block in our model, and RN can be replaced by other graph network architectures.\"}",
"{\"title\": \"Our response\", \"comment\": \"Thank you for your helpful feedback and suggested baselines!\\n\\n(1) Future prediction task:\\nThe gaussian distribution with 'time' as standard deviation is unlikely to work well in team sports, where there are clear patterns of tactics. It is also unable to generate the trajectories as shown in Figure 5(a), Figure A1 and Figure A2.\\n\\n(2) Comparison with Felson et al.:\\nWe agree that Felson et al. ICCV'17 is relevant and added this baseline to revision. We decide to compare against the FCN method from this paper, as the other method is based on hand-crafted patterns of players\\u2019 relative locations, and not suitable for data with heavy occlusions (e.g. due to camera FOV). The FCN method is very similar to our visual encoder, except that it uses the last observed visual features to predict future states (rather than learning a prior distribution). We can see from Table 2 \\u201cvisual-only\\u201d row that the approach is significantly worse than Graph-VRNN.\\n\\n(3) Trajectory visualizations:\\nWe have added qualitative evaluations of the sampled trajectories, which show that our method generates higher quality trajectories than baseline methods. Figure 5(a) and Figure A2 illustrate diverse trajectories generated by Graph-VRNN, which show there are collaborative behaviors of different players. Figure A3 compares Graph-VRNN with Indep-RNN and single RNN, which illustrates the importance of having an interaction module.\\n\\n(4) Goalie example:\\nYes a location prior would work very well for goalie, we will clarify this in text.\"}",
"{\"title\": \"Our response\", \"comment\": \"Thank you for your detailed feedback! We have uploaded a revision to address your concerns:\\n\\n(1) Weak/inconsistent results:\\nWe found that joint training of visual encoder and (V)RNN/Graph-(V)RNN lead to suboptimal performance for all methods, which has been addressed by our modified visual encoder and pre-training mechanism. We can see from Table 1, 2 and 3 that graph structure consistently helps for both datasets and both tasks. We also observe that stochastic modeling is more useful for Graph-RNN than vanilla RNN.\\n\\n(2) Missing related work:\\nWe have added the references in the related work section.\\n\\n(3) RNN baseline:\\nWe have added this baseline, we find that Graph-RNN outperforms single RNN in both current state estimation and future state prediction tasks.\\n\\n(4) Comparison with true distribution:\\nThis is a great idea. We are looking into the possibility to conduct such comparison for soccer world. Unfortunately we cannot do it for basketball since the true distribution is unknown.\"}",
"{\"title\": \"Our general response\", \"comment\": [\"We thank the three reviewers for your constructive feedback. The main contribution of this submission is a unified way to do state estimation and future forecasting at the level of objects and relations directly from pixels using Graph-VRNN. We focus on augmenting the experimental section based on your feedback. We hope that our revision addresses your concerns, in particular:\", \"We slightly modified the visual encoder to be first two blocks of a ResNet-18, followed by a spatial max-pooling. This encoder is now used by both soccer and basketball data. We pre-train the visual encoder to predict the states (locations) of only visible objects, and fine-tuned the encoder for different methods. We find that these modifications lead to higher accuracy for all methods and consistent behaviors of different methods for soccer and basketball data. Table 1, 2 and 3 shows that Graph-VRNN significantly outperform all baselines.\", \"We added three new baselines: (1) a vanilla RNN; (2) Social-RNN: Social-LSTM and Social-GAN do not work with visual inputs out of the box, but their social pooling mechanism can be used to replace the relation network we use in Graph-RNN. We use the pooling mechanism from Social-GAN, which is more recent and has no additional hyper parameters; (3) Felson et al. ICCV'17. The method is very similar to our visual only baseline, except that it directly predicts future states at multiple time steps from the last visual observation. We use the same visual encoder for a fair comparison.\", \"We added visualization of sampled trajectories in the main paper and also in Appendix. These visualizations show that Graph-VRNN is able to generate diverse and realistic trajectories.\", \"We replaced the cylinder players in soccer world with human models (driven by the same AI), which were not ready by the submission deadline.\"]}",
"{\"title\": \"Supervised learning model, experiment results are weak.\", \"review\": [\"The authors propose Graph VRNN. The proposed method models the interaction of multiple agents by deploying a VRNN for each agent. The interaction among the agents is modeled by the graph interaction update on the hidden states of the VRNNs. The model predicts the true state (e.g., location) of the agent via supervised auto-regressive learning. The proposed model can improve this estimation from partially-observed visual observations. In the experiment, the authors apply the proposed method to Basketball and Soccer data to model the positions of the players.\", \"The paper is clearly written. However, Section 3.2 needs to be elaborated more because using graph interaction update in VRNN is one of the main contributions.\", \"I see two main weaknesses. The first is that the states are learned by supervised learning where obtaining the state label (i.e., the agent locations) is very expensive. Indeed, the authors had to develop their own soccer game to obtain these labels. The second weakness is the weak/inconsistent experiment results. It seems not clear whether having the graph structure or stochastic modeling is really helping or not. For example, for basketball experiment, Graph-RNN works poorly. And, for soccer, Graph-VRNN works just as good as Graph-RNN. The authors explained that this is due to the simplicity of the player behavior (not much stochastic), but the result in Table 2 shows good performance for Graph-VRNN for future prediction task. All these make it difficult to buy the claimed argument. It is also a limitation that the model requires to know and fix the number of agents.\", \"As minor comments,\", \"in Table 1. Graph-RNN works better for soccer t=4, but not indicated in bold.\", \"Having a single RNN baseline will be helpful to compare with Graph-RNN.\", \"It is confusing to call s_t a belief state because it is observed not latent.\", \"In the qualitative results, I think it can be compared to the heatmap of true distribution.\", \"I think the following papers needs to be discussed as related works.\", \"https://arxiv.org/pdf/1806.01242.pdf\", \"https://arxiv.org/pdf/1802.03006.pdf\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting formulation; Need more evaluation\", \"review\": \"Summary: The paper proposes a method to predict the future state-spaces in a multi-agent system by combing the visual and temporal information using a mixed blend of Graph-Networks+VAE+RNN (G-VRNN) formulation. The proposed approach is evaluated on two sports datasets: (1). basketball sequences; (2). soccer sequences. The authors show how the overall formulation is better than each of individual components.\", \"pros\": \"1. the multi-agent setting is interesting, very natural, and has potential for many applications.\\n\\n2. formulation encodes information about different aspects: agents, location, temporal activities, and each agent's relation to other.\", \"cons\": \"1. The current evaluation is contrived. \\n\\n(a). the task for future state prediction in current basket-ball and soccer sequence is not very clear. A gaussian distribution defined with 'time' as standard deviation could give similar results? \\n\\n(b). no comparison with the existing approaches? I think the work of Felson et al. ICCV'17 is relevant for the given paper, and so it would be ideal to do evaluation on the datasets used in their work, and if possible compare the different baselines that they have used. \\n\\n(c). the goal is to predict the future state of an agent in a multi-agent setting, but it is not clear from the evaluation as how the presence of multiple agents influence the behavior of an individual. \\n\\n(d). a better way to demonstrate the future state-spaces could be through trajectory of ball or players (similar to ones shown by Walker et al ECCV'16, CVPR'14). The current qualitative analysis is not sufficient to understand what is happening in the proposed pipeline.\\n\\n(e). more challenging cases to demonstrate the proposed approach -- consider any multi-person tracking dataset, and use the proposed formulation to predict multiple trajectories (and hence state-spaces at varying time) for the people. An amazing result could be shown as how a person changes trajectory as a group of people pass by.\\n\\n2. The running example of 'location of goalie' is ambiguous. By design, goalie has to be near the goal post. Even if there is no visual information or any other information, one can safely say this thing?\\n\\nOverall I think the work has the potential to be on something really interesting. However, I think it needs solid experiments and is not yet ready for publication.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"1) Summary\\nThis paper presents a graph neural network based architecture that is trained to locate and model the interactions of agents in an environment directly from pixels. They propose an architecture that is a composition of recurrent neural networks where each models a single object independently and communicate with other for the overall environment modeling. The model is trained with a variational recurrent neural network objective that allows for stochasticity in the predictions while at the same time allows to model the current and future steps simultaneously. In experiments, they show the advantage of using the proposed model for tasks of tracking as well as forecasting of agents locations.\\n\\n\\n\\n2) Pros:\\n+ Novel recurrent neural network architecture to model structured dynamics of agents in an environment.\\n+ Outperforms baseline methods.\\n+ New dataset for partially observable prediction research.\\n\\n3) Cons:\", \"forecasting_task\": [\"The authors argue that a discretization needs to be performed because of the many possible futures given the past, and also provide an error measure based on likelihood. However, if trajectories are actually generated from these distributions, I suspect the many possible futures generated will be very shaky. Can the authors provide trajectories sampled from this? If sampling trajectories does not make sense somehow, can the authors comment on how we can sample multiple trajectories?\"], \"lack_of_baselines\": [\"The authors mention social LSTM and social GAN in the related work, however, no comparison is provided. From a quick glance, the authors of these papers work on trajectories. However, the \\u201csocial\\u201d principle in those papers is general since it\\u2019s done from the computed feature vector. Could it have not been used on top of one of the baselines? If not, could the authors provide a reason why this is not the case?\"], \"additional_comments\": \"As the authors mention, it would be nice to extend this paper to an unsupervised or semi-supervised task. Here are a couple of papers that may interest you:\", \"https\": \"//arxiv.org/abs/1806.07823\\n\\n4) Conclusion\\nOverall, the paper is well written, easy to understand, and seems to be simple enough to quickly reproduce. Additionally, the proposed dataset may be of use for the community. If the authors are able to successfully address the issues mentioned, I am willing to improve my score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
r1xurn0cKQ | Correction Networks: Meta-Learning for Zero-Shot Learning | [
"R. Lily Hu",
"Caiming Xiong",
"Richard Socher"
] | We propose a model that learns to perform zero-shot classification using a meta-learner that is trained to produce a correction to the output of a previously trained learner. The model consists of two modules: a task module that supplies an initial prediction, and a correction module that updates the initial prediction. The task module is the learner and the correction module is the meta-learner. The correction module is trained in an episodic approach whereby many different task modules are trained on various subsets of the total training data, with the rest being used as unseen data for the correction module. The correction module takes as input a representation of the task module's training data so that the predicted correction is a function of the task module's training data. The correction module is trained to update the task module's prediction to be closer to the target value. This approach leads to state-of-the-art performance for zero-shot classification on natural language class descriptions on the CUB and NAB datasets. | [
"zero-shot learning",
"image classification",
"fine-grained classification",
"meta-learning"
] | https://openreview.net/pdf?id=r1xurn0cKQ | https://openreview.net/forum?id=r1xurn0cKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rJlK037-fN",
"H1gayYnexN",
"BylpoAFn1E",
"Hkl-Ant21V",
"r1lbuMYn1V",
"rJxOv9dtkV",
"rJguR6XG07",
"r1gvx57MAm",
"H1lmxW7fAX",
"HkltO1Rdn7",
"r1lrIhav2Q",
"HyxukKdLhm"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1546890448610,
1544763620850,
1544490660837,
1544490184893,
1544487529433,
1544288864255,
1542761935868,
1542760943198,
1542758634539,
1541099377276,
1541033036898,
1540946144382
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1556/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1556/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1556/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1556/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1556/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1556/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1556/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1556/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1556/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1556/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1556/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1556/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Methods were compared using consistent features - Reported baseline methods already use the features from Zhu\", \"comment\": \"We appreciate the reviewer's description of the details of their decision.\\n\\nThe authors of Zhu et al. confirmed that the baseline methods in Zhu et al. that we compared to were trained using the same features of Zhu et al. These experiments were done in Elhoseiny et al, CVPR 2017. We were able to reach the authors of Zhu et al and Elhoseiny et al and verified this. \\n\\nSpecifically, the same semantic and visual representations were used for GAZSL, ZSLPP, ESZSL, WAC-linear, WAC-kernel, and ZSLNS (and also correction networks). Thus, the methods are compared using consistent features.\"}",
"{\"metareview\": \"This is a difficult decision, as the reviewers are quite polarized on this paper, and did not come to a consensus through discussion. The positive elements of the paper are that the method itself is a novel and interesting approach, and that the performance is clearly state of the art. While impressive, the fact that a relatively simple task module trained on the features from Zhu et al. can match the performance of GAZSL suggests that it is difficult to compare these methods in an apples-to-apples way without using consistent features. There are two ways to deal with this: train the baseline methods using the features of Zhu, or train correction networks using less powerful features from other baselines.\\n\\nReviewer 3 pointed this out, and asked for such a comparison. The defense given by the authors is that they use the same features as the current SOTA baselines, and therefore their comparison is sound. I agree to an extent, however it should be relatively simple to either elevate other baselines, or compare correction networks with different features. Otherwise, most of the rows in Table 1 should be ignored. Running correction networks in different features in an ablation study would also demonstrate that the gains are consistent.\\n\\nI think the authors should run these experiments, and if the results hold then there will be no doubt in my mind that this will be a worthy contribution. However, in their absence, I can\\u2019t say with certainty how effective the proposed method really is.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Novel approach, but needs stronger comparisons.\"}",
"{\"title\": \"Re: No Title\", \"comment\": \"For 1., GAZSL is the state of the art, right? And based on the text \\\"We use the published features from (Zhu et al., 2018)\\\". So doesn't this make the Correction Network's results comparable with GAZSL, which is the state of the art?\\n\\nGAZSL is the state of the art (prior to this paper submission). We use the published features from GAZSL. Thus, our model uses the same input at GAZSL yet is able to achieve better results than GAZSL.\"}",
"{\"title\": \"Re: Thanks for the response.\", \"comment\": \"Furthermore one odd thing to me is that Task model alone gets 43.8% in ablation Tab 3\\n\\nWe found that many of the additional contributions of Zhu 2018 are not necessary to achieve 43.8%. Specifically, the adversarial training and the adversarial losses are not necessary. Our version of Zhu 2018\\u2019s model for the task module only uses the L2 loss (called \\u201cvisual pivot\\u201d in Zhu 2018) and the classification loss with sparsity regularization. We use the exact same features as published by Zhu 2018. Our ablation studies show that the correction module achieves better performance than only using the task module. We can open source the code after our paper is published so that others can reproduce our task module.\"}",
"{\"title\": \"\", \"comment\": \"For 1., GAZSL is the state of the art, right? And based on the text \\\"We use the published features from (Zhu et al., 2018)\\\". So doesn't this make the Correction Network's results comparable with GAZSL, which is the state of the art?\\n\\nI agree that 2. is strange. I don't know who the authors of this paper are, but if they differ from (Zhu et al., 2018), it would seem unfair to penalize them for the results in (Zhu et al., 2018) being easy to improve upon...\"}",
"{\"title\": \"Thanks for the response.\", \"comment\": \"Thanks to the authors for the clarifications. I have outstanding concerns however.\", \"with_regards_to_the_experiments\": \"1. This paper uses a very advanced feature compared to all the other papers in the comparison besides GAZSL (Zhu 2018). For example according to Zhu\\u201918, this feature extracts bird parts, etc, while the competitors use holistic and generally more primitive image feature. This means that all the comparisons in Tab 1+2 are basically meaningless. We don\\u2019t know if the other compared approaches to ZSL there would perform better than the proposed Correction Network if they had access to the same feature. The right way to evaluate this is is to compare some prior methods upgraded with the feature used here; and also to run the current method with the older visual features used by some prior methods. In absence of such an evaluation, I would give this a definite reject if it was a vision conference. As it\\u2019s a methodology conference, I\\u2019m not so stringent, but still consider this to be a significant minus. Particularly given that the model is not well explained and analysed from a methodology and insight perspectives as a compensatory contribution.\\n\\n2. Furthermore one odd thing to me is that Task model alone gets 43.8% in ablation Tab 3. This is a bit weird, because from a very quick look at Zhu 2018, there are several contributions there besides the better feature, including adversarial learning, etc. If the current paper misses those other components, and just uses the same visual extractor, then the result should not be this good. Does this mean that: (a) The current model benefits from a discriminator-enhanced feature from Zhu\\u201918 (which got 43.7%), or (b) There is something great about the task module\\u2019s design that allows it to exploit the same input feature but perform better than Zhu\\u2019s GAN approach? If so this is surprising, because the task-module alone seems very vanilla. So the feature of the task net that provides this improvement needs some explanation.\"}",
"{\"title\": \"Re: Additional details\", \"comment\": \"- We found the task module performance improves slightly when the output of the task module is feed into a classifier with a single hidden layer that is also trained to classify samples from the task model\\u2019s training dataset.\\\" => I don't understand what this means. Isn't the output of the task module already trained to classify samples from its training dataset? So why is this additional single hidden layer needed?\\n\\nThe output of the task module is used to classify samples from the training dataset using L2 distance between the image sample and the class center as predicted by the task module. The task module is trained to predict the mean of the image samples in a class, given a text description of the class. This prediction is trained using L2 loss. The additional single hidden layer takes as input the predicted mean of image samples and predicts softmax class probabilities using cross entropy loss. This additional single hidden layer is also trained on image samples to predict softmax class probabilities.\", \"so_the_architecture_is\": \"linear layers -> predicted class mean -> additional single hidden layer -> softmax. The total loss is the sum of L2 and cross-entropy. A hypothesis as to why this improves performance is the classifier is more sensitive in certain dimensions (and their combinations) than others, and this loss is combined with the L2 loss which is otherwise dimension invariant. This additional single hidden layer improved the accuracy of the task module by 0.5% to 1% in absolute accuracy.\\n\\nThank you for the minor details. We appreciate your attention to detail.\"}",
"{\"title\": \"Response\", \"comment\": \"There are many typos. Auhtors definitely need to improve their writing and the layout of the paper [sic]\\n\\nWe corrected the writing typos and improved the layout of the paper in our revision.\"}",
"{\"title\": \"Response to Reviewer's comments\", \"comment\": \"8. The writing is very rushed. There are lots of writing and editorial errors. To name a few: P4 Extra \\u201cTask module is trained to minimise.\\u201d P4 \\u201c\\\\mu_u\\u201d Is repeated. Citation style \\u201cMohamed Elhoseiny & Elgammal\\u201d is wrong, check the bibtex.\\n\\nWe thank the reviewer for pointing out the writing and editorial errors. We have edited and revised the writing.\"}",
"{\"title\": \"Official Review\", \"review\": \"Summary: This paper proposes a \\u201cmeta-learning\\u201d approach for zero-shot learning. There is a Task Module that works in a conventional zero-shot way: Training to predict a class prototype using the auxiliary/text data description of that task. The new part is the added Correction Module that inputs both the target/zero-shot task description, the training task description, and the current prediction of the task module, and then outputs a correction vector that is added to the output of the task-module to produce the final output. The resulting system achieves state of the art results on zero-shot fine-grained classification (CUB and NAB).\", \"assessment\": \"Overall this might be a good idea worthy of publication at some point. But despite the good results, the current realisation is not well analysed about exactly how and why it works, with no insight being provided; and leaves some doubt about the validity of the comparative experiments. The writing is also very rushed. It is not ICLR standard yet.\", \"strengths\": [\"Interesting idea overall.\", \"Good results.\"], \"weaknesses\": [\"Poor clarity.\", \"Some experimental evaluation questions.\", \"Poor analysis.\"], \"comments\": \"1. The correction module inputs the full set of training features T_s (Alg1-L13). However the training dataset is fixed, therefore this input is effectively a constant. So its not clear how a constant input can possibly be useful. \\n1.1 Possibly this has something to do with the episodic training, but this is exactly the kind of thing that should be analysed and explained, but is not discussed at all.\\n2. The paper is sold as a meta-learning paper, but it\\u2019s not clearly explained what is the \\u201cmeta\\u201d part of the algorithm.\\n3. Its not explained anywhere how exactly the T_s, T_s^u, etc are fed into the correction network. Is it average pooling? It seems that simple average pooling is unlikely to be adequate given the large number (150) of classes in CUB.\\n4. There are no experimental details such as hyper parameters, network architecture, etc.\\n5. Based on the ablation study (Tab 2), the baseline task network without correction network already achieves state of the art results. Conceptually the task-network alone is a very standard \\u201cregression\\u201d based approach to ZSL of the type that people tried almost 10 years ago. So what is the explanation for why its so good? This makes the comparison to all the competitors in Tab1 suspect. If there is some reason (E.g., better image feature extractor or pre/post-processing) that makes the ultra simple baseline there already outperform SotA, then you have to ask how all the prior methods would perform if they were run with the same tweak. \\n6. Overall no insight provided about what kind of corrections are made, when they are useful, etc. This is important to provide insight about how/why correcting outputs can work.\\n7. There is nothing particularly unique about this setup for ZSL. It could equally be applied to correct outputs in the case of few-shot learning (CF: Prototype Networks). It would be more convincing if it was applied to both settings and analysed better for both.\\n8. The writing is very rushed. There are lots of writing and editorial errors. To name a few: P4 Extra \\u201cTask module is trained to minimise.\\u201d P4 \\u201c\\\\mu_u\\u201d Is repeated. 
Citation style \\u201cMohamed Elhoseiny & Elgammal\\u201d is wrong, check the bibtex.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea, but many flaws.\", \"review\": \"This paper presents an interesting idea by formulating the problem of zero-shot learning in a meta-learning framework. Specifically, the proposed model consists of two components: the task module and the correction module, where the former module learns to map the text description of a class to the sample mean and the latter one updates the predictions for unseen classes.\\n\\nThe presentation of this paper is very poor. Proposed meta-framework has some flaws. And, the experiments are not persuasive enough to demonstrate the significance of the proposed framework. \\n\\nThe proposed zero-shot classifier is based on the nearest centroid. Authors formulate the learning problem as mapping the text description of each class to sample mean of the data of the class. Within a meat-training instance, the training performance is based on L2 distance between the mapped mean and the sample mean of each class. This setup is wired. This because, no matter how many data (x, y pairs) we get, the proposed method only makes the prediction based on the pre-calculated mean. In other words, the \\\"number of samples\\\" in a meta-training dataset becomes the number of unique classes appears in training. For instance, if we have 10 classes in the $D_\\\\mathcal{S}$, and10000 samples per class, the proposed setup will consider the meta training only consist of 10 data points. \\n\\nIn addition, the proposed method heavily rely on the feature extractor of the image. The classification performance could be poor if two the mean of different classes close to each other. Even they are not, the proposed framework cannot provide sample-level generalization. \\n\\nAnother confusion I have is why the training of the task module is not based on a fixed correction module? \\n\\nThe experiments also have many problems. Authors need to clearly state how they construct meta-training, validation and testing instance. Since the proposed framework is a meta-framework, authors need to report their performance in different meta-train/test splits. The conventional split of CUB and NAB is only considered as a single split. How well the proposed framework generalizes to other meta splits? How well the proposed method performance to a generalized zero-shot setting? \\n\\nThere are many typos. Auhtors definitely need to improve their writing and the layout of the paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Original approach with strong results, but lacks many details\", \"review\": \"=== Post-rebuttal update ===\\n\\nThe authors' rebuttal provided many of the details I was seeking. I asked a few additional questions which were also recently addressed, and I encourage the authors to include these clarifications into the final draft of the paper.\\n\\nHence, I've increased my score for this paper.\\n\\n=== Pre-rebuttal review ===\\nThis paper presents a meta-learning approach to zero-shot learning. The idea is to train a correction module which is trained to produce a correction to the output of a previously trained task module. The hypothesis is that the correction should depend on the nature of the training data of the task module, and so the correction module receives as input a representation of the training data of the task module. An episodic approach is then used for training the correction module, whereby many different task modules are trained on various subsets of the total training data, the rest being used as unseen data for the correction module.\\n\\nThe proposed idea is original and the results are strong. Generally, I'd be inclined to see this paper published.\\n\\nHowever right now, the paper lacks A LOT of details on how the experiments were run. I would like to see these answered in the rebuttal, before I consider raising my rating for this paper:\\n- What are the architectures used for M_T and M_C?\\n- What distance functions was used for training?\\n- What optimizer was used for training?\\n- How was convergence established in the inner and outer while loops of algorithm 1?\\n- Text mentions that before evaluation, M_T is trained on all data in D_S. How is this done exactly (e.g. how is convergence assessed)?\\n- How is the T_S computed exactly?\\n- How expensive is it to run Algorithm 1 (i.e. to train the correction module)? Since a new task module M_T needs to be trained for each subset S^s, it seems like it might be expensive to run... if not, why?\\n\\nI would also strongly suggest the authors release their code if this paper ends up being published.\", \"in_summary\": [\"Pros\", \"Claims SOTA results on two good benchmarks for zero-shot learning\", \"Approach is original\", \"Cons\", \"Paper lacks a lot of methodological and experimental details\"], \"some_minor_details\": [\"\\\"We found the task module performance improves slightly when the output of the task module is feed into a classifier with a single hidden layer that is also trained to classify samples from the task model\\u2019s training dataset.\\\" => I don't understand what this means. Isn't the output of the task module already trained to classify samples from its training dataset? So why is this additional single hidden layer needed?\", \"Typos:\", \"on few shot learn => on few shot learning\", \"but needs not => but need not\", \"image image classification => image classification\", \"the the compatibility => the compatibility\", \"psuedo => pseudo\", \"\\\"The task module is trained to minimize\\\" => that reads like an unfinished sentence\", \"\\\\hat{\\\\mu}_U \\\\hat{\\\\mu}_U => \\\\hat{\\\\mu}_U\", \"inputted => input\", \"FOr => For\", \"it's inputs => its inputs\", \"otherhand => other hand\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Byldr3RqKX | Tinkering with black boxes: counterfactuals uncover modularity in generative models | [
"Michel Besserve",
"Remy Sun",
"Bernhard Schoelkopf"
] | Deep generative models such as Generative Adversarial Networks (GANs) and
Variational Auto-Encoders (VAEs) are important tools to capture and investigate
the properties of complex empirical data. However, the complexity of their inner
elements makes their functioning challenging to assess and modify. In this
respect, these architectures behave as black box models. In order to better
understand the function of such networks, we analyze their modularity based on
the counterfactual manipulation of their internal variables. Our experiments on the
generation of human faces with VAEs and GANs support that modularity between
activation maps distributed over channels of generator architectures is achieved
to some degree, and can be used to better understand how these systems operate and allow meaningful transformations of the generated images without further training. | [
"generatice models",
"causality",
"disentangled representations"
] | https://openreview.net/pdf?id=Byldr3RqKX | https://openreview.net/forum?id=Byldr3RqKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1e_gxZskV",
"SkxGivnqR7",
"Syen4wnqCX",
"rkxssS29Rm",
"HygZm9oph7",
"S1xBGosDh7",
"rkeUkmCLhQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544388592295,
1543321497872,
1543321396131,
1543320994891,
1541417496976,
1541024524894,
1540969181756
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1555/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1555/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1555/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1555/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1555/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1555/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1555/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper explores an interpretation of generative models in terms of interventions on their latent variables. The overall set of ideas seems novel and potentially useful, but the presentation is unclear, the goal of the method seems poorly defined, and the qualitative results (including the videos) are unconvincing.\\n\\nI recommend you put work into factoring the ideas in this paper into smaller ones. For instance, definition 1 is a mess. I would also recommend the use of algorithm boxes.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting ideas but unclear presentation\"}",
"{\"title\": \"Clarifications about our approach\", \"comment\": \"Dear Reviewer 3,\\nWe have rephrased the unclear sentences you pointed out, many thanks. Regarding the lack of clarity of the approach, we have considerably improved the explanation and rigorous formulation of our analysis in the revision. In particular, Definition 2 and 5 as well as equation (3) now describe in detail what is done. We would like to point out that what we are doing considerably differs from classical interpretability approaches, as it relies on changing variables inside the computational graph and assess how these changes modify the output of the generator. We claim that different meaningful aspects of the generated image can be intervened on independently in such a way, and our result on the CelebA support our claim surprisingly well. In particular, it was striking for us to see that intervening on different parts of the output image by acting the first convolutional layers (further from the image) was possible.\"}",
"{\"title\": \"Concerns addressed\", \"comment\": \"Dear Reviewer 1, thanks to your feedback, we improve the organization and clarity of the paper, moreover we added more quantitative analysis to the results. We provide below answers to your main concerns.\\n1.\\tWhat is the causal estimand?\\nVery good point, we clarified this in the revision by first defining unit-level counterfactuals in section 2 (Definition 2) and then introducing the hybridization operation as counterfactuals in section 3.1. Finally, the causal estimand (at the population level) is written in equation 1 and corresponds to the average absolute value of the unit level causal effect in the potential outcome framework.\\n2.\\tJustification for the number of clusters\\n Assessing the optimal number of clusters is a notoriously difficult and still debated problem in unsupervised learning. We addressed it by quantifying the consistency of the labelling provided by the clustering algorithm, when it is trained on different but overlapping datasets. One benefit of such approach is that this analysis can be applied to any clustering approach, which allowed us to compare the performance of the classical k-means algorithm with respect to our NMF based approach. This is described in the revised section 4.1, and the results depicted on Fig. 5 (in the appendix) suggest using NMF with 3 clusters is a reasonable choice as the consistency drops strongly for 4 clusters. \\n3.\\tSubjective interpretation of the results\\nAssessing objectively the performance of generative models is also still largely debated in the field. We have however performed quantitative analysis in this revision by investigating in Fig. 3 the magnitude of the causal effect as a function of the size of the modules that we create with clustering and intervene on. The results are interesting as they exhibit to some extent a linear dependency between the causal effect and the size of the cluster, that tends to become more complex for layers closer to the image.\\n4.\\tStrange layout\\nWe reduced and moved the optics and related work sections to introduction.\\n5.\\tAbstruse Definition 1\\nWe rewrote the Definition 1 and provided more extensive explanations below, however we could not see an easy way to lead with observed variables, as in our analysis (including new definition 2 and 5 and proposition 1), the mapping from latent space to observations is central. As we added in the comments below Definition 1, the present context is quite different from classical causality settings as we are given the whole generator architecture, so every variables, included latent ones can be observed by the user. In that context, emphasizing the deterministic mapping from the latent to the output variables seemed more natural to us.\\n6.\\tDeterministic mapping in structural equations\\nIt is correct, the deterministic mapping plays a key role for our counterfactual analysis, and follows from the very definition of structural equation models. We clarified this after the Definition 1 and added reference to chapter 7 of Pearl, 2009, where structural equation are first introduced in a deterministic setting.\"}",
"{\"title\": \"Concerns addressed\", \"comment\": \"Dear Reviewer 2, thanks to your comments we have made our counterfactual framework more precise. Here are concise replies to your concerns.\\n1)\\tYes the concept is consistent with counterfactual as defined by Pearl and with the potential outcome framework of Rubin. We added Definition 2 and 5 in order to precisely define counterfactuals and hybridization as a special case. Essentially, we based our framework on unit level counterfactuals (Pearl, 2014) consisting in: a) assigning the distribution of latent variables to a deterministic value obtained on a single sample of the latent variables (called unit in this framework). 2) intervening on one or several variables of the causal model. In the case of hybridization, the intervention consists in assigning to the output values of a subsets of channels in a layer to the value they take for another sample of the latent variables.\\n2)\\tWe updated Definition 1 and provided more explanations below. We now draw and explicit connection between disentanglement and interventions in the context of causal models with the newly introduced Proposition 1.\\n3)\\tWe described now rigorously hybridization as an intervention in section 3.1, for which we also improved and simplified the explanation. We also improved section 3.2 by giving a mathematical definition of influence maps (equation 3), and connected it to causal effects computed in the potential outcome framework.\"}",
"{\"title\": \"causality based investigation of the modular structure of the deep generative model\", \"review\": \"The work provides a way to investigate the modular structure of the deep generative model. The key concept is the \\u201cdistribute over channels of generator architectures\\u201d.\", \"strong_points\": \"1) using the causality to investigate the modular of the deep generative model. \\n2) the key concept is interesting and straightforward. \\n3) the observations in the experiments are interesting. \\n\\nBut I have the following concerns, \\n1) the concept of counterfactual is consistent with that in the causality context? \\n2) more details of the causal model of the deep learning are needed,\\n3) more details of section 3.1 and 3.2 are needed, especially why these processes are proper interventions?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting concept but not ready for publication.\", \"review\": \"This paper proposes to examine generative adversarial networks by using counterfactual reasoning. The authors propose to examine modularity through the lens of interventions on the generative networks. After observing that the nodes within the generative network obey a deterministic relationship, they propose a proxy for intervention which takes samples and creates \\u201chybrid\\u201d samples by replacing the activation output of one sample with the others. Given the vast number of nodes that exist within a generative network, the authors propose a heuristic for choosing the nodes to perturb.\\n\\nI found the underlying premise of this paper to be very strong (identifying modularity in generative networks), however I think there is a substantial amount of work that should go into this paper before acceptance. While the authors begin by working within the framework of causal reasoning there is no mention of what the effect is that they are seeking to measure, i.e. what is the causal estimand here? The influence maps provide an intuitive answer to this, but not one that defines a clear estimand. I would like to see additional evaluation. The evidence provided largely leaves the reader to interpret results subjectively, rather than providing clear evidence. I was also uncomfortable with the selection on hyperparameters (3 clusters). It would be very nice to either have a selection criterion or show the sensitivity of the proposed methodology to other choices. \\n\\nOverall, I think this is an interesting idea in a very important area, but one that is not quite ready for publication.\", \"some_editorial_comments\": \"The layout of this paper is slightly strange. After the introduction, the authors introduce the notion of disentanglement and lead with an example from optics. This motivation should either be moved to the introduction or removed. After the definitions the authors jump into a related work section that feels slightly disjointed from the previous section.\\n\\nI found definition 1 to be abstruse. In addition there are a couple of typos that should be addressed (\\u201cconsists in a distribution\\u201d \\u2192 \\u201cconsists of a distribution\\u201d). It is non-standard to lead with the latent variables. I think it makes for a much easier narrative to describe the observed variables and structure first, before carrying on to the latent variables. Additionally, I believe you are stating an observation made by Pearl (2001) that after observing the noise variable, relationships become deterministic. This is slightly non-obvious from the wording used (and is also missing the proper reference).\", \"parens_are_missing_from_the_following_citation\": \"\\u201cgenerative models encountered in machine learning Besserve et al. (2018).\\u201d \\u2192 \\u201cgenerative models encountered in machine learning (Besserve et al., 2018).\\u201d\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The authors of this paper propose a method for assessing modularity in deep networks and more specifically on deep generative networks.\", \"review\": \"AFTER REBUTTAL:\\nI think that in its current version the paper is not yet ready for publication. Several issues have been raised by fellow reviewers as well. I think that they are not trivial and they regard key aspects like paper structure, quality of exposition and experimental analysis. I have detailed my initial opinion in response to the author request for more details. I hope this will serve as useful guidelines for improving the paper in the future. \\n\\n------------\\nThe method tackles the problem of interpretability that is a very important issue for usually black-box deep networks. Unfortunately it is not very clear how is the achieved. I have read several times the part explaining the influence maps and the clustering based on them and it still doesn't make a lot of sense to me. I think that part has to be better justified and exposed. Moreover, results do not support the claim which makes me doubt even more about how effective the method proposed actually is. In conclusion, I think that better exposition and more solid experimental analysis is needed.\", \"also_please_check_some_writing_problems\": \"> Introduction: \\n\\\"to acquire a generative function mapping a latent space (such as Rn)\\\" > difficult to read, rephrase. \\n\\\"making it difficult to add human input\\\" > confusing. What do you mean by human input? I assume you refer to having control to make decisions about design. \\n\\n> Section 3.1\\n\\\"the internal variable may leave the manifold it is implicitly embedded in as a result of the model\\u2019s training\\\" : not clear, rephrase.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
S1e_H3AqYQ | Exploiting Cross-Lingual Subword Similarities in Low-Resource Document Classification | [
"Mozhi Zhang",
"Yoshinari Fujinuma",
"Jordan Boyd-Graber"
] | Text classification must sometimes be applied in situations with no training data in a target language. However, training data may be available in a related language. We introduce CACO, a cross-lingual document classification framework for related language pairs. To best use limited training data, our transfer learning scheme exploits cross-lingual subword similarity by jointly training a character-based embedder and a word-based classifier. The embedder derives vector representations for input words from their written forms, and the classifier makes predictions based on the word vectors. We use a joint character representation for both the source language and the target language, which allows the embedder to generalize knowledge about source language words to target language words with similar forms. We propose a multi-task objective that can further improve the model if additional cross-lingual or monolingual resources are available. CACO models trained under low-resource settings rival cross-lingual word embedding models trained under high-resource settings on related language pairs.
| [
"cross-lingual transfer",
"character-based method",
"low-resource language"
] | https://openreview.net/pdf?id=S1e_H3AqYQ | https://openreview.net/forum?id=S1e_H3AqYQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1lSGMRxe4",
"Bygm3enFCQ",
"Hyl7wenF0m",
"H1xySl3F0X",
"Sklq012KAX",
"rJe977-j2Q",
"Hylf35yihX",
"S1gw8ImdhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544770061059,
1543254187169,
1543254107451,
1543254070728,
1543253970224,
1541243681614,
1541237418506,
1541056079035
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1554/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1554/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1554/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1554/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1554/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1554/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1554/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1554/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper's contribution lies in using cross-lingual sharing of subword representations for improving document classification. The paper presents interesting models and results.\\n\\nWhile the paper is good (two out of three reviewers are happy about it), I do agree with the reviewer who suggests the experimentation with relatively dissimilar languages and showing whether or not the approach works for those cases. I am also not very happy with the author response to the reviewer. Moreover, I think the paper could improve further if the authors presented experiments on more tasks apart from document classification.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta Review\"}",
"{\"title\": \"Updates\", \"comment\": \"We sincerely thank all reviewers for the useful reviews. We have uploaded a revised paper to address some of the questions and suggestions. We experiment with two additional models:\\n1. A lightly-supervised monolingual model trained on fifty labeled target language document (SUP in Table 1).\\n2. A combined model that adds CLWE as additional features for the CACO classifier (COM in Table 1). This model achieves a significantly higher average test accuracy, which shows our model is useful even when we have enough resources to train a good CLWE.\\n\\nPlease see our responses for detailed discussion.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your review!\\n\\nReviewer 3 comments that our method are \\u201cunexciting\\u201d and similar to Pinter et al. (2017). We believe the novelty lies in our application to cross-lingual document classification and our multi-task objective. The objective proposed by Pinter et al. (2017) is only one of the three auxiliary tasks, and they only apply their model to monolingual tasks.\\n\\nReviewer 3\\u2019s main concern is that our experiments do not cover enough language pairs (with different amount of similarities). We have made our best effort to cover a diverse set of language pairs with. While we wish to investigate more language pairs, we cannot find a dataset for further experiments. RCV2 is a standard benchmark of cross-lingual document classification, and yet most of the languages (with enough labeled documents) are Indo-European. Please let us know if there are other text classification datasets with more languages.\\n\\nTransferring between similar languages is more than an \\u201cacademic exercise\\u201d. Sometimes related languages have very different amount of resources. For example, Hindi has almost ten times as many speakers as Urdu. In our experiments, we construct datasets for truly low-resource language such as Tigrinya and experimentally demonstrate the effectiveness of our method.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your review!\\n\\nReviewer 2 asks how our model performs when using the same resources as the CLWE-based models. In our preliminary experiment, using a larger dictionary slightly improves test accuracy for the CACO models, but the improve becomes marginal as the dictionary is larger. Note that CLWE-based models are strictly more expressive than character-based models and have many more parameters (CLWE learns a separate vector for each word). Therefore, CLWE-based models are more suitable for high-resource settings, while our model is specifically designed for the low-resource setting, which we focus on in this paper.\\n\\nOur model is still useful in the high-resource setting though. In our updated draft, we experiment on feeding CLWE as extra features to a CACO classifier, and the test accuracies are significantly higher (on average) than only using CLWE as features (COM in Table 1). Therefore our model is useful even when we have enough resources to train a good CLWE.\\n\\nReviewer 2 asks what happens if we apply parallel projection on RCV2. In our preliminary experiment, using parallel project does not improve test accuracy on RCV2, because we already have a rather large set of high-quality labeled data. Therefore, we choose not to apply PP on RCV2.\\n\\nReviewer 2 asks about how the lambdas in Eq. 11 are tuned. As mentioned in the appendix, we use the same hyperparameters (including the lambdas) for all language pairs, and they are tuned on a held-out set of one \\u201cdev\\u201d language pair (it-es). In particular, we find it helpful to use a smaller lambda_e, which implies that the mimick task is less helpful than the other two auxiliary tasks.\\n\\nReviewer 2 asks why language identifiers hurts the performance. Using language identifier allows the embedder to behave differently for the two languages. In practice, this added expressiveness could lead to overfitting the training dictionary. Consequently, the embedder might assign very different representations to orthographically similar words from the two languages. This could prevent generalization through orthographic features and decrease test accuracy.\\n\\nReviewer 2 asks why our experiment results are asymmetric, and why the performance gains are better for some language pairs. In general, the effectiveness of *any* existing cross-lingual transfer technique varies across languages, and there is no guarantee that the results should be symmetric. In our case, we hypothesize the differences in morphology between languages plays an important role. This is an important research question that we wish to investigate in future work.\\n\\nReviewer 2 asks why using two source languages helps and what happens when we further increase the number of source languages. One simple explanation for the accuracy improvement is that training on two source languages has a regularization effect and prevents the model from overfitting to a particular language. Unfortunately, our dataset has a limited number of languages, and we could not experiment with more (related) source languages. Please let us know if there are other text classification datasets with more languages.\\n\\nIn our updated draft, we try to clarify reviewer 2\\u2019s questions to provide more explanations for experiment results. Please let us know if you have any additional questions or suggestions.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your review!\\n\\nReviewer 1 mentions that the model components are not new. It is true that our model is built on existing models. However, the novelty of our work lies in the combination of these techniques and the application to cross-lingual document classification.\\n\\nFollowing Reviewer 1\\u2019s suggestion, we add a baseline that is trained on a small set of 50 labeled documents in the target language. In general, our CACO models perform on par with this lightly-supervised target language model (SUP in Table 1 in our new draft). We hope this further demonstrates the effectiveness of our method. We cannot apply this baseline on the LORELEI dataset since it is too small to split further.\"}",
"{\"title\": \"Straightforward model, sub-par experiment setup\", \"review\": \"The paper proposes to transfer document classifiers between (closely) related languages by exploiting cross-lingual subword representations in a cross-lingual embedder jointly with word-based classifier: the embedder represents the words, while the classifier labels the document. The approach is reasonable, albeit somewhat unexciting, as the basic underlying ideas are in the vein of Pinter et al. (2017), even if applied on a different task.\\n\\nThe main concern I have with the paper is that it leaves much open in terms of exploring the dimension of (dis)similarity: How does the model perform when similarity decreases across language pairs in the transfer? The paper currently offers a rather biased view: the couplings French-Italian-Spanish, Danish-Swedish are all very closely related languages, and Amharic-Tigrinya are also significantly related. Outside these couplings, there's a paragraph to note that the method breaks down (Table 5 in the appendix). Sharing between Romance and Germanic languages is far from representative of \\\"loosely related languages\\\", for all the cross-cultural influences that the two groups share.\\n\\nWhile the experiment is reasonably posed, in my view it lacks the cross-lingual breadth and an empirical account of similarity. What we do in cross-lingual processing is: port models from resource-rich to low-resource languages, and to port between very similar languages that already have resources is a purely academic exercise. This is not to say that evaluation by proxy should be banned, but rather that low-resource setups should be more extensively controlled for.\\n\\nThus, in summary, a rather straightforward contribution to computational modeling paired with sub-par experiment setup in my view amounts to a rejection. The paper can be improved by extending the experiment and controlling for similarity, rather than leaving it as implication.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Need more insights\", \"review\": [\"Summary: The authors address the task of cross language document classification when there is no training data available in the target language but data is available a closely related language. The authors propose forming character-based embeddings of words to make use of sub-word similarities in closely-related languages. The authors do an extensive evaluation using various combinations of related languages and show improved performance. In particular, the performance is shown to be competitive with word-based models, which are tied to a requirement of resources involving the original language (such as MT systems, bilingual lexicons, etc). The authors show that their results are boosted when some additional resources (such as bilingual dictionaries of minimal size) are used in a multi-task learning setup.\", \"I would have liked to see some comparison where your model also uses all the resources available to CLWE based models (for example, larger dictionary, larger parallel corpus, etc)\", \"It is mentioned that you used parallel projection only for Amharic as for other languages you had enough RCV2 training data. However, it would be interesting to see if you still use parallel projection on top of this.\", \"I do not completely agree with the statement that CACO models are \\\"not far behind\\\" DAN models. IN Table 1, for most languages the difference is quite high. I understand that your model uses fewer resources but can it bridge the gap by using more resources? Is the model capable of doing so ?\", \"How did you tune the lambdas in Eqn 11? Any interesting insights from the values of these lambdas? Do these lambda values vary significantly across languages ?\", \"The argument about why the performance drops when you use language identifiers is not very convincing. Can you please elaborate on this ?\", \"Why would the performance be better in one directions as compared to another (North Germanic to Romance v/s ROmance to North Germanic). Some explanation is needed here.\", \"One recurring grievance that I have is that there are no insights/explanations for any results. Why are the gains better for some language pairs? Why is there asymmetry in the results w.r.t direction of transfer ? In what way do 2 languages help as compared to single source language? What is you use more that 2 source languages?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Well-presented but some shortcomings in experiments\", \"review\": \"Overview:\\n\\nThis paper proposes an approach to document classification in a low-resource language using transfer learning from a related higher-resource language. For the case where limited resources are available in the target low-resource language (e.g. a dictionary, pretrained embeddings, parallel text), multi-task learning is incorporated into the model. The approach is evaluated in terms of document classification performance using several combinations of source and target language.\", \"main_strengths\": \"1. The paper is well written. The model description in Section 2 is very clear and precise.\\n2. The proposed approach is simple but still shows good performance compared to models trained on corpora and dictionaries in the target language.\\n3. A large number of empirical experiments are performed to analyse different aspects and the benefits of different target-language resources for multi-task learning.\", \"main_weaknesses\": \"1. The application of this model to document classification seems to be new (I am not a direct expert in document classification), but the model itself and the components are not (sequence models, transfer learning and multitask learning are well-established). So this raises a concern about novelty (although the experimental results are new).\\n\\n2. With regards to the experiments, it is stated repeatedly that the DAN model which are compared to uses \\\"far more resources.\\\" The best ALL-CACO model also relies on several annotated but \\\"smaller\\\" resources (dictionaries, parallel text, embeddings). Would it be possible to have a baseline where a target-language model is trained on only a small amount of annotated in-domain document classification data in the target language? I am proposing this baseline in order to answer two questions. (i) Given a small amount of in-domain data for the task at hand, how much benefit do we get from additionally using data from a related language? (ii) How much benefit do we get from using target-language resources that do not address the task directly (dictionaries, embeddings) compared with using a \\\"similar\\\" amount of data from the specific task?\", \"overall_feedback\": \"This is a well-written paper, but I think since the core of the paper lies in its empirical evaluation, the above experiments (or something similar) would greatly strengthen the work.\", \"edit\": \"I am changing my rating from 5 to 6 based on the authors' response.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HyePrhR5KX | DyRep: Learning Representations over Dynamic Graphs | [
"Rakshit Trivedi",
"Mehrdad Farajtabar",
"Prasenjeet Biswal",
"Hongyuan Zha"
] | Representation Learning over graph structured data has received significant attention recently due to its ubiquitous applicability. However, most advancements have been made in static graph settings, while efforts to jointly learn the dynamics of the graph and dynamics on the graph are still in their infancy. Two fundamental questions arise in learning over dynamic graphs: (i) How to elegantly model dynamical processes over graphs? (ii) How to leverage such a model to effectively encode evolving graph information into low-dimensional representations? We present DyRep - a novel modeling framework for dynamic graphs that posits representation learning as a latent mediation process bridging two observed processes namely -- dynamics of the network (realized as topological evolution) and dynamics on the network (realized as activities between nodes). Concretely, we propose a two-time scale deep temporal point process model that captures the interleaved dynamics of the observed processes. This model is further parameterized by a temporal-attentive representation network that encodes temporally evolving structural information into node representations which in turn drives the nonlinear evolution of the observed graph dynamics. Our unified framework is trained using an efficient unsupervised procedure and has the capability to generalize to unseen nodes. We demonstrate that DyRep outperforms state-of-the-art baselines for dynamic link prediction and time prediction tasks and present extensive qualitative insights into our framework. | [
"Dynamic Graphs",
"Representation Learning",
"Dynamic Processes",
"Temporal Point Process",
"Attention",
"Latent Representation"
] | https://openreview.net/pdf?id=HyePrhR5KX | https://openreview.net/forum?id=HyePrhR5KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkeDF1P5DB",
"rklkdoLYDH",
"SklhlP1twS",
"SygXQ2IoYN",
"H1l72xneg4",
"S1g3PD-L07",
"ryx61DWL0X",
"SkePF8Z8CQ",
"SylfmIbU07",
"HygUKS-807",
"S1g5-1zxRm",
"rkl_EsCKpX",
"BJgv8BVFTm",
"rygMqV4YTX",
"Sygt67NF67",
"BJeonJJo3X",
"SyeBsVc93m",
"rkx45Fl3jX",
"S1eidr-Eom",
"HJg59orrqQ",
"SkeXykWAYm"
],
"note_type": [
"comment",
"official_comment",
"comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_comment",
"comment",
"official_comment",
"comment"
],
"note_created": [
1569513343320,
1569446759409,
1569416948166,
1554897947092,
1544761514669,
1543014244016,
1543014116599,
1543014014676,
1543013913902,
1543013758147,
1542622977808,
1542216495838,
1542174031431,
1542173833741,
1542173633087,
1541234610710,
1541215388546,
1540258188499,
1539736947435,
1538771857735,
1538293466597
],
"note_signatures": [
[
"~Boris_Knyazev1"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"~Boris_Knyazev1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1553/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1553/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1553/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1553/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1553/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1553/Authors"
],
[
"~Michael_Bronstein1"
]
],
"structured_content_str": [
"{\"comment\": \"Thanks for a response. Most of the issues were clarified to us before, and we really appreciate that. I think, however, that those clarifications should be made more accessible to other people either in the form of responses here or in the updated paper. I'll try my best to enumerate most of them here. My list applies to the Social Evolution dataset and dynamic link prediction mainly, and may or may not apply to other cases.\\n\\n/* I admit that some of my misunderstandings and questions come from the fact that \\\"Temporal point processes\\\" are still something mysterious to me. There are a lot of materials on that in the works you cite, but a more clear connection (and better, quantitative comparison) between neural networks, specifically recurrent neural networks, and point processes would be appreciated. As far as I understand, the main advantage is the support of continuous time scale, which is great. However, to me (as a neural network person), it seems to be easier to adapt an RNN to support continuous time scale. There seems to a literature on that [1]. So, it would be interesting to know your thoughts regarding that. */\\n\\nI've collected 11 issues/comments so far. There is, of course, no rush to address them!\\n\\n1. Sign typo in Section 5.2 in the conditional density formula (should be exp(-...) as in Section 2.2). Also, there are too many inline equations, so it's hard to refer to them and it makes the reading experience more challenging.\\n2. Computing the exponential term in that formula in 5.2 seems to be extremely expensive, because you need to sum over all possible events between t_bar and t. It seems that you assume to sum only over events involving nodes u and v.\\n3. Unfortunately, most (>99%) events in the Social Dataset are symmetric Proximity events, but lambda(u,v) is not equal lambda(v,u), so it might make sense to take an average of them or something like that both during training and testing.\\n4. In 5.2 \\\"Hence, when ranking the entities, we remove any entities that creates a pair already seen in the test\\\" seems to be not applicable to the Social dataset, because it's very dense.\\n5. Many results are reported only on plots (e.g., Figure 2), so it's very hard to compare numerically to what we obtained.\\n6. The number of Association events in the test set is extremely small (~70), so I'm not sure if the comparison in Figure 2 is reliable. Also, it's not clear what the error bar actually stands for: is it std over multiple runs or time slots or something else.\\n7. Most of the baselines seem to be worse or on pair with the random prediction, which should be MAR~42 for 84 nodes in the dataset. This basically says that the baselines do learn anything useful. This is a little bit weird, however, we got similar results in our experiments with some static methods.\\n8. The Exogenous term in Eq. (4) depends on the difference between t and t_bar. It means that, for example, if you use seconds as the units, this difference can be extremely huge in some cases and the embeddings will go to NaN or be very unstable during training. I think to compute this term, it has more sense to represent t and t_bar as a vector [year, month, day, hour, minute, second, etc.] or something like that, but I'm not sure if you do that.\\n9. There are node features in the dataset, and the paper does not say if you use them or not.\\n10. Not all hyperparameters are mentioned for the DyRep model: learning rate, exact number of epochs, optimizer, weight decay, etc. 
Also, reporting the range of parameters is fine, as you do for baselines, but the final parameters that you found to be the best are also important, because it can be very expensive to run a hyperparameter search. Also, it's not clear how the search is performed, because there is no validation set.\n11. The intuition behind using the log in Section 4 in the loss is not clear. Why do you need it for the first term and not for the second?\n\n[1] Zhibin Yu, Dennis S. Moirangthem, and Minho Lee, Continuous Timescale Long-Short Term Memory Neural Network for Human Intent Understanding\", \"title\": \"Thanks for a response!\"}",
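(Editorial aside.) Points 1-2 above concern the conditional density f(t) = lambda_uv(t) * exp(-∫ Λ(s) ds) used for time prediction. A rough Monte Carlo sketch of that computation is below; restricting the total intensity Λ to events involving u and v, as the comment speculates, is one of the assumptions such an implementation has to make explicit:

```python
import numpy as np

def cond_density(lmbda_uv, lmbda_total, t_bar, t, n_samples=100):
    # Monte Carlo estimate of the survival integral over (t_bar, t).
    s = np.random.uniform(t_bar, t, n_samples)
    integral = (t - t_bar) * np.mean([lmbda_total(si) for si in s])
    # Conditional density: intensity at t times the survival probability.
    return lmbda_uv(t) * np.exp(-integral)
```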
"{\"title\": \"Response\", \"comment\": \"Thank you for your comment and efforts on studying the data and extensions of our work.\\nWe will make the original implementation and data available soon. However, we want to note that\\nthe data used in this extension is different in terms of statistics/pre-processing from our data and the performance doesn't seem to worsen a lot (We report results in temporal slots and performance will be slightly worse if we use single number average as done in extension). The performance with simple stats has probably to do with high density and recurrence (~2M in our case) in dataset but will look in more detail. Also, we checked the current version uploaded here and we believe it lists all assumptions and does not have any incorrect formulae. But if you point to the issue, we would be happy to rectify anything that you found incorrect.\"}",
"{\"comment\": \"We tried to reimplement this method here https://github.com/uoguelph-mlrg/LDG . Please feel free to report issues or submit PRs. Our implementation is not very clean, so I hope I will find time to clean it up in the future.\\n\\nIt's an interesting method and worth exploring further, but the presentation could be better. As a result, unfortunately, we were unable to reproduce it, even after several rounds of discussion with the authors, because there are many implementation tricks and assumptions, as well as typos in some formulas in the paper. We appreciate their detailed help though. We only tried the Social Evolution dataset and the link prediction task. It's also extremely slow to train (in our implementation), because you need to loop over all training events and it's hard to parallelize it without taking more assumptions. The Social Evolution dataset is also a challenging dataset in terms of machine learning due to its very noisy (especially Proximity events) and imbalanced events, but I guess it's a typical case in practice. We found that you can get extremely good results without any training by just computing basic statistics of events in the training set. Please see our tech report \\\"Learning Temporal Attention in Dynamic Graphs with Bilinear Interactions\\\" at https://arxiv.org/abs/1909.10367 for details about that, where we also show that you might not need the ground truth association graph (i.e. CloseFriends) to learn a good model. The GitHub dataset seems to be better, but we didn't evaluate on it.\", \"title\": \"Our implementation and extension of the paper\"}",
"{\"comment\": \"Congratulations! Would you plan to release the source code?\", \"title\": \"Would you plan to release the source code?\"}",
"{\"metareview\": \"After discussion, all reviewers agree to accept this paper. Congratulations!!\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-Review for DyRep paper\"}",
"{\"title\": \"Revision\", \"comment\": \"We\\u2019d like to thank all the reviewers for your helpful comments. We\\u2019ve made the following updates to our paper based on your feedback:\", \"main_paper\": [\"=========\", \"Revised Section 2.3 based on reviews and discussion.\", \"Removed $l$ as it was only used for book-keeping and only invoked in Algorithm 1. However, as input to Algorithm 1 is A(\\\\bar{t}), the most recent adjacency matrix, $l$ is redundant and can be removed. This helps to make a cleaner presentation\", \"Revised the text under Section 3.2.1 including explanation of Algorithm 1 and also rectified minor notations.\", \"Made \\\\bar notation consistent to signify past time points. Henceforth, for an event at time $t_p$, $\\\\bar{t_p}$ represents the global timepoint just before the current event while for a node $u$ involved in current event at time $t_p$, $\\\\bar{t_p}^u$ represents the timepoint of previous event involving node $u$. This makes all notation consistent and removes any use of $t_{p-1}$ in Eq 4 and $t-$ in Algorithm 1.\", \"Rectified any minor flaws suggested by the reviewers.\"], \"appendix\": \"=======\", \"added_two_new_sections\": [\"Section A: Pictorial exposition of DyRep\\u2019s representation learning module that visualizes Localized Embedding Propagation Principle, Temporal Point Process based Self-Attention and Algorithm 1.\", \"Section B: Discusses rationale behind DyRep framework - includes discussion on marked process view of DyRep clarifying differences of edge type vs dynamic, consolidated comparison to (Trivedi et. al. 2017) and description on support for node, edge types and unseen nodes in our framework.\", \"Further, we have responded to individual comments below.\"]}",
"{\"title\": \"Thank you for the update\", \"comment\": \"Thank you for updating your review. We added a clarification on the point process perspective as a response to your previous comment. Here we address your updated review comments and re-emphasize the contributions of our work:\", \"exogenous_drive\": \"Do you mean Alice/Bob is a person inside network? The exogenous drive constitutes the changes in features of node caused by external influences. However, activities external to network are not observed in the dataset. Hence for a node $u$ (or Alice which will be a node in social network) , the term allows a smooth latent approximation of change in $u$\\u2019s features over time caused by such an external effect. Please note, $\\\\bar{t_{p}}^u$ is not the time of previous event in the global dataset, it is time for previous event of node $u$.\", \"contributions\": \"While one can augment the event specification in (Trivedi et. al. 2017) with additional mark information, that itself is not adequate to achieve our proposed method of modeling dynamical process over graphs at multiple time scales. A subtle but key difference in our deep point process formulation that allows us to achieve our goal of two time-scale expression, is the form of conditional intensity function (Eq 3 in our paper). We employ a softplus function for $f$ which contains a dynamic specific scale parameter $\\\\psi_k$ to achieve this while (Trivedi et al. 2017) uses an exponential (exp) function for $f$ with no such parameter. The exponential choice of $f$ also restricts their model to Rayleigh dynamics while DyRep can capture more general dynamics.\\n\\nHowever, we wish to emphasize that our major contributions for learning dynamic graph representation in this work extend well beyond this conditional intensity function. To the best of our knowledge, our work is the first to adopt the paradigm of expressing network processes at different time-scales (widely studied in network dynamics literature) to representation learning over dynamic graphs and propose an end-to-end framework for the same. Further our novel representation learning module that incorporates *graph structure* - using Temporal Point Process based Self-Attention (a principled advancement over all existing graph based neural self-attention techniques) and Localized Embedding Propagation - is not a straightforward extension or variant of (Trivedi et al. 2017).We will release the code and datasets with the final version of the paper.\\n\\nWe again thank you for your time and discussions. Please let us know if there are still unclear points and we would be happy to clarify your further concerns.\"}",
"{\"title\": \"We agree and provide further clarifications\", \"comment\": \"Thank you for a detailed response. We believe that we were describing similar things but from different perspectives and your response has greatly helped us to distill that. Below we provide further clarifications on our perspective:\\n\\nFirst, we clarify that $l$ was only used for book-keeping to check the status of link in Algorithm 1, so it should not be part of event representation $e$ and we rectify that in our revision by removing it completely as adjacency matrix A already provides that information.\", \"marked_process\": \"From a mathematical viewpoint, we agree with you that for any event $e$ at time $t$, any information other than the time point can be considered a part of mark space describing the events. Hence, in our case, given a one-dimensional timeline, we can consider O=\\\\{(u,v,k)_p, t_p)_{p=1}^P as a marked process with the triple (u,v,k) representing the mark.\\n\\nHowever, using a single-dimensional process with such marks does not allow to efficiently and effectively discover or model the structure in the point process useful for learning intricate dependencies between events, participants of the events and dynamics governing those events. Hence, it is often important to extract the information out of the mark space and build an abstraction that helps to *discover the structure* in point process and make this learning *parameter efficient*. In our case, this translates to two components: \\n\\ni) The nodes in the graph are considered as dimensions of the point process, thus making it a multi-dimensional point process where an event represents interaction/structure between the dimensions, thus allowing us to explicitly capture dependencies between nodes. \\nii) The topological evolution of networks happen at much different temporal scale than activities on a fixed topology network (e.g. rate of making friends vs liking a post on a social network). However both these processes affect each other\\u2019s evolution in a complex and nonlinear fashion. Abstracting $k$ to associate it with these different scales of evolution facilitates to model our purpose of expressing dynamic graphs at two time scales in a principled manner. It also provides an ability to explicitly capture the influential dynamics (Chazelle et. al. 2012) of topological evolution on dynamics of network activities and vice versa (through the learned embedding -- aka evolution through mediation which is the most crucial part of this whole framework). \\n\\nNote that this distinction in use of mark information is also important as we learn representations for nodes (dimensions) but not for $k$. Our overall intention here is to make sure that $k$ representing two different scales of event dynamics is not confused with edge or interaction type. For instance, in case of typed structural edge (e.g. wasbornIn, livesIn) or typed interaction (e.g. visit, fight etc. as in Trivedi et. al. 2017), one would add type as another component in the mark space to represent an event while $k$ still signifying different dynamic scales. In that sense, (Trivedi et. al. 
2017) can also be viewed as a marked process that only models the typed interaction dynamics at a single time-scale and does not model topological evolution.\", \"independence\": \"We agree with you but we would paraphrase your statement as follows: The next event and its mark (u,v,k) at time $t$ is conditionally independent of all past events and their marks given the conditional intensity function, which itself is a function of the model and the most recent *learned representations* of nodes (this is the most important part for this to hold) at time $t$.\\n\\nBernard Chazelle. Natural Algorithms and Influence Systems, 2012.\"}",
"{\"title\": \"Response to Reviewer 4 - Part 2\", \"comment\": [\"Functional form of Computing Representation: Eq 4. provides the functional form that computes the representations with inputs being the three terms and parameterized by the W parameters. We state this clearly in revised version. Note that z^u(t) in Eq. 4 is qualified by $t$ and it keeps getting updated as the node $u$ gets involved in events. It does not represent direct embedding, rather just the placeholder for evolving embedding. For learning direct embedding of nodes (as done in *transductive* setting), one needs to have node-specific parameter i.e. one dimension of parameter matrix need to be of size = number of nodes in graph. In contrast to that, our setting is *inductive* where the parameters are not node-specific and hence it allows to learn general functions to compute representations given input information for a node. This allows to compute node embeddings for new (unseen) nodes without any necessity of altering the parameter space. This difference in transductive vs. inductive settings is well summarized for graphs in (Hamilton et. al. 2017).\", \"Algorithm 1: It seems there is a misunderstanding on this point. Algorithm 1 is not a part of training (Algorithm 2 makes training tractable). Algorithm 1 constitutes a vital part of the forward pass (our novel Temporal Point Process based Attention mechanism) that computes node embeddings. As Algorithm 1 is used in an involved process, we believe that a figure accompanying the process may provide easier access to the mathematics behind it. To this end, we have now added an auxiliary figure in Appendix A describing the use of Algorithm 1 and how the whole process works. In addition to the accompanying figure, we have also updated the description of Algorithm 1 in the main paper to make it more readable in the revised version.\", \"Adding new nodes: It is important to note the *inductive* ability of our framework described in response to your above question on computing functions, as that gives us an inherent ability to support new nodes. In practice, as described in Section 2.3 of the paper, the data contains a set of dyadic events ordered in time. Hence, each event involves two nodes $u$ and $v$. A new node will always appear as a part of such an event and it will be processed by the framework like any other node. We provide some more details on the mechanism in Appendix B.\", \"Comments on Experiment Section: Both datasets in (Trivedi et. al. 2107) are purely interaction datasets (i.e. contains information about activities on the network, e.g. visit, fight, etc.) but do not consider any topological events i.e. there do not exist an underlying topology between the nodes that interact in those events. One way to remedy that would be to augment such a dataset with an underlying fixed topology knowledge graph such as Freebase or Wikidata. We considered this approach but the issue in this case is the absence of time points for the formation of topological edges. As we require time-stamped events, we chose the datasets that naturally provided both network evolution and activities on network with timestamps in lieu of constructing an artificial network by combining multiple sources where the quality of such construction will also play a role. 
We believe that the two datasets used in this work contain a lot of interesting properties observed in real-world dynamic graphs, which helps to adequately evaluate our proposed contributions and serves as strong empirical evidence of the success of our approach.\", \"In the interest of space, we provide preliminary details on the datasets in Section 5.1, while more details on the two datasets are available in Appendix G.1.\", \"Please let us know if something is still not clear and we will be happy to further discuss and address your concerns.\", \"William L. Hamilton et al., Representation Learning on Graphs: Methods and Applications, 2017.\"]}",
"{\"title\": \"Response to Reviewer 4 - Part 1\", \"comment\": \"We thank the reviewer for providing detailed comments. Below we provide clarifications on your specific points:\\n\\n- Importance of Two-time scale Process: We emphasize that the two-time scale expression of dynamic processes over graphs is not an assumption of our work; it is a naturally observed phenomenon in any dynamic network. For instance, consider the dynamics over a social network. The growth of network (topology change) by addition of new users (nodes) or new friendships (edges) occurs at significantly different rate/dynamics compared to various activities on a *fixed* network topology (self evolution of user\\u2019s features, effect on user from activities external to network, information propagation on network or interactions (sending a message, liking a post, comments, etc.). Further, both these dynamics affect each other significantly - befriending someone on social network increases the likelihood of activities between those nodes and on the other way around, activities such as regularly liking or sharing a post or mere prolonged interest in posts from friends of friends may lead to a friendship or follow edge between non-friends.\\n\\nThis dichotomy of expressing network processes at two different time-scales (dynamic *of* the network or network evolution) and (dynamic *on* the network or network activities) is a widely known phenomenon that is subject of several studies in dynamic networks literature [1,2,3,4,5]. However, to the best of our knowledge, our work is the first to adopt this paradigm for large scale representation learning over dynamic graphs and propose an end-to-end framework for the same. \\n\\n- Support for Node and Edge Types is inherent in our approach and not a limitation of our model. As both node and edge types are essentially features, our model does not require any modification in the approach incorporate them. We have added a brief discussion in Appendix B to explain how our model works in presence of them. Consequently, DyRep can learn representations over various categories of dynamic graphs including but not limited to social networks, biological networks, dynamic knowledge graphs etc. as long as data provides time stamped events for both network evolution and activities on the network. \\n\\n- Support for Deletion: Being a continuous-time model, our work captures fine-grained temporal dependencies among network processes. To achieve this, the model needs time stamped edges for graphs. However, as we mention in conclusion of our paper, it is difficult to procure data with fine grained deletion time stamps. Further, the temporal point process model requires more sophistication to support deletion. For example, one can augment the model with a survival process formulation to account for lack of node/edge at future time which is an involved task and requires a dedicated investigation outside the scope of this paper.\\n\\n- Temporal Dependence between events: $lambda$ is the conditional intensity function the *conditional* part represents the occurrence of current event conditional on all past events. Hence, $\\\\lambda(t)$ can also be written as $\\\\lambda(t|\\\\amthcal{H}_t)$ to mention the conditional part where $\\\\mathcal{H}_t$ represents history of all previous event occurrences. In the point process literature, $\\\\mathcal{H}_t$ is often omitted as it is well understood. 
Next, the conditional intensity function is derived based on the most recent embeddings of the two nodes in the event. However, the node embeddings get updated after every event (whether k=0 or k=1). For instance, consider that a node $u$ was involved in a communication event (k=1) at time $t1$, an association event (k=0) at time $t2$, and another communication event (k=1) at time $t3$ ($t1$ < $t2$ < $t3$). In this case, the conditional intensity function computed for time $t3$ (when k=1) will use the most recent embeddings of node $u$, updated after its event at time $t2$ (when k=0), and similarly the conditional intensity function computed for time $t2$ (when k=0) will use the most recent embeddings of node $u$, updated after its event at time $t1$ (when k=1). This is how the two processes are interleaved with each other through evolving representations, whose learning is the latent mediation process.\\n\\n[1] Bernard Chazelle. Natural Algorithms and Influence Systems, 2012.\\n[2] Damien Farine. The dynamics of transmission and the dynamics of networks, 2017.\\n[3] Oriol Artime et al., Dynamics on networks: competition of temporal and topological correlations, 2017.\\n[4] Haijun Zhou et al., Dynamic pattern evolution on scale-free networks, 2005.\\n[5] Farajtabar et al., Coevolve: A Joint Point Process Model for Information Diffusion and Network Evolution, 2015.\"}",
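
The interleaving described above (the intensity evaluated at t3 uses embeddings refreshed by the k=0 event at t2, which in turn used embeddings refreshed by the k=1 event at t1) can be illustrated with a small, hypothetical sketch. Here `intensity` and `update_embedding` are stand-ins for the roles of the paper's Eq. 1 and Eq. 4, not their actual functional forms:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                             # embedding dimension
z = {n: rng.normal(size=d) for n in range(3)}     # most recent node embeddings

def intensity(zu, zv):
    # Stand-in for the conditional intensity lambda_k(t | H_t): it is computed
    # from the *most recent* embeddings of the two nodes, regardless of the
    # scale k of the event that last refreshed them.
    return float(np.logaddexp(0.0, zu @ zv))      # softplus keeps it positive

def update_embedding(zu, zv):
    # Stand-in for Eq. 4: after *every* event (k=0 or k=1), the embeddings of
    # both participating nodes are refreshed.
    return 0.9 * zu + 0.1 * zv

# Events for node u=0: communication (k=1) at t1, association (k=0) at t2,
# communication (k=1) at t3, with t1 < t2 < t3.
for (u, v, k, t) in [(0, 1, 1, 1.0), (0, 2, 0, 2.0), (0, 1, 1, 3.0)]:
    lam = intensity(z[u], z[v])                   # uses embeddings as of time t
    z[u], z[v] = update_embedding(z[u], z[v]), update_embedding(z[v], z[u])
    print(f"t={t}: k={k}, lambda={lam:.3f}")
```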
"{\"title\": \"I also politely disagree, but mostly because some of my comments were misunderstood\", \"comment\": \"Thank you for your reply. I do realize now that the process is not Poisson as the definition of \\\\lambda clearly depends on past marks (it is not an externally driven process like a non-homogeneous Poisson process). I will change my review accordingly.\\n\\nI also apologize but I fear we are talking past each other here (\\u201cWe disagree with these comments as this is an incorrect characterization of our work\\u201d \\u2026 ). I will strive to be more specific from now on. \\n\\n\\u201cIt seems that the misunderstanding arises from your assumption (including point 6) that \\u2026 \\u2018k\\u2019 is a mark\\u201d => By your own definition of O = \\\\{(u, v, t, l, k)_p\\\\}_{p=1}^P , which fits the Definition 2.1.2 of Jacobsen (2006) where T_p is your p-th event time and Y_p = (u, v, l, k) is an element of a Polish space E. When you say O is a not a Marked point process, what is the basis for the claim? Why would Y_p not be represented by a Polish space? \\n\\nFormally, any time-varying graph is a Marked point process where the edges are the marks. When I say \\u201cGraph process\\u201d, it is implicit that it has edge marks. Thus, my comment \\u201cGraph process\\u201d with edge marks k implies a measure (density) over the sigma algebra (sequence) given by O = \\\\{(u, v, t, k)_p\\\\}_{p=1}^P. The variable \\u201cl\\u201d is not properly a mark because it can be re-constructed from the process (l_p = 1 if there has been any event with k=0 in the past). Algorithm 1 uses this marks definition when it does \\u201cif k = 0 then Auv(t) = Avu(t) = 1\\u201c, i.e., k=0 is a mark of an observable edge (see description next). \\n\\n\\u201cIt seems that the misunderstanding arises from your assumption (including point 6) that \\u2018k\\u2019 is type of an edge,\\nPossibly my general use of the ill-defined term \\u201cedge\\u201d was not clear. I am thinking of (u,v) as a tuple. If (u,v) is a physical edge or a virtual edge \\u201cinteraction\\u201d, k \\\\in \\\\{0,1\\\\} defines a mark (physical or virtual). \\n\\n\\u201cIt seems that the misunderstanding arises from your assumption (including point 6) that \\u2018k\\u2019 has independence, none of which is true.\\u201d \\nWe seem be to talking about different things. Marks (u, v, t, k) are conditionally independent given the model and past marks, per your likelihood \\\\mathcal{L}. This is the independence I was referring to. Adding these marks to Trivedi et al. (2017) is rather (mathematically) straightforward given the independent nature of the model. Mathematically straightforward does not mean it is easy to get it to work in practice and releasing the code would be important.\\n\\nJacobsen, Martin. Point process theory and applications: marked point and piecewise deterministic processes. Springer Science & Business Media, 2006.\", \"minor\": \"Page 3, \\u03bb(t)dt:= P[event .. ] missing brackets\"}",
"{\"title\": \"An interesting idea which could use clearer theoretical justification and larger scale experimental validation.\", \"review\": \"Overall the paper suffers from a lack of clarity in the presentation, especially in algorithm 1, and does not communicate well why the assumption of different dynamical processes should be important in practice. Experiments show some improvement compared to (Trivedi et al. 2017) but are limited to two datasets and it is unclear to what extend end the proposed method would help for a larger variety of datasets.\\n\\nNot allowing for deletion of node, and especially edges, is a potential draw-back of the proposed method, but more importantly, in many graph datasets the type of nodes and edges is very important (e.g. a knowledge base graph without edges loses most relevant information) so not considering different types is a big limitation. \\n\\nComments on the method (sections 2-4).\\n\\nAbout equation (1):\\n \\\\bar{t} is not defined and its meaning is not obvious. The rate of event occurrence does not seem to depend on l (links status) whereas is seems to be dependent of l in algorithm 1. \\n\\nI don\\u2019t see how the timings of association and communication processes are related, both \\\\lambda_k seem defined independently. Should we expect some temporal dependence between different types of events here? The authors mention that both point processes are \\u201crelated through the mediation process and in the embedding space\\u201d, a more rigorous definition would be helpful here. \\n\\nThe authors claim to learn functions to compute node representations, however the representations z^u seem to be direct embeddings of the nodes. If the representations are computed as functions it should be clear what is the input and which functional form is assumed.\\n\\nI find algorithm 1 unclear and do not understand how it is formally derived, its justification seems rather fuzzy. It is also unclear how algorithm 1 relates to the loss optimisation presented in section 4. \\n\\nWhat is the mechanism for addition of new nodes to the graph? I don\\u2019t see in algorithm 1 a step where nodes can be added but this might be handled in a different part of the training. \\n\\nComments on the experiments section.\\n\\nSince the proposed method is a variation on (Trivedi et al. 2017), a strong baseline would include experiments performed on the same datasets (or at least one dataset) from that paper. \\n\\nIt is not clear which events are actually observed. I can see how a structural change in the network can be observed but what exactly constitutes a communication event for the datasets presented?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your review! We appreciate your time and supportive feedback and we are glad that you find our work interesting. Details about the corresponding association and communication events in the two datasets are provided in Appendix E.1. We uploaded a revised version that contains your suggested changes.\"}",
"{\"title\": \"Response to Reviewer 1 - Part II\", \"comment\": \"Responses to Other Comments:\\n========================\\n\\n1) This is incorrect as self-propagation mainly captures the recurrent evolution of one\\u2019s own latent features independent of others. Self-propagation principle states: A node evolves in the embedded space with respect to its previous position (e.g. set of features) and not in a random fashion. Based on Localized Propagation principle described above, a node's embedding is described by information it receives from other node and not exclusively it's own neighbors. The good performance of DyRep-No-SP signifies that the Localized Propagation term in Eq 4. is able to account for the relative position of node with respect to its previous position more often than not. Further, both dynamic of network and dynamic on network contribute to updates to a node's embedding. The interplay of multi-scale temporal behavior of these processes and evolving features leads to better discriminative embeddings, not just the rate of activities - this is evident by other exploratory use cases we discuss.\\n\\n2) $W_t(t_p - t_{p-1})$ is personalized as $t_p$ is node specific.\\n\\n3,4) We add the suggested changes to the revised version.\\n\\n5) The intention for the *qualitative* exploratory analysis was not to make a performance comparison, which is already available against dynamic baselines in our *quantitative* predictive analysis. The goal of Figure 4 and appendix experiments is to draw the comparison between how embeddings learned using state-of-the-art static methods would differ from our dynamic model in terms of capturing evolving properties over time. To our knowledge, such extensive analysis for dynamic embeddings is not available in previous works. Further, we believe that visualizing embeddings from another dynamic method against our model may not provide informative insights.\\n\\n6) This is incorrect - please check our main response above\\n\\n7) \\u201cz\\\" in Algorithm 1 is a temporary variable whose scope is limited to the algorithm. Please note that $\\\\lambda$ is an input to the algorithm and hence \\u201cz\\\" within Algorithm 1 has no interaction with the node embedding z (which always has a superscript) used throughout the paper. Hence, there is no recurrence, however, to avoid any further confusion, we change the temporary variable to \\u201cy\\\".\\n\\nDetails explaining Algorithm 1 in full are available on Page 7. Here we provide a simplified high-level explanation. As a starting point, we refer you to the point 2 in paragraph before Eq 4 page 5. To capture the effect described there, we parameterize the attention module with element of matrix S corresponding to an existing edge that signifies information/effect propagated by that edge. Algorithm 1 computes/updates this S matrix. Please note that S is parameter for a structural temporal attention which means temporal attention is only applied on structural neighborhood of a node. Hence, the value of S are only updated/active in two scenarios: a) the current event is between nodes which already has structural edge (communication between associated nodes or l=1, k=1) and b) the current event is an association event (l=0, k=0). Now, given a neighborhood of node \\u2018u\\u2019, $b$ represents background (base) attention for each edge which is uniform attention based on neighborhood size. 
Whenever an event occurs between two nodes, this attention changes in the following ways: for case (a), we just change the attention value of the corresponding S entry using the intensity of the event. For case (b), we repeat the same as in (a) but also adjust the background attention for each node, as the neighborhood size grows in this case.\\n\\n8) Thank you for pointing this out. It is true that we consider undirected graphs in the proposed work. However, our model can be easily generalized to directed graphs. Specifically, the difference would appear in the update of the matrix A used in Algorithm 1, which would subsequently lead to a different neighborhood and attention flow for each node. We will add this clarification in the revised paper.\\n\\nWe have uploaded a revised version of the paper to add the above clarifications, address your points, and discuss the related work cited by you (thank you for the pointers). Please let us know if something is still not clear and we will be happy to further discuss and address your concerns.\"}",
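
To accompany the two scenarios described above, here is a simplified, hypothetical sketch of an Algorithm-1-style update of the attention matrix S. The exact update rule in the paper differs, so the numerical details below (in particular, how the intensity `lam` enters) are placeholders rather than the published algorithm:

```python
import numpy as np

n = 4
A = np.zeros((n, n))          # adjacency (structural edges from association events)
S = np.zeros((n, n))          # attention over structural neighborhoods

def uniform_background(u):
    # b: uniform base attention over node u's current structural neighborhood.
    deg = A[u].sum()
    return 1.0 / deg if deg > 0 else 0.0

def process_event(u, v, k, lam):
    """Update S for an event between u and v with intensity lam.
    Case (a): communication (k=1) between already-associated nodes -> bump S[u,v].
    Case (b): association (k=0) -> create the edge, then rebalance the uniform
    background attention for both endpoints, since their neighborhoods grew."""
    if k == 1 and A[u, v] == 1:                       # case (a)
        S[u, v] = uniform_background(u) + lam         # placeholder combination
        S[v, u] = uniform_background(v) + lam
    elif k == 0:                                      # case (b)
        A[u, v] = A[v, u] = 1
        for w in (u, v):                              # neighborhood size changed
            for x in np.flatnonzero(A[w]):
                S[w, x] = max(S[w, x], uniform_background(w))
        S[u, v] = uniform_background(u) + lam
        S[v, u] = uniform_background(v) + lam

process_event(0, 1, k=0, lam=0.2)   # association creates the edge first
process_event(0, 1, k=1, lam=0.7)   # a later communication sharpens attention
print(S[0, 1], S[1, 0])
```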
"{\"title\": \"Response to Reviewer 1 - Part I\", \"comment\": \"Thank you for your review! We appreciate your comments and suggestions.\\n\\nAs a preface to our response, we wish to mention that, unlike existing approaches, our work expresses dynamic graphs at multiple time-scales as follows:\\na) Dynamic \\u201dof\\u201d the Network: This corresponds to the topological changes of the network \\u2013 insertion or deletion of nodes and edges. We use \\\"Association\\\" to label the observed process corresponding to this dynamic.\\nb) Dynamic \\u201don\\u201d the Network: This corresponds to activities on a *fixed* network topology \\u2013 self evolution of node\\u2019s features, change in node\\u2019s features due to exogenous drive (activities external to network), information propagation within network and interactions between nodes which may or may not have direct edge between them. We use \\\"Communication\\\" to label the observed process of interaction between nodes (only the observed part of dynamic \\u201don\\u201d the network).\", \"general_comment\": \"==============\\nOverall, the contribution of the paper is limited. It is essentially a minor extension of (Trivedi et al. 2017), adding attention, applied to two types of edges (communication edges and \\u201cfriendship\\u201d edges). Edges are assumed independent, which makes the math trivial. The work would be better described as modeling a Marked Poisson Process with marks k \\\\in {0,1}.\", \"response\": \"=========\\nWe politely disagree with these comments as this is an incorrect characterization of our work. It seems that the misunderstanding arises from your assumption (including point 6) that \\u2018k\\u2019 is type of an edge, \\u2018k\\u2019 is a mark and \\u2018k\\u2019 has independence, none of which is true. \\u2018k\\u2019 truly distinguishes scale of event dynamics (not type of edge) in our two-time scale model. In fact, when k=1, it is an interaction event which is not considered as an edge between nodes in our model. The edge (which forms graph structure) only appears through an association event (k=0). Indeed, \\u2018k\\u2019 corresponds to stochastic processes at different time scales and hence $\\\\psi_k$ is the rate (scale) parameter corresponding to each dynamic. Further, every time when k=0, an edge is created between different node pairs. As we clearly mention in the paper, we do not consider edge type in this work and hence \\u2018k\\u2019 is not a mark. However, edge type can be added to Eq 4 in case it is available. Finally, dynamic processes realized by k=0 and k=1 are not independent and are highly interleaved in a nonlinear fashion. For instance, formation of a structural edge (k=0) affects interactions (k=1) and vice versa. Algorithm 1 captures this intricate dependencies as we will describe below. Based on the above points, it follows that our model is not a marked Poisson process. In fact, it does not take any specific form of point process - rather learns the conditional intensity function through a function approximation.\\n\\nIn terms of contributions, we argue that our approach of modeling dynamic graphs at multiple scales and learning dynamic representations as latent mediation process bridging the two dynamic processes, is a significant innovation compared to any existing approaches. This is a non-trivial effort for a setting where the dynamic processes evolve in a complex and nonlinear fashion. 
Further, our temporal point process based structural-temporal self-attention mechanism, which models attention based on the event history of a node, is novel and has not been attempted before. Our attention model can: 1) take into account the temporal dynamics of activities on an edge and 2) capture effects from faraway nodes due to its dependence on event history. This is a formal advancement over state-of-the-art models of non-uniform attention (such as Graph Attention Networks). \\n\\nFurther, the paper provides an in-depth comparison with (Trivedi et al. 2017) (including Table 1). Here we reiterate the differences: (Trivedi et al. 2017) model events at a single time scale and do not distinguish between the two dynamic processes. They only consider edge-level information for learning the embeddings. Our model considers a higher-order neighborhood structure to compute embeddings. More importantly, in their work, the embedding update for a node \\u2018u\\u2019 considers the edge information for the same node \\u2018u\\u2019 at a previous time step. This is entirely different from our structural model based on the \\u201cLocalized Embedding Propagation\\u201d principle, which states: two nodes involved in an event form a temporary (communication) or a permanent (association) pathway for information to propagate from the neighborhood of one node to the other node. This means that, during the update of the embedding for node \\u2018u\\u2019, information is propagated from the neighborhood of node \\u2018v\\u2019 (and not node \\u2018u\\u2019, please check Eq. 4) to node \\u2018u\\u2019. Consequently, (Trivedi et al. 2017) do not have any attention mechanism, as they do not consider structure.\"}",
"{\"title\": \"Marked Point Process extension of (Trivedi et al., 2017)\", \"review\": \"Overall, the contribution of the paper is somewhat limited [but a little more than my initial assessment, thanks to the rebuttal]. It is essentially an extension of (Trivedi et al. 2017), adding attention to provide self-exciting rates, applied to two types of edges (communication edges and \\u201cfriendship\\u201d edges). Conditioned on past edges, future edges are assumed independent, which makes the math trivial. The work would be better described as modeling a Marked Point Process with marks k \\\\in {0,1}.\", \"other_comments\": \"1.\\t[addressed] DyRep-No-SP is as good as the proposed approach, maybe because the graph is assumed undirected and the embedding of u can be described by its neighbors (author rebuttal describes as Localized Propagation), as the neighbors themselves use the embedding of u for their own embedding (which means that self-propagation is never \\\"really off\\\"). Highly active nodes have a disproportional effect in the embedding, resulting in the better separated embeddings of Figure 4. [after rebuttal: what is the effect of node activity on the embeddings?]\\n2.\\t[unresolved, comment still misundertood] The Exogenous Drive W_t(t_p \\u2013 t_{p\\u22121}) should be more personalized. Some nodes are intrinsically more active than others. [after rebuttal: answer \\\"$W_t(t_p - t_{p-1})$ is personalized as $t_p$ is node specific\\\", I meant personalized as in Exogenous Drive of people like Alice or Bob]\\n3.\\t[unresolved] Fig 4 embeddings should be compared against (Trivedi et al. 2017) [after rebuttal: author revision does not make qualitative comparison against Trivedi et al. (2017)]\\n\\nBesides the limited innovation, the writing needs work. \\n4.\\t[resolved] Equation 1 defines $g_k(\\\\bar{t})$ but does not define \\\\bar{t}. Knowing (Trivedi et al. 2017), I immediately knew what it was, but this is not standard notation and should be defined. \\n5.\\t[resolved] $g_k$ must be a function of u and v\\n6.\\t[resolved] \\u201c$k$ represent the dynamic process\\u201d = > \\u201c$k$ represent the type of edge\\u201d . The way it is written $k$ would need to be a stochastic process (it is just a mark, k \\\\in {0,1})\\n7.\\t[resolved] Algorithm 1 is impossibly confusing. I read it 8 times and I still cannot tell what it is supposed to do. It contains recursive definitions like $z_i = b + \\\\lambda_k^{ji}(t)$, where $\\\\lambda_k^{ji}(t)$ itself is a function of $z_i(t)$. Maybe the z_i(t) and z_i are different variables with the same name?\\n8.\\t[resolved] The only hint that the graph under consideration is undirected comes from Algorithm 1, A_{uv}(t) = A_{vu}(t) = 1. It is *very* important information for the reader.\\nRelated work (to be added to literature):\", \"dynamic_graph_embedding\": \"(Yuan et al., 2017) (Ghassen et al., 2017)\", \"dynamic_sub_graph_embedding\": \"(Meng et al., 2018)\", \"minor\": \"state-of-arts => state-of-the-art methods\\nlist enumeration \\u201c1.)\\u201d , \\u201c2.)\\u201d is strange. Decide either 1) , 2) or 1. , 2. . I have never seen both.\\nMAE => mean absolute error (MAE)\\n\\nYuan, Y., Liang, X., Wang, X., Yeung, D. Y., & Gupta, A., Temporal Dynamic Graph LSTM for Action-Driven Video Object Detection. ICCV, 2017.\\nJerfel, , Mehmet E. Basbug, and Barbara E. Engelhardt. \\\"Dynamic Collaborative Filtering with Compound Poisson Factorization.\\\" AISTATS 2017. \\nMeng, C., Mouli, S.C., Ribeiro, B. 
and Neville, J., Subgraph Pattern Neural Networks for High-Order Graph Evolution Prediction. AAAI 2018.\\n\\n--- --- After rebuttal \\n\\nThe authors addressed most of my concerns. The paper has merit and would be of interest to the community. I am increasing my score.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"This paper presents a dynamic graph embedding method, which considers two types of dynamics in evolving networks: association events with node and edge grows, and communication events with node-node interactions.\", \"review\": \"The paper is very well written. The proposed approach is appropriate on modeling the node representations when the two types of events happen in the dynamic networks. Authors also clearly discussed the relevance and difference to related work. Experimental results show that the presented method outperforms the other baselines.\\nOverall, it is a high-quality paper.\", \"there_are_only_some_minor_comments_for_improving_the_paper\": \"\\u03bd\\tPage 6, there is a typo. \\u201cfor node v by employing \\u2026\\u201d should be \\u201cfor node u\\u201d\\n\\u03bd\\tPage 6, \\u201cBoth GAT and GaAN has\\u201d should be \\u201cBoth GAT and GaAN have\\u201d\\n\\u03bd\\tIn section 5.1, it will be great if authors can explain more what are the \\u201cassociation events\\u201d and \\u201ccommunication events\\u201d with more details in these two evaluation datasets.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Response to comment\", \"comment\": \"Thank you for your interest in our work.\\n\\nInspired from [1], our work expresses dynamic graphs at multiple scales as follows:\\na.) Dynamic \\u201dof\\u201d the Network: This corresponds to the topological changes in network \\u2013 insertion or deletion of nodes and edges\\nb.) Dynamic \\u201don\\u201d the Network: This corresponds to various activities in the network \\u2013 self evolution of node\\u2019s interests/features, change in node\\u2019s features due to exogenous drive (activities external to net-work), information propagation within network and within-network interactions between nodes which may or may not have direct edge between them. \\n\\nWe do not define \\\"Association\\\" and \\\"Communication\\\" as two new concepts or constraints on dynamic graphs neither do we claim that in the paper. Instead, we use those two words to label the well-known and naturally *observed* processes corresponding to the dynamics mentioned in (a) and (b) \\u2013 Association events maps to observed insertion of nodes or edges and Communication events maps to observed interactions between nodes (which is observed part of dynamic \\u201don\\u201d the network). Nevertheless, this dichotomy of dynamic network processes is well-known and has been subject of several studies [1, 2, 3, 4, 5] in segregated manner. But none of the existing machine learning approaches has jointly modeled them for representation learning over dynamic graphs (our key objective) to the best of our knowledge. \\n\\n\\u201dIn reality, dynamic networks are represented by insertion and deletion of nodes and insertion or deletion of edges between existing nodes.\\u201d\\n\\nThis is a rather limited or constrained view of dynamic graphs as there are many dynamic processes (as listed in b above) occurring on such a graph which cannot be realized by just modeling growth or shrinkage of graph. Approaches based on such model of dynamic network cannot distinguish or model interleaved evolution of network processes which leads to multiple shortcomings:\\n\\u2013 Such a model may capture structural evolution, but it lacks the ability to effectively and correctly capture dynamics \\u201don\\u201d the network. Concretely, the dynamic process under which a node\\u2019s features evolve or node interactions happen within a network (thus leading to information propagation) has vastly different behavior from the dynamic process that leads to growth (shrinkage) of the network structure. For example, social network activities such as liking a post or posting on discussion or sharing a video happen at much accelerated rate compared to slow rate of making friends and thereby growing the network. Hence it is important to express dynamic graphs at different time scales. \\n\\u2013 Edge types only serve as feature information and they can be readily added in our model if available. Edge weights may or may not be available apriori and may need to be inferred. Both of them are insufficient to effectively model the evolutionary multi-time scale dynamics of structure and network activities and their influence on each other. Further, neither of them express node specific dynamic properties. This, in turn, will not help to learn the effect of evolving node representations on observed processes and vice versa.\\n\\nExtended Details on use of both datasets is available in Appendix E. 
\\n\\n[1] Natural algorithms and influence systems.\\n[2] The dynamics of transmission and the dynamics of networks.\\n[3] Dynamics on networks: competition of temporal and topological correlations.\\n[4] Dynamic pattern evolution on scale-free networks.\\n[5] Coevolve: A Joint Point Process Model for Information Diffusion and Network Evolution.\"}",
"{\"comment\": \"The paper presents its content in the most complicated way. It defines new concepts of Association (refers to topological evolution) and Communication (refers to node interactions) for dynamic graphs and formulate the problem based on them. In reality, dynamic networks are represented by insertion and deletion of nodes and insertion or deletion of edges between existing nodes. The edges and nodes may have features or labels. The paper defines two new concepts of communication and association which I think are inherited from the edge concept with subtle differences. Association has global effects and communication has local effects on information exchange. I am really confused if we really need to define such new concepts and then propose a model for that, while in reality dynamic graphs usually do not contain these kinds of constraints. Assuming we have the realization of these concepts, can we formulate the problem using simpler models such as networks with typed edges or weighted edges? I am skeptical about how the authors use the datasets in the experiment. For example, in the Social Evolution Dataset, what is association and what is communication? How did you interpret the dataset to find these concepts? Do we really need to consider these concepts in the Social Evolution Dataset to do the link prediction? I think authors can elaborate on new concepts definitions and necessity for considering them in their method.\", \"title\": \"comment\"}",
"{\"title\": \"Thank you for interesting pointers\", \"comment\": \"We view the work on geometric deep learning as a very interesting direction for representation learning over graphs. However, most current works including cited papers in geometric deep learning over graphs primarily deal with static graphs, while our work focuses on dynamic graphs to jointly model both - topological evolution (dynamic of the network) and node interactions (dynamic on the graph). It would be interesting complimentary direction to extend cited spectral/spatial domain methods to derive local graph operators that can take into account both both temporal and spatial dynamics. We will add a related discussion section in the updated version of the paper.\"}",
"{\"comment\": \"I would like to draw the authors' attention to multiple recent works on deep learning on graphs directly related to their work. Among spectral-domain methods, replacing the explicit computation of the Laplacian eigenbasis of the spectral CNNs Bruna et al. with polynomial [1] and rational [2] filter functions is a very popular approach (the method of Kipf&Welling is a particular setting of [1]). On the other hand, there are several spatial-domain methods that generalize the notion of patches on graphs. These methods originate from works on deep learning on manifolds in computer graphics and recently applied to graphs, e.g. the Mixture Model Networks (MoNet) [3] (Note that Graph Attention Networks (GAT) of Veli\\u010dkovi\\u0107 et al. are a particular setting of the MoNet [3]). MoNet architecture was generalized in [4] using more general learnable local operators and dynamic graph updates. Finally, the authors may refer to a review paper [5] on non-Euclidean deep learning methods.\\n\\n\\n1. Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, arXiv:1606.09375\\n\\n2. CayleyNets: Graph convolutional neural networks with complex rational spectral filters, arXiv:1705.07664,\\n\\n3. Geometric deep learning on graphs and manifolds using mixture model CNNs, CVPR 2017. \\n\\n4. Dynamic Graph CNN for learning on point clouds, arXiv:1712.00268\\n\\n5. Geometric deep learning: going beyond Euclidean data, IEEE Signal Processing Magazine, 34(4):18-42, 2017\", \"title\": \"prior works on graph deep learning\"}"
]
} |
|
rkxwShA9Ym | Label super-resolution networks | [
"Kolya Malkin",
"Caleb Robinson",
"Le Hou",
"Rachel Soobitsky",
"Jacob Czawlytko",
"Dimitris Samaras",
"Joel Saltz",
"Lucas Joppa",
"Nebojsa Jojic"
] | We present a deep learning-based method for super-resolving coarse (low-resolution) labels assigned to groups of image pixels into pixel-level (high-resolution) labels, given the joint distribution between those low- and high-resolution labels. This method involves a novel loss function that minimizes the distance between a distribution determined by a set of model outputs and the corresponding distribution given by low-resolution labels over the same set of outputs. This setup does not require that the high-resolution classes match the low-resolution classes and can be used in high-resolution semantic segmentation tasks where high-resolution labeled data is not available. Furthermore, our proposed method is able to utilize both data with low-resolution labels and any available high-resolution labels, which we show improves performance compared to a network trained only with the same amount of high-resolution data.
We test our proposed algorithm in a challenging land cover mapping task to super-resolve labels at a 30m resolution to a separate set of labels at a 1m resolution. We compare our algorithm with models that are trained on high-resolution data and show that 1) we can achieve similar performance using only low-resolution data; and 2) we can achieve better performance when we incorporate a small amount of high-resolution data in our training. We also test our approach on a medical imaging problem, resolving low-resolution probability maps into high-resolution segmentation of lymphocytes with accuracy equal to that of fully supervised models. | [
"weakly supervised segmentation",
"land cover mapping",
"medical imaging"
] | https://openreview.net/pdf?id=rkxwShA9Ym | https://openreview.net/forum?id=rkxwShA9Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BylWv0IsyN",
"HJx_z_FYC7",
"B1xSzvYip7",
"HJg9xHKi6X",
"SygsHEtspm",
"SJlqORuoTX",
"SJeTefhtn7",
"ByguYDKY3X",
"rkln9EjE3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544412760798,
1543243791946,
1542326029315,
1542325490473,
1542325315265,
1542323825705,
1541157365013,
1541146495717,
1540826260514
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1552/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1552/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1552/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1552/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1552/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1552/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1552/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1552/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1552/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper formulates a method for training deep networks to produce high-resolution semantic segmentation output using only low-resolution ground-truth labels. Reviewers agree that this is a useful contribution, but with the limitation that joint distribution between low- and high-resolution labels must be known. Experimental results are convincing. The technique introduced by the paper could be applicable to many semantic segmentation problems and is likely to be of general interest.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"metareview\"}",
"{\"title\": \"Reviewer response\", \"comment\": \"The authors clarified the doubts I expressed in the review and properly answered all my questions.\\nGiven this, I confirm my positive rating.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"[Modified Nov. 19 to reflect changes in the text.]\\n\\nThank you for your thoughtful comments and questions. We have taken account of the Minor Concerns you raised.\", \"in_response_to_the_major_concerns\": \"We agree with your comment on P(Y,Z) and will incorporate discussion of both the generality and limitations of fixing a joint distribution. Please see the response to Reviewer3, part 1, above, as well as the updated Appendix F [formerly Appendix D], for discussion on this. In short, the estimated joint distribution need not be derived from high-resolution data: it could be specified a priori, tuned manually, derived from the output of another model, etc.\", \"loss_functions\": \"We summarize the intuitive motivation in our response to Reviewer3, part 2, above.\\n\\nQualitatively, the two loss functions have a similar form. They minimize:\\n(first term) the L2 distance between observed and expected counts normalized by the expected (resp. expected+observed) variance in (7) (resp. KL);\\n(second term) the variance at individual pixels.\\nIn (7), the added term in the denominator reduces the weight of the L2 distance when the model is uncertain (sigma is large). When the block size is large, the difference between the two functions becomes insignificant. However, when the block size is small, (7) punishes the model for predictions that are very certain but incorrect, so it must balance between high certainty (second term) and low certainty on unlikely predictions (first term).\\n \\nQuantitatively, if there are c classes, the maximum possible sigma2 occurs when all outputs are uniform over the classes and is equal to\\n1/Bk * 1/c * (1-1/c),\\nIn the land cover experiment, where Bk=900, and c=4, sigma2 is approximately 0.0002. If rho=0.03, as it may be for classes very unlikely to occur in given blocks (see Table 4), then rho2=0.0009, on the same order as sigma2. Thus we punish more for predictions that are certain but predict a class that is unlikely to occur. As predictions become more certain during training, sigma2 becomes insignificant.\\n \\nIndeed, in early experiments we found that beginning SR-only training with the KL distance sometimes led to pathological local minima, such as a single class always being predicted with high certainty. In contrast, it seems that using (7) in early training -- favoring uncertainty in unlikely predictions -- enables the behavior in Figure 7. However, if training is initialized with a well-performing model, the two criteria give similar results.\"}",
"{\"title\": \"Authors' response, part 2\", \"comment\": \"In response to (3):\\n\\nWe view the output of the core network as a generative model of (hard) segmentations, where the label at a given pixel is drawn from the distribution given by the model\\u2019s output. A version of the central limit theorem implies that if one samples the label at each pixel, the count of pixels of a class c within a block, appropriately normalized, will follow an approximately Gaussian distribution whose mean and variance are the average mean and variance of the distributions at individual pixels (eq. 4).\\n\\nThis point of view also leads naturally to the proposed loss function (eq. 7). Here we are maximizing the probability of the model producing the set of labels with the highest log-likelihood under both the network output and the known joint distribution of high-res and low-res labels. (In other words, we independently draw counts from p_net and p_coarse and choose the optimal counts conditioned on the two counts being equal.) On the other hand, the KL divergence mentioned in the footnote measures the expected log-likelihood under p_coarse of a sample count drawn from p_net. \\n\\nPlease see the response to Reviewer2 below for more discussion of the statistics and loss functions.\"}",
"{\"title\": \"Authors' response\", \"comment\": \"[Modified Nov. 19 to reflect changes in the text.]\\n\\nThank you for your thoughtful comments and questions.\\n\\n(1) In Figure 8 [now Figure 9; see also Figures 10-12], we see qualitatively that both the high-resolution and low-resolution models are sensitive to small-scale input features, and the low-resolution model indeed has little punishment for small-scale *label* errors when data is given at a scale of 8 numbers per image. Yet, our results demonstrate that a model that sees no high-resolution data can learn to (a) be sensitive to shape and (b) make highly certain predictions around boundaries (cf. Fig. 4). \\n\\nThis also depends on the capacity of the core segmentation model. In principle, if it is highly expressive, it could learn to recognize the blocks and fill in the labels inside the block to fit the frequencies without regard to the input features. This did not happen in our experiments, partly because the neural networks are difficult to overtrain.\\n\\n(2) Please see the response to Reviewer3 above regarding the uses of our method beyond the setup of the land cover example, as she or he raised closely related questions. We have updated Appendix D [now Appendix F] with an example of super-resolving coarse segmentations and discussion of other approaches to obtaining coarse labels.\"}",
"{\"title\": \"Authors' response, part 1\", \"comment\": \"[Modified Nov. 19 to reflect changes in the text.]\\n\\nThank you for your thoughtful comments and questions.\\n\\nIn response to (1) and (2):\\n\\nCertainly, knowing the distributions p(c|z) is a prerequisite to using our method. In the problem we are considering, where high-resolution and low-resolution classes may not match one-to-one, one must establish at least a weak correspondence between the two kinds of classes -- else, one does not know anything about the meaning of the target (high-res) classes. \\n \\nIn our main example, land cover mapping, high-resolution data is expensive and difficult to collect, but plenty of low-resolution data exists. However, our method is more general, as there are different potential sources of this distribution:\\n \\n- Labels given in coarse blocks with a known distribution (as the NLCD data in our land cover example). In fact, these need not be derived from any high-resolution data. For example, we could set the target distributions based on the descriptions in the NLCD specification (Table 3). Indeed, we found that this gave similar results, although more noise was seen in classes like \\\"Water\\\" and Evergreen Forest\\\" where the specification allows for a wide interval (e.g., [0.75,1], translated into mu=0.875 and sigma=0.25/sqrt(12)) but the true mean is much closer to 1 (Table 4). \\nFurthermore, this distribution can be tuned by hand (forcing the \\\"Water\\\" class to have higher proportion of water than what was in the NLCD description, for example). If there are only a handful of coarse and fine-grained labels, then such experimentation is not unreasonable.\\n \\n- Quantized density estimates from another model (as the coarse predictor output in our lymphocyte example).\\n \\n- A coarse segmentation provided by another model. We mock this by blurring the ground truth distribution in the Cityscapes pedestrians example, but this may come from the output of a coarser segmentation model, a class activation map coming from a classification model, etc. (as Reviewer1 seems to be suggesting).\\n \\nWe have added a small extension to the Cityscapes pedestrians example, showing how we can super-resolve coarse segmentations. We have updated Appendix D [now Appendix F] with these results and revised the text to emphasize the applicability of our approach to different kinds of problems.\\n \\nWe think that more general approaches to overcoming this limitation would be an interesting subject for future work. Potential directions are: (a) beginning with only rough priors on the distributions, estimate them by iteratively updating them with the label counts currently being predicted in blocks of each low-resolution class during training; (b) combine this with an unsupervised segmentation method to infer high-resolution classes, knowing they are distributed similarly in blocks of any given low-resolution class. In other words, we could estimate the joint distribution with EM. However, in most applications some knowledge of the relationship between classes is available, and, as discussed above, even weak or hand-tuned priors p(c|z) are often sufficient (depending, of course, on the capacity of the core model and the ability of the gradient descent to vastly overtrain).\\n\\nWe do think the method will be of interest to wide readership as there are many ways to adapt it to new applications. 
(We do see now that our focus on the two applications that most need this method may create the impression that the idea is limited to these two applications, and we will address that in the writing.)\"}",
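
The interval-to-moments conversion mentioned in the response above (the allowed range [0.75, 1] becoming mu=0.875 and sigma=0.25/sqrt(12)) is simply the mean and standard deviation of a uniform distribution over that interval; a two-line check:

```python
import math

lo, hi = 0.75, 1.0                       # class-proportion interval from a spec
mu = (lo + hi) / 2                       # 0.875
sigma = (hi - lo) / math.sqrt(12)        # 0.25/sqrt(12) ~= 0.0722
print(mu, sigma)
```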
"{\"title\": \"A very well written paper with substantial and well organized experimental content, but overall a bit too narrow in scope and technical contribution\", \"review\": \"The authors present a technique to exploit low resolution labels from a space Z to provide weak supervision to a semantic segmentation network which predicts high resolution labels from a different space Y, assuming that a joint distribution of Z, Y is known a-priori.\\n\\nThe paper is very well written and easy to follow, the main contribution is clearly and rigorously explained in the technical section.\\nThe technical contribution is somehow limited, but it is substantially validated by a very well organized and convincing experimental evaluation.\\nOverall, I have three main points of criticism (detailed in the following), which however aren't enough to not recommend this paper for acceptance.\", \"main_cons\": \"1) At points, the paper reads more like a technical report about solving specific problems in land cover estimation and lymphocyte segmentation than a machine learning paper.\\nMany paragraphs are devoted to describe the specifics of these two problems and to design methods to overcome them.\\nThe main technical contribution of the paper seems to be specifically tailored to solve the particular setup encountered in land cover estimation, i.e. two different sets of labels with different resolution on the same segmentation data, which ties to the next point.\\n\\n2) One important limitation lies in the fact that the distribution p(c|z) needs to be known a-priori and somehow derived from additional problem-specific knowledge.\\nThis is not an issue in the two tasks considered in the paper, but in my opinion it could severely limit the applicability of the proposed approach.\\nI think the paper would benefit from the inclusion of some discussion about how this limitation could be overcome.\\n\\n3) It's not very clear to me why the gaussian approximation with the specific mean and variance values defined in eq.4 would be a good approximation for p_net(c_lk|X).\\nCould the authors expand on this?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting approach, and unique use cases\", \"review\": \"This paper presents a method to super-resolve coarse low-res segmentation labels, if the joint distribution of low-res and high-res labels are known. The problem formulation and the proposed solution are valid, given the examples of land cover super-resolution and lymphocyte segmentation.\\nI like the paper in general, with the following concerns/thoughts:\\n1. While matching the divergence of low-res and high-res segmentations, will the model simply collapse and predict noisy boundaries? Or is it already the case, as can be seen in Figure 8 of Appendix? It seems possible that the model is learning high resolution noises. I suggest the authors to do more careful analysis on this.\\n2. I am curious to see if the proposed technique can be used in other aspects, like super-resolving the boundary of semantic segmentations.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Fun, useful, and well presented idea. Experimental results are convincing, too.\", \"review\": \"Paper summary:\\n\\nThis paper presents a deep-learning based method for super-resolving low-resolution labels into high-resolution labels given the joint distribution between those low- and high- resolution labels. This is useful for many semantic segmentation tasks where high-resolution ground truth data is hard and expensive to collect. Its main contribution is a novel loss function that allows to minimize the distance between the distribution determined by a set of model outputs and the corresponding distribution given by low-resolution label over the same set of outputs. The paper also thoroughly evaluates the proposed method for two main tasks, the first being a land cover mapping task and the second being a medical imaging problem.\\n\\nFor the land cover application, adding low-resolution data to high-resolution data worsens the results when evaluating on the geographic area from which the high-resolution data was taken. However, when testing the model on new geographic areas and only adding the low-resolution data from this new area in training makes significant improvements.\\n\\nGenerally the paper is very well written, well structured, all explanations are clear, examples and figures are presented when needed and convey helpful information for the reader. The overall idea is fun, original, useful (especially in remote sensing) and is presented in a a convincing way. All major claims are supported by experimental evaluation. There are nevertheless a few concerns:\", \"major_concerns\": \"On a conceptual level, the main concern is that the paper assumes we are given a joint distribution of low and high resolution labels, \\u201cwhere we are given the joint distribution P(Y,Z)\\u201d, which seems the main limitation of this method. In fact, to correctly estimat this joint distribution either requires additional knowledge about low-resolution data such as the example presented on the NCLD data : \\u201cFor instance, the \\u201cDeveloped, Medium Intensity\\u201d class [...] of the coarse classes\\u201d, or it requires actual high-resolution labelled data to correctly estimate this joint distribution. I think the paper would greatly benefit from including a section that discusses the impact of this limitation.\\n\\nAnother point is footnote 3 on page 5. This argument is valid but it would be more convincing to give a thorough explanation on why the choice of the presented loss function is better compared to the KL divergence based loss function or at least some evidence that the two perform similarly when evaluating the method.\", \"minor_concerns\": [\"\\u201csuch as CRFs or iterative evaluation\\u201d I would include a citation on this type of work.\", \"Format of some references in the text need to be corrected, e.g. \\u201cinto different land cover classes Demir et al. (2018); Kuo et al. (2018); Davydow et al. (2018); Tian et al. (2018).\\u201d\"], \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
HkfPSh05K7 | Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering | [
"Rajarshi Das",
"Shehzaad Dhuliawala",
"Manzil Zaheer",
"Andrew McCallum"
] | This paper introduces a new framework for open-domain question answering in which the retriever and the reader \emph{iteratively interact} with each other. The framework is agnostic to the architecture of the machine reading model provided it has \emph{access} to the token-level hidden representations of the reader. The retriever uses fast nearest neighbor search that allows it to scale to corpora containing millions of paragraphs. A gated recurrent unit updates the query at each step conditioned on the \emph{state} of the reader and the \emph{reformulated} query is used to re-rank the paragraphs by the retriever. We conduct analysis and show that iterative interaction helps in retrieving informative paragraphs from the corpus. Finally, we show that our multi-step-reasoning framework brings consistent improvement when applied to two widely used reader architectures (DrQA and BiDAF) on various large open-domain datasets --- TriviaQA-unfiltered, Quasar-T, SearchQA, and SQuAD-open\footnote{Code and pretrained models are available at \url{https://github.com/rajarshd/Multi-Step-Reasoning}}. | [
"Open domain Question Answering",
"Reinforcement Learning",
"Query reformulation"
] | https://openreview.net/pdf?id=HkfPSh05K7 | https://openreview.net/forum?id=HkfPSh05K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJeW40EwvN",
"H1xk4aEvwE",
"BkeCzfDxSN",
"SyxRWCJK7N",
"ryl25JGilV",
"Hklr6h55l4",
"BJefzpggg4",
"SkxCvQKjyE",
"H1eqAZ8iJV",
"SJeVzbUoyN",
"rJeN1G8oA7",
"ryliayIs0X",
"Sygq4coqA7",
"SyxjKXK90m",
"r1xQXGYqA7",
"SJl8P-Fc07",
"HJgTj6OcRm",
"BklH_p_qRm",
"H1lQ7Ar52X",
"BylrP5Q92m",
"BklS0hsYs7"
],
"note_type": [
"official_comment",
"official_comment",
"comment",
"comment",
"official_comment",
"comment",
"meta_review",
"comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1552530984523,
1552530727188,
1549984277595,
1548447238341,
1545441172410,
1545411773409,
1544715529852,
1544422246288,
1544409553738,
1544409356101,
1543360988319,
1543360451226,
1543318066109,
1543308162635,
1543307803282,
1543307613758,
1543306661407,
1543306605353,
1541197339283,
1541188189331,
1540107468731
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1551/Area_Chair1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1551/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1551/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1551/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1551/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1551/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Re:\", \"comment\": \"I apologize for the delay in releasing the code. The code and pretrained models are available here (https://github.com/rajarshd/Multi-Step-Reasoning).\\n\\nThanks!\\nRajarshi\"}",
"{\"title\": \"Sorry for the delay\", \"comment\": \"Due to personal deadlines, releasing the code got delayed, but I have opensourced the code and pretrained models here -- https://github.com/rajarshd/Multi-Step-Reasoning\\n\\nThanks!,\\n\\nRajarshi\"}",
"{\"comment\": \"Agreed will be great if you can open source the code and models soon!\", \"title\": \"Open Source model/code\"}",
"{\"comment\": \"Hi, any update on the source code?\", \"title\": \"Opensourcing?\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Thanks again!. This work was very much a joint effort with Shehzaad Dhuliawala and Manzil Zaheer. We are planning on open-sourcing the code as soon as possible. The holidays might delay it by a week but if you need it sooner, feel free to email us and we will work with you.\"}",
"{\"comment\": \"Congratulations on your acceptance! I thought it was work done by Mr.Das and Prof.McCallum(and it really is). Your writing style and research idea are quite consistent with MINERVA. I wish you could open source soon so that we can learn from it like MINERVA, and catch deadline of ACL (as well as NAACL rebuttal).\", \"title\": \"Congratulations!\"}",
"{\"metareview\": \"\", \"pros\": [\"novel idea for multi-step QA which rewrites the query in embedding space\", \"good comparison with related work\", \"reasonable evaluation and improved results\"], \"cons\": \"There were concerns about missing training details, insufficient evaluation, and presentation. These have been largely addressed in revision and I am recommending acceptance.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An interesting approach to open domain QA using query rewriting in latent space\"}",
"{\"comment\": \"That would be really helpful! Thanks for your update!\", \"title\": \"Thank you!\"}",
"{\"title\": \"Re:\", \"comment\": \"Thanks for your comment!. Right now the link is intentionally anonymized. We will release the code once the decision on the paper is finalized. Thank you for your interest!\"}",
"{\"comment\": \"This paper is very interesting and we're are doing follow-up research. Could the authors update their link to their source code? The current link doesn't seem to work. Thanks a lot!\", \"title\": \"Model implementation and source code\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Thank you for your insightful comments which helped make the paper a lot better.\"}",
"{\"title\": \"Increasing the score\", \"comment\": \"Thanks for the authors for updating the paper. The updated paper have more clear comparisons with other models, with more & stronger experiments with the additional dataset. Also, the model is claimed to perform multi-step interaction rather than multi-step reasoning, which clearly resolves my initial concern. The analysis, especially ablations in varying number of iterations, was helpful to understand how their framework benefits. I believe these make the paper stronger along with its initial novelty in the framework. In this regard, I vote for acceptance.\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"We thank you for your helpful reviews. We have significantly updated the writing of the paper to hopefully address all confusion and we\\u2019ve also updated the results section of the paper for better comparison. In a nutshell, we have added a section on retriever performance demonstrating the scalability of our approach (sec 4.1). We have improved results for our experiments with BiDAF reader and we have also added new results on the open-domain version of the SQuAD dataset.\\n\\n> In the general sense, the architecture can be seen as a specific case of a memory network. Indeed, the multi-reasoner step can be seen as the controller update step of a memory network type of inference. The retriever is the attention module and the reader as the final step between the controller state and the answer prediction.\\n\\nWe agree with you and think its a valid way of viewing our framework. We have updated and cited memory networks in our paper (Sec 4) . However, we would like to point out that most memory network architectures are based on soft-attention, but in our case the retriever actually makes a \\u201chard selection\\u201d of the top-k paragraphs and hence for the same reason, we have to train it via reinforcement learning.\\n\\n> The authors claim the method is generic, however, the footnote in section 2.3 mentioned explicitly that the so-called state of the reader assumes the presence of a multi-rnn passage encoding. Furthermore, this section 2.3 gives very little detailed about the \\\"reinforcement learning\\\" algorithms used to train the reasoning module.\\n\\nWe agree with you and based on your comments we have made this absolutely clear in the paper. Our method needs access to the internal token level representation of the reader model in order to construct the current state. The current API of machine reading models only return the span boundaries of the answer, but for our method, it needs to return the internal state as well. What we wanted to convey is, our model does not depend/need any neural architecture re-designing to an existing reader model. To show the same, we experimented and showed improvements with two popular and widely used reader architectures - DrQA and BiDAF.\\nRegarding results of BiDAF -- During submission we ran out of time and hence we could not tune the BiDAF model. But now the results of BiDAF have improved a lot and as can be seen from (Table 2, row 9), the results of BiDAF are comparable to that of DrQA. \\nWe have also significantly updated the model section of our paper to include more details about methods and training (Sec 2 & 3) with details about our policy gradient methods and training procedure.\\n\\n> Finally, the experimental section, while giving encouraging results on several datasets could also have been used on QAngaroo dataset to assess the multi-hop capabilities of the approach. \\n\\nWe did not consider QAngaroo for the following reasons -- (a) The question in QAngaroo are based on knowledge base relations and are not natural language questions. This makes the dataset a little synthetic in nature and we were unsure if our query reformulation strategy would work in this synthetic setting. (b) In this paper, we have tried to focus on datasets for open domain settings where the number of paragraphs per query is large (upto millions). QAngaroo on the other hand is quite small in that respect (avg of 13.7 paragraphs per question). 
We were unsure, that in this small setting, if we would see significant gains by doing query reformulation. \\n\\nWe have shown the effectiveness of our model in 4 large scale datasets including new results on SQuAD-open since submission. We sincerely hope, we will not be penalized for not showing the effectiveness of our model on enough number of datasets.\\n\\n> Furthermore, very little details are provided regarding the reformulation mechanism and its possible interpretability.\\n\\nWe have significantly updated this section of the paper. We have added a whole new section (Sec 5.3) with detailed analysis of the effect of query reformulation. In Table 4, we quantitatively measure if the iterative interaction between the retriever and reader is able to retrieve better context for the reader.\"}",
"{\"title\": \"Summary of updates\", \"comment\": \"Based on the insightful feedback from our reviewers, we\\u2019ve updated our paper. Below we summarize the general changes.\", \"writing_and_analysis_of_results\": \"We have significantly improved the writing of our paper, especially the model (Sec 2, Sec 3) and the experiments section (Sec 5). We have added the details of our training methodology (e.g. details of reinforcement learning and various hyperparameters). In the experiments section, we have included a new section on analysis of results (Sec 5.3) in which we quantitatively measure if the iterative interaction between the retriever and reader is able to retrieve better context for the reader (Table 4)\", \"performance_of_paragraph_retriever\": \"We have added a new section on the performance of the paragraph retriever (Sec 4.1). We show that our retriever architecture based on fast nearest neighbor search can scale to corpus containing millions of paragraphs where as retrievers of current best-performing models cannot scale to that size.\", \"new_bidaf_results\": \"During initial submission we ran out of time and could not tune our implementation of the BiDAF model. But since, the results of BiDAF have improved a lot and are comparable to that of DrQA (Table 2).\", \"new_results_on_squad_open\": \"We have also added new results on another popular dataset -- the open domain setting of SQuAD. Following the setting of Chen et al., (2017), we were able to demonstrate that our framework of multi-step-interaction improves the exact match performance of a base DrQA model from 27.1 to 31.9.\", \"change_in_title\": \". Following the comment by reviewer 2, we have renamed the title of the paper to \\u201cMulti-step Retriever-Reader Interaction for Scalable Open-domain Question Answering\\u201d.\\nWe believe that our framework that supports retriever-reader interaction would be a starting point to build models for multi-hop \\u201creasoning\\u201d but the current datasets do not explicitly need models with such inductive bias. Hence it will be more appropriate for our work to have this title.\"}",
"{\"title\": \"Response to reviewer 2 (continued)\", \"comment\": \"Response to Reviewer 2 (continued from before)\\n4. There are other published papers with higher result on Quasar-T, SearchQA and TriviaQA (such as https://aclanthology.info/papers/P18-1161/p18-1161 and https://arxiv.org/abs/1805.08092) which the authors did not compare with.\\n\\nWork by (Min, Zhong, Socher, Ziong, 2018) has results on TriviaQA-wikipedia setting. Our results are on the unfiltered setting of TriviaQA as we mentioned in the previous response, hence the results are not comparable. However, their results on SQuAD-open is comparable to our new experiments on SQuAD and we have added it in Table 2.\\nWe also have results of DS-QA (Lin, Ji, Liu, Sun, 2018) in Table 2. They indeed have better results than us on SearchQA and we outperform them in TriviaQA-unfiltered. We tried to reproduce their results on Quasar-T with their code base and shared hyperparameter setting, but we could not reproduce it. However, for fairness, we have reported both their reported scores and our scores in the latest version of the paper. \\n\\n5. In Section 5.2, is there a reason for the specific comparison to AQA (5th line), though AQA is not SOTA on SearchQA? I don\\u2019t think it means latent space is better than natural language space. They are totally different model and the only intersection is they contains interaction between two submodules.\\n\\nActive Question Answering (AQA) propose a model in which an query reformulation agent sits between an user and a black box \\u201cQA\\u201d system. The agent probes the reader model (BiDAF) with (N=20) reformulations of the initial natural language query and aggregates the returned evidence to yield the best answer. The reformulation is done by a seq2seq model. In our method, the query reformulation is done by a gated recurrent unit to the initial query vector and this update is conditioned on the current state of the reader. By using the same reader architecture (BiDAF) in our experiments, we find significant improvements on SearchQA and other datasets.\\nWe have updated the paper to make this distinction very clear. We only wanted to convey that our strategy of query reformulation yield better empirical results than the query reformulation strategy adopted by AQA. However we do agree with you that there is no specific reason to compare this in the experiment section and we have removed it from there and added more relevant results.\\n\\n6. In Section 5, the authors mentioned their framework outperforms previous SOTA by 15% margin on TriviaQA, but what is that? I don\\u2019t see 15% margin in Table 2.\\n\\nThis is a miscalculation and was a huge oversight from our part. The relative increase from the previous best result is 9.5% (61.66 - 56.3)/56.3. We mistakenly calculated the improvement from results of R^3 which is a 14.98% (61.66 - 53.7)/53.7 relative increase. We have fixed it. \\n\\nIf I understood correctly, `TriviaQA-open` and `TriviaQA-full` in the paper are officially called `TriviaQA-full` and `open-domain TriviaQA`. How about changing the term for readers to better understand the task? Also, in Section 4, the authors said TriviaQA-open is larger than web/wiki setting, but to my knowledge, this setting is part of the wiki setting.\\n\\nThanks for the suggestion. Yes we agree, the naming convention we chose was confusing. `TriviaQA-full` is better known as TriviaQA-unfiltered, so we adopted that name. 
And for the experiment with 1.6M paragraphs per query, we have renamed it to TriviaQA-open, as per your suggestion.\\n\\nIt would be great if the authors make the capitalization consistent. e.g. EM, Quasar-T, BiDAF. Also, the authors can use EM instead of `exact match` after they mentioned EM refers to exact match in Section 5.2.\\nWe have fixed this, thanks!\"}",
"{\"title\": \"Response to reviewer 2\", \"comment\": \"We thank you for your very useful and detailed review. We have significantly updated the writing of the paper to hopefully address all confusion and we\\u2019ve also updated the results section of the paper for better comparison. In a nutshell, we have added a section on retriever performance demonstrating the scalability of our approach (sec 5.1). We have improved results for our experiments with BiDAF reader and we have also added new results on the open-domain version of the SQuAD dataset. Below we address your concerns point-by-point.\\n\\n1. The authors seem to highlight multi-step `reasoning`, while it is not `reasoning` in my opinion. Multi-step reasoning refers to the task which you need evidence from different documents, and/or you need to find first evident to find the second evidence from a different document. I don\\u2019t think the dataset here are not multi-step reasoning dataset, and the authors seem not to claim it either. Therefore, I recommend using another term (maybe `multi-step interaction`?) instead of `multi-step reasoning`.\\n\\nAfter much discussion among us, we have arrived to an agreement with your comment. We have renamed the title of the paper to \\u201cMulti-step Retriever-Reader Interaction for Scalable Open-domain Question Answering\\u201d.\\nWe believe that our framework that supports retriever-reader interaction would be a starting point to build models for multi-hop reasoning but the current datasets do not explicitly need models with such inductive bias. There has been some very recent efforts in this direction such as HotpotQA -- but this dataset was very recently released (after the ICLR submission deadline).\\n\\n2. While the idea of multi-step interaction and how it benefits the overall performance is interesting, the analysis is not enough. Figure 3 in the paper does not have enough description \\u2014 for example, I got the left example means step 2 recovers the mistake from step 1, but what does the right example mean?\\n\\nWe have significantly updated this section of the paper with much more analysis. We have included a new section on analysis of results (Sec 4.3) in which we quantitatively measure if the iterative interaction between the retriever and the reader is able to retrieve better context for the reader. We have also updated Figure 2 to report the results of our model for steps = {1, 3, 5, 7} for SearchQA, Qusar-T and TriviaQA-unfiltered.\\nTo answer your specific question about the second example from figure 3, after the query reformulation the new paragraph that was added also has the right answer string, i.e. the total occurrence of the correct answer span increased after the reformulation step. Since we sum up the scores of spans, this led to the overall increase in the score of the right answer span (Demeter, in Figure 3) to be the maximum. We have explained this in the text of the paper.\\n\\n3. On TriviaQA (both open and full), the authors mentioned the result is on hidden test set \\u2014 did you submit it to the leaderboard? I don\\u2019t see the same numbers on the TriviaQA leaderboard. Also, the authors claim they are SOTA on TriviaQA, but there are higher numbers on the leaderboard (which are submitted prior to the ICLR deadline).\\n\\nWe apologize for the confusion about this experiment. 
Ours and the reported baseline results are on the \\u201cTriviaQA-unfiltered\\u201d dataset (unfiltered version in http://nlp.cs.washington.edu/triviaqa/), for which there is no official leaderboard. The unfiltered version is built for open-domain QA. The evidence for each question in this setting are top 10 documents returned by Bing search results along with the Wikipedia pages of entities in the question. In the web setting, each question is associated with only one web document and in the Wikipedia setting, each question is associated with the wiki pages of entities in the question (1.78 wiki pages per query on avg.) Thus, the unfiltered setting has much more number of paragraphs than the individual web/wiki setting. Moreover, there is no guarantee that every document in the evidence will contain the answer making this setting even more challenging. However we did submit our model predictions to the TriviaQA admin who emailed us back the result on the hidden test set and to the best of our knowledge, we achieve the highest result on this setting of TriviaQA. We have updated the paper by naming this experiment TriviaQA-unfiltered and have clarified other details.\"}",
"{\"title\": \"Response to Reviewer 1 (continued)\", \"comment\": \"Response to Reviewer 1 (continued from before)\\n\\nMoreover, for TriviaQA their results and the cited baselines seem to all perform well below to current top models for the task (cf. https://competitions.codalab.org/competitions/17208#results).\\n\\nWe apologize for the confusion about this experiment. Ours and the reported baseline results are on the \\u201cTriviaQA-unfiltered\\u201d dataset (unfiltered version in http://nlp.cs.washington.edu/triviaqa/), for which there is no official leaderboard. The unfiltered version is built for open-domain QA. The evidence for each question in this setting are top 10 documents returned by Bing search results along with the Wikipedia pages of entities in the question. In the web setting, each question is associated with only one web document and in the Wikipedia setting, each question is associated with the wiki pages of entities in the question (1.78 wiki pages per query on avg.) Thus, the unfiltered setting has much more number of paragraphs than the individual web/wiki setting. Moreover, there is no guarantee that every document in the evidence will contain the answer making this setting even more challenging. However, we did submit our model predictions to the TriviaQA admin who emailed us back the result on the hidden test set. We have updated the paper by naming this experiment TriviaQA-unfiltered and have clarified other details.\\n\\nI would also like to see a better analysis of how the number of steps helped increase F1 for different models and datasets. The presentation should include a table with number of steps and F1 for different step numbers they tried. (Figure 2 is lacking here.)\\n\\nWe have included a detailed result in figure 2 where we note the results of our model for steps = {1, 3, 5, 7} for SearchQA, Qusar-T and TriviaQA-unfiltered. The key takeaway from the result is that multi-step interaction uniformly increases the performance across all the datasets.\\n\\nIn the text, the authors claim that their result shows that natural language is inferior to 'rich embedding spaces'. They base this on a comparison with the AQA model. There are two problems with this claim: 1) The two approaches 'reformulate' for different purposes, retrieval and machine reading, so they are not directly comparable. 2) Both approaches use a 'black box' machine reading model, but the authors use DrQA as the base model while AQA uses BiDAF. Indeed, since the authors have an implementation of their model that uses BiDAF, an additional comparison based on matched machine reading models would be interesting.\\n\\nWe have now reported the results of our method with a BiDAF reader on SearchQA (row 9, table 2) and have shown that our method outperforms AQA by a significant margin when both the model uses the same reader architecture (BiDAF).\\n\\nActive Question Answering (AQA) propose a model in which an query reformulation agent sits between an user and a black box \\u201cQA\\u201d system. The agent probes the reader model (BiDAF) with (N=20) reformulations of the initial natural language query and aggregates the returned evidence to yield the best answer. The reformulation module is trained end to end using policy gradients to maximize the F1 of the reader. In our method as well, the query reformulation is done to the initial query vector to maximize the F1 of the reader. In other words, both methods are reformulating to improve retrieval. 
By using the same reader architecture (BiDAF) in our experiments, we find significant improvements on SearchQA. We have updated the paper to make this distinction very clear.\"}",
"{\"title\": \"Response to reviewer 1\", \"comment\": \"We sincerely thank you for your insightful comments and we\\u2019re glad that you found our approach interesting. Based on your comments, we have significantly improved the writing of the paper with more details and have added more evaluation. Below we address your concerns point-by-point.\\n\\n- I find some of the description of the models, methods and training is lacking detail. For example, their should be more detail on how REINFORCE was implemented; e.g. was a baseline used?\\n\\nWe have significantly updated the model section of our paper to include more details about methods and training (Sec 2 & 3). To answer your specific question about use of variance reduction baseline with REINFORCE -- In question answering settings, it has been noted by previous work such as Shen et al., (2017) that common variance reduction techniques don\\u2019t work well. We also tried experimenting with a commonly used baseline - the average reward in a mini-batch, but found that it significantly degrades the final performance.\\n\\nI am not sure about the claim that their method is agnostic to the choice of machine reader, given that the model needs access to internal states of the reader and their limited results on BiDAF.\\n\\nWe agree with you and based on your comments we have made this absolutely clear in the paper. Our method needs access to the internal token level representation of the reader model in order to construct the current state. The current API of machine reading models only return the span boundaries of the answer, but for our method, it needs to return the internal state as well. What we wanted to convey is, our model does not depend/need any neural architecture re-designing to an existing reader model. To show the same, we experimented and showed improvements with two popular and widely used reader architectures - DrQA and BiDAF.\\nRegarding results of BiDAF -- During submission we ran out of time and hence we could not tune the BiDAF model. But now the results of BiDAF have improved a lot and as can be seen from (Table 2, row 9), the results of BiDAF are comparable to that of DrQA. \\n\\nIt is not clear to me which retrieval method was used for each of the baselines in Table 2.\\n\\nWe report the best performance for each of our baseline that is publicly available. Most of the results for the baseline (except DS-QA) are taken as reported in the R^3 paper. We briefly describe the retrieval method used by the baselines below:\\n(a) R^3 and DS-QA, like us, has a trained retriever module. R^3 retriever is based on the Match-LSTM model and DS-QA is based on DrQA model (more details in the respective papers). However, their retrievers compute query dependent para representation and hence don\\u2019t scale as we experimentally demonstrate in Fig 2.\\n(b) AQA, GA and BiDAF lack an explicit retriever module. They concatenate all paragraphs in the context and feed it to their respective machine reading module. Since the reader has to find the answer from possible very large context (because of concatenation), these models have lower performance as can be seen from Table 2.\\n\\nWhy does Table 2 not contain the numbers obtained by the DrQA model (both using the retrieval method from the DrQA method and their method without reinforcement learning)? That would make their improvements clear.\\n\\nThanks for suggesting this experiment! We ran the experiment and results are in (Table 2, row 7). 
We trained a DrQA baseline model and the results indeed suggest that multi-step reasoning give uniform boost in performance across all datasets.\"}",
"{\"title\": \"Interesting and encouraging results but limited novelties\", \"review\": \"The paper proposes a multi-document extractive machine reading model and algorithm. The model is composed of 3 distinct parts. First, the document retriever and the document reader that are states of the art modules. Then, the paper proposes to use a \\\"multi-step-reasoner\\\" which learns to reformulate the question into its latent space wrt its current value and the \\\"state\\\" of the machine reader.\\n\\nIn the general sense, the architecture can be seen as a specific case of a memory network. Indeed, the multi-reasoner step can be seen as the controller update step of a memory network type of inference. The retriever is the attention module and the reader as the final step between the controller state and the answer prediction.\\n\\nThe authors claim the method is generic, however, the footnote in section 2.3 mentioned explicitly that the so-called state of the reader assumes the presence of a multi-rnn passage encoding. Furthermore, this section 2.3 gives very little detailed about the \\\"reinforcement learning\\\" algorithms used to train the reasoning module.\\n\\nFinally, the experimental section, while giving encouraging results on several datasets could also have been used on QAngoroo dataset to assess the multi-hop capabilities of the approach. Furthermore, very little details are provided regarding the reformulation mechanism and its possible interpretability.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Very interesting idea; needs more details and better evaluation\", \"review\": \"The authors improve a retriever-reader architecture for open-domain QA by iteratively retrieving passages and tuning the retriever with reinforcement learning. They first learn vector representations of both the question and context, and then iteratively change the vector representation of the question to improve results. I think this is a very interesting idea and the paper is generally well written.\\n\\nI find some of the description of the models, methods and training is lacking detail. For example, their should be more detail on how REINFORCE was implemented; e.g. was a baseline used?\\n\\nI am not sure about the claim that their method is agnostic to the choice of machine reader, given that the model needs access to internal states of the reader and their limited results on BiDAF.\", \"the_presentation_of_the_results_left_a_few_open_questions_for_me\": [\"It is not clear to me which retrieval method was used for each of the baselines in Table 2.\", \"Why does Table 2 not contain the numbers obtained by the DrQA model (both using the retrieval method from the DrQA method and their method without reinforcement learning)? That would make their improvements clear.\", \"Moreover, for TriviaQA their results and the cited baselines seem to all perform well below to current top models for the task (cf. https://competitions.codalab.org/competitions/17208#results).\", \"I would also like to see a better analysis of how the number of steps helped increase F1 for different models and datasets. The presentation should include a table with number of steps and F1 for different step numbers they tried. (Figure 2 is lacking here.)\", \"In the text, the authors claim that their result shows that natural language is inferior to 'rich embedding spaces'. They base this on a comparison with the AQA model. There are two problems with this claim: 1) The two approaches 'reformulate' for different purposes, retrieval and machine reading, so they are not directly comparable. 2) Both approaches use a 'black box' machine reading model, but the authors use DrQA as the base model while AQA uses BiDAF. Indeed, since the authors have an implementation of their model that uses BiDAF, an additional comparison based on matched machine reading models would be interesting.\", \"Generally, it would be great to see more detailed results for their BiDAF-based model as well.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"New framework, but weak comparison\", \"review\": \"This paper introduces a new framework to interactively interact document retriever and reader for open-domain question answering. While retriever-reader framework was often used for open-domain QA, this bi-directional interaction between the retriever and the reader is novel and effective because\\n1) If the retriever fails to retrieve the right document at the first step, the reader can give a signal to the retriever so that the retriever can recover its mistake at the next step\\n2) The idea of `reader state` from the reader to the retriever is new\\n3) The retriever use question-independent representation of paragraphs, which does not require different representation depending on the question and makes the framework easily scalable.\\n\\nStrengths\\n1) The idea of multi-step & bi-directional interaction between the retriever and the reader is novel enough (as mentioned above). The paper contains enough literature studies on existing retriever-reader framework in open-domain setting, and clearly demonstrates how their framework is different from them.\\n2) The authors run the experiments on 4 different dataset, which supports the argument about the framework\\u2019s effectiveness.\\n\\nWeakness\\n1) The authors seem to highlight multi-step `reasoning`, while it is not `reasoning` in my opinion. Multi-step reasoning refers to the task which you need evidence from different documents, and/or you need to find first evident to find the second evidence from a different document. I don\\u2019t think the dataset here are not multi-step reasoning dataset, and the authors seem not to claim it either. Therefore, I recommend using another term (maybe `multi-step interaction`?) instead of `multi-step reasoning`.\\n2) While the idea of multi-step interaction and how it benefits the overall performance is interesting, the analysis is not enough. Figure 3 in the paper does not have enough description \\u2014 for example, I got the left example means step 2 recovers the mistake from step 1, but what does the right example mean?\\n\\nQuestions on result comparison\\n1) On TriviaQA (both open and full), the authors mentioned the result is on hidden test set \\u2014 did you submit it to the leaderboard? I don\\u2019t see the same numbers on the TriviaQA leaderboard. Also, the authors claim they are SOTA on TriviaQA, but there are higher numbers on the leaderboard (which are submitted prior to the ICLR deadline).\\n2) There are other published papers with higher result on Quasar-T, SearchQA and TriviaQA (such as https://aclanthology.info/papers/P18-1161/p18-1161 and https://arxiv.org/abs/1805.08092) which the authors did not compare with.\\n3) In Section 4.2, is there a reason for the specific comparison to AQA (5th line), though AQA is not SOTA on SearchQA? I don\\u2019t think it means latent space is better than natural language space. They are totally different model and the only intersection is they contains interaction between two submodules.\\n4) In Section 5, the authors mentioned their framework outperforms previous SOTA by 15% margin on TriviaQA, but what is that? I don\\u2019t see 15% margin in Table 2.\", \"marginal_comments\": \"1) If I understood correctly, `TriviaQA-open` and `TriviaQA-full` in the paper are officially called `TriviaQA-full` and `open-domain TriviaQA`. How about changing the term for readers to better understand the task? 
Also, in Section 4, the authors said TriviaQA-open is larger than web/wiki setting, but to my knowledge, this setting is part of the wiki setting.\\n2) It would be great if the authors make the capitalization consistent. e.g. EM, Quasar-T, BiDAF. Also, the authors can use EM instead of `exact match` after they mentioned EM refers to exact match in Section 4.2.\\n\\nOverall comment\\nThe idea in the paper is interesting, and their model and experiments are concrete. My only worries is that the terms in the paper are confusing and performance comparison are weak. I would like to update the score when the authors update the paper.\\n\\n\\nUpdate 11/27/2018\\nThanks for the authors for updating the paper. The updated paper have more clear comparisons with other models, with more & stronger experiments with the additional dataset. Also, the model is claimed to perform multi-step interaction rather than multi-step reasoning, which clearly resolves my initial concern. The analysis, especially ablations in varying number of iterations, was helpful to understand how their framework benefits. I believe these make the paper stronger along with its initial novelty in the framework. In this regard, I vote for acceptance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
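The record above describes the retriever-reader loop precisely enough to sketch it: paragraph embeddings are question-independent (so retrieval reduces to an inner-product top-k, approximated with fast nearest-neighbor search at scale), the reader returns an answer span plus its internal state, and a GRU rewrites the query vector from that state. The sketch below is a paraphrase under those assumptions, not the authors' released API; `read` and `gru_update` are hypothetical stand-ins for the trained reader and reformulator.

```python
import numpy as np

def multi_step_qa(query_vec, para_embs, read, gru_update, num_steps=3, k=5):
    """Illustrative multi-step retriever-reader interaction loop."""
    q = query_vec
    best_span, best_score = None, -np.inf
    for _ in range(num_steps):
        scores = para_embs @ q               # re-rank paragraphs with the current query
        top_k = np.argsort(-scores)[:k]      # hard top-k selection (trained with RL in the paper)
        span, span_score, reader_state = read(q, top_k)
        if span_score > best_score:          # keep the best-scoring answer across steps
            best_span, best_score = span, span_score
        q = gru_update(q, reader_state)      # reformulate the query in embedding space
    return best_span
```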
S1lPShAqFm | Empirically Characterizing Overparameterization Impact on Convergence | [
"Newsha Ardalani",
"Joel Hestness",
"Gregory Diamos"
] | A long-held conventional wisdom states that larger models train more slowly when using gradient descent. This work challenges this widely-held belief, showing that larger models can potentially train faster despite the increasing computational requirements of each training step. In particular, we study the effect of network structure (depth and width) on halting time and show that larger models---wider models in particular---take fewer training steps to converge.
We design simple experiments to quantitatively characterize the effect of overparametrization on weight space traversal. Results show that halting time improves when growing the model's width for three different applications, and the improvement comes from three factors: the distance from initialized weights to converged weights shrinks with a power-law-like relationship, the average step size grows with a power-law-like relationship, and gradient vectors become more aligned with each other during traversal.
| [
"gradient descent",
"optimization",
"convergence time",
"halting time",
"characterization"
] | https://openreview.net/pdf?id=S1lPShAqFm | https://openreview.net/forum?id=S1lPShAqFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkxL_p32yV",
"BylWTKda27",
"B1l-_TA237",
"Sye-0-Kt37"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544502638091,
1541405112595,
1541365096843,
1541145033397
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1550/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1550/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1550/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1550/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper studies the behavior of training of over parametrized models. All the reviewers agree that the questions studied in this paper are important. However the experiments in the paper are fairly preliminary and the paper does not offer any answers to the questions it studies. Further the writing is very loose and the paper is not ready for publication. I advise authors to take the reviews seriously into account before submitting the paper again.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"ICLR 2019 decision\"}",
"{\"title\": \"Interesting and inspiring observations, but need some further enhancement\", \"review\": \"This paper discusses the effect of increasing the widths in deep neural networks on the convergence of optimization. To this end, the paper focuses on RNNs and applications to NLP and speech recognition, and designs several groups of experiments/measurements to show that wider RNNs improve the convergence speed in three different aspects: 1) the number of steps taken to converge to the minimum validation loss is smaller; 2) the distance from initialization to final weights is shorter; 3) the step sizes (gradient norms) are larger. This in some sense complements the theoretical result in Arora et al. (2018) for linear neural networks (LNN), which states that deeper LNNs accelerates convergence of optimization, but the hidden layers widths are irrelevant. This also shows some essential difference between LNNs and (practical) nonlinear neural networks.\\n\\n### comments about writing ###\\nThe findings are in general interesting and inspiring, but the explanations need some further improvement. In particular, the writing lacks some consistency and clarity in the wordings. For example, it is unclear to me what \\\"weight space traversal\\\" means, \\\"training size\\\" is mixed with \\\"dataset size\\\", and \\\"we will show that convergence ... to final weights\\\" seems to be a trivial comment (unless there is some special meaning of \\\"convergence rate\\\"), etc. It also lacks some clarity and organization in the results -- some more summarizing comments and sections (and in particular, a separate and clearer conclusion section), as well as less repetitions of the qualitative comments, should largely improve the readability of the paper.\\n\\n### comments about results ###\\nThe observations included in the work may kick off some interesting follow-up work, but it is still a bit preliminary in the following sense:\\n1. It lacks some discussions with its connection to some relevant literature about \\\"wider\\\" networks (e.g., Wide residual networks, Wider or deeper: revisiting the ResNet model for visual recognition, etc.).\\n2. It lacks some discussions about the practical implication of the improvement in optimization convergence with respect to the widening of the hidden layers. In particular, what is the trade-off between the validation loss increase and the optimization convergence speed-up resulted from widening hidden layers? A heuristic discussion/approach should largely improve the impact of this work.\\n3. The simplified theory about LNNs in the appendix seems a bit too far from the explanation of the difference between the observations in this paper and Arora et al. (2018).\\n\\n### typos and small suggestions ###\\n1. It is suggested that the full name of LNN is provided at the beginning, and the font size should be larger in Figure 1.\\n2. There are some mis-spellings that the authors should check (e.g., gradeint -> gradient).\\n3. In formula (4), the authors should mention that the third line holds for all $t$ is a sufficient condition for the previous two equivalent lines.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting observations that are not backed up by a rigorous (empirical or otherwise) study\", \"review\": \"Understanding the effects of over-parametrization in neural network training has been a major challenge, albeit a lot of progress has been made in the past few years. The present paper is another attempt in this direction, with a slightly different point of view: the work characterizes the impact of over-parametrization in the number of iterations it takes an algorithm to converge. Along the way, it also presents further empirical observations such as the distance between the initial point and the final point and the angle between the gradients and the line that connects the initial and final points. Even though the observations presented are very interesting, unfortunately, the paper doesn't have the level of rigor required that would make it a solid reference.\\n\\nThe work presents its results somewhat clearly in the sense that one can simply reconstruct to probe in order to replicate the observations. This clarity is mainly due to the simplicity of the questions posed. There is nothing inherently wrong with simple questions, in fact, the kind of questions posed in the present paper are quite valuable, however, it lacks detailed study and rigor of a strong empirical work. Furthermore, the style of the exposition (anecdotal) and several obvious typos make the work look quite unfinished.\", \"here_are_some_flaws_and_suggestions_that_would_improve_the_work_substantially\": [\"A deeper literature review would help guide the reader put the paper in a better context. Especially, the related work section is quite poor, how exactly do those papers appear related to the present work? Do they support similar ideas or do they propose different perspectives?\", \"The exposition should be made more to the point and concise (for instance 3rd paragraph of section 4.3 where it starts with Figure 5(a) What's meant by over-fitting regime, is it worse gen error, is it merely fitting tr data?.. How do we \\\"know\\\" from Figure 2, what's a strong evidence? Some concepts such as the capacity do not have precise and commonly agreed upon definitions, the paper uses those quite a bit and sometimes only later on the reader understands what it actually refers to... The misalignment section is also quite unclear.)\", \"The observations can be formalized and the curve fitting should be explained in further detail, the appendix touches upon simple cases but there is a strong literature behind those simple cases that could be quite useful for the purposes of the paper.\", \"The authors have a lot of data available at no point the power law decay and exponent fitting are discussed. For a paper whose main point is this precise scaling, this looks like a major omission unless there is a specific reason for it (other than the hardness of fitting exponents to power laws). Merely showing the observables in a log-log plot weakens the support of the main claims.\", \"The theoretical argument provided is just an elementary observation whose assumptions and conditions are not discussed. 
It is not a straightforward task, for instance, a suggestion for a theoretical result on the distance between the initial and final weights is presented here: Lemma 1 A.3 https://arxiv.org/abs/1806.07572 (distance shrink as the number of parameters increase consistent with the observations of the present paper) (note that this is in addition to the several early-2018 mean field approximations to NNs whose solutions are found in the limit where the number of parameters tend to infinity)\", \"All the figures from 5 to 8 are presented very quantitatively such as looking at different layers and observing the percentage reductions. The message one can gain from such presentations are extremely limited and not systematic. I encourage the authors to formulate solid observables that can and should be tested in further detail.\", \"Even though the paper is touching upon very interesting questions, at its current stage, it is not a good fit to be presented in a conference as it only presents anecdotal evidence. There is a lot of room to improve, but the good news is that most of the improvement should be straightforward.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Mostly descriptive experimental analysis\", \"review\": \"This paper presents an empirical analysis of the convergence of deep NN training (in particular in language models and speech).\\n\\nStudying the effect of various hyperparameters on the convergence is certainly of great interest. However, the issue with this paper is that its analyses are mostly *descriptive*, rather than conclusive or even suggestive. For example, in Figure 2, it is shown that the convergence slope of Adam is steeper than that of SGD, when the x-axis is the model size. Very naturally I would be interested in a hypothesis like \\u201cAdam converges quicker than SGD as we increase the model size\\u201d, but there is no discussion like that. Throughout the paper there are many experimental results, but results are presented one after another, without many conclusions or suggestions made for practice. I don\\u2019t have a good take-away after reading it.\\n\\nThe writing of this paper also needs to be improved significantly. In particular, lots of statements are made casually without justification. For example,\\n\\n\\u201cIf hidden dimension is wide enough to absorb all the information within the input data, increasing width obviously would not affect convergence\\u201d -- Not so obvious to me, any reference? \\n\\n\\u201cFigure 4 shows a sketch of a model\\u2019s convergence curve ...\\u201d -- it\\u2019s not a fact but only a hypothesis. For example, what if for super large models the convergence gets slow and the curve gets back up again?\\n\\nIn general, I think the paper is asking an interesting, important question, but more developments are needed from these initial experimental results.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
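The abstract above boils down to three measurable quantities — the distance from initialized to converged weights, the average step size, and the alignment of gradients with the init-to-final direction — each reported with a power-law-like dependence on width. Below is a minimal sketch of how such measurements and fits could be computed; all names are hypothetical and the paper's exact protocol may differ.

```python
import numpy as np

def traversal_stats(w_init, w_final, grads):
    """Distance, mean step size, and gradient alignment for one training run."""
    direction = w_final - w_init
    dist = np.linalg.norm(direction)                    # init-to-converged distance
    step = np.mean([np.linalg.norm(g) for g in grads])  # average step size (up to the learning rate)
    unit = direction / (dist + 1e-12)
    align = np.mean([g @ unit / (np.linalg.norm(g) + 1e-12) for g in grads])
    return dist, step, align

def fit_power_law(widths, values):
    """Fit values ~ a * widths**b via linear regression in log-log space."""
    b, log_a = np.polyfit(np.log(widths), np.log(values), 1)
    return np.exp(log_a), b
```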
r1xwS3RqKQ | Differential Equation Networks | [
"MohamadAli Torkamani",
"Phillip Wallis"
] | Most deep neural networks use simple, fixed activation functions, such
as sigmoids or rectified linear units, regardless of domain or
network structure. We introduce differential equation networks, an
improvement to modern neural networks in which each neuron learns the
particular nonlinear activation function that it requires. We show
that enabling each neuron with the ability to learn its own activation
function results in a more compact network capable of achieving
comparable, if not superior, performance when compared to much larger
networks. We
also showcase the capability of a differential equation neuron to
learn behaviors, such as oscillation, currently only obtainable by a
large group of neurons. The ability of
differential equation networks to essentially compress a large neural network, without loss of overall performance
makes them suitable for on-device applications, where predictions must
be computed locally. Our experimental evaluation of real-world and toy
datasets shows that differential equation networks outperform fixed activation networks in several areas. | [
"deep learning",
"activation function",
"differential equations"
] | https://openreview.net/pdf?id=r1xwS3RqKQ | https://openreview.net/forum?id=r1xwS3RqKQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rygo3IdWx4",
"HyxikBgohX",
"r1lL2yBcnQ",
"BylD8IttnQ"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544812210823,
1541240035163,
1541193646092,
1541146191395
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1549/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1549/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1549/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1549/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers unanimously agreed the paper did not meet the bar of acceptance for ICLR. They raised questions around the technical correctness of the paper, as well as the experimental setup. The authors did not address any reviewer concerns, or provide any response. Therefore, I recommend rejection.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Rejection, reviewer concerns not addressed\"}",
"{\"title\": \"Interesting approach but limited experiments and insights\", \"review\": \"Personal Expertise:\\nThe reviewer has extensive practical and theoretical experience with deep networks, different activation functions, as well as some practical experience using deep networks to model physical phenomena which are different inverse problems than the more common perception modeling. However, the reviewer is not knowledgeable in using ODEs for deep networks.\", \"contributions_of_the_paper\": \"The contribution of the paper is in proposing a learnable activation function in form of an ODE which can help to better model highly oscillatory and irregular functions more efficiently. This can be potentially useful for special applications in inverse problems where (by field knowledge) we know highly non-linear and specific activation functions can be more reasonable than the common ReLU and its recent variants.\", \"quality_and_composition\": \"The composition of the theoretical part of the paper is clear. The experimental part is very limited though and does not include all the necessary details. Many of the details in the initial sections can be taken to the appendices to make room for more empirical studies. \\n\\nNovelty, related works:\\nThe work seems novel in proposing the specific activation function but in general there are many other works that propose learnable or more elaborate activation functions, neurons, or local parts of networks. It is not so clear how this work is different from those and nor is compared to those works. This includes networks in networks, maxout networks, capsule networks, etc.\", \"critique_of_the_theories_and_experiments\": \"\", \"theoretical_design\": \"The theoretical design and derivation of the paper seem correct, although the reviewer is not an expert on this topic. However, it does not clearly mention why the ODE is not designed and solved for each problem separately. Should there be different design choices for y for each task/dataset? Why not solving the ODE during the training as well? If we are solving the ODE only once and based on some initialization of coefficients, it seems to be equivalent to designing a learnable activation function such as leaky ReLU. In that regard, one could call leaky-ReLU a DifEN?\", \"experimental_setup\": [\"The motivation of the new activation function is for specific use-cases where oscillatory or decaying functions are to be modelled. In that respect, the experimental setup is quite limited and inconclusive.\", \"MNIST experiment: to conclusively evaluate the performance of the proposed activation function, it is important to try fixed activation functions on the same architecture as the DifEN and vice versa.\", \"Diabetes regression experiment: since the task is not a well-studied regression task, more experiments on various datasets are required to make a conclusion.\", \"The learnable activation function can potentially make the network more prone to overfitting, this needs to be tested thoroughly.\", \"An important application of the proposed activation function is mentioned to be for model compression. 
That should be properly tested on problems with large sets of parameters (such as ImageNet networks) and observe if the performance drop due to a decrease in the number of parameters in a standard network is sharper than that of a network with DifEN activations.\", \"More tasks and more analysis should be performed for the real-world tasks that the authors mention as the motivation of this work (specifically medical diagnosis or predictions). The analysis should demonstrate and give insight on the extra generalization power that the new activation function brings to those problems.\", \"Following on the previous point, it is important to empirically demonstrate for which applications DifEN is useful.\", \"Some analyses are missing on when a neuron can accentuate the problem of vanishing and/or exploding gradients in certain configurations of the DifEN parameters. It seems like the situation can arise during the training where the activation function become too steep or too saturated.\"], \"summary_judgment\": \"All in all, I think the proposed work has some potential in specific applications, however, the experimental setup does not give a clear and conclusive message of where and how the new activation functions are useful.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting concept, needs stronger backing empirically\", \"review\": \"This paper proposes the use of structured activations based on ordinary differential equations, as an activation in neural network architectures.\\nThere are validations that the approach discovers different activations, and comparisons to a variety of other architectures with fixed activations. In general, I would like to see additional contextualization of your work with other approaches which learn activation functions. How does your approach differ from max-out and Agostinelli? References to Network in Network, Lin et al and even Hypernetworks, Ha et al would also be helpful. Some of these papers are cited, but only the comparison to Ramachadran et. al. is directly discussed and the methodological difference there is not true for the case of Maxout.\\n\\nThe paper indicates that the ODE should be solved before beginning to use the network. Should this process be performed once per network design? Once per dataset? What happens if instead of solving f1 and f2, these are set randomly? How is this actually solved in the case of the MNIST convnet?\\n\\nThe experiments were useful in demonstrating the proposed method. However, some discussion and comparison to other learned activation functions would be helpful (for instance, one could perform similar experiments as in Maxout). Performance on larger datasets, as seen in Swish, would make the results more compelling. \\n\\nThe MNIST experiments shown are also pretty far from standard baselines. See for example the benchmark performance in Maxout, which also references an architecture from Jarrett et. al., 2009 which is quite similar to the baseline architecture, but ~.5% error. It isn't necessary to get the absolute best performance with a new activation, just show that the proposed doesn't actively hurt and enables new interpretation or direction. But as it stands, it isn't possible to tell from the experiments if the proposed method has serious limitations, because the baselines on MNIST are below where they should be.\\n\\nIn general, a larger test suite that compares on more standard datasets is necessary to really prove out this idea, or see if there are problems with cases on larger datasets. CIFAR10 at a minimum would be a key addition as well as other datasets (besides the diabetes dataset shown) where there can be direct comparison to existing work. Currently, only MNIST is filling that role. Many of the cited / compared work (PReLU, Swish, LReLU, SELU) has a broad suite of benchmarks, all on datasets with existing numbers tuned by the authors of respective past papers.\\n\\nWhat are the downsides of this method? Tradeoffs in memory and training time should be discussed in detail if application to low power hardware is a real application area, perhaps along with an inclusion of the time / effort required for solving the ODEs in Maple. A difference of 2x+ in parameter count may not be a difference if the computational time is much worse. Can you consider a case where \\\"normal\\\" architectures don't fit in memory, but one based on ODE activations will? The paper directly discusses mobile and low footprint deployments, but without discussion of the computational overhead and complexity it is speculative, especially when there are also numerous methods for compressing or distilling very large architectures to much smaller sizes, as well as small models which directly achieve high performance. 
A few relevant methods are linked below.\", \"mobilenets___https\": \"//arxiv.org/abs/1704.04861\", \"enet___https\": \"//arxiv.org/abs/1606.02147\\n\\nIn the Network Compression section, the paper fails to discuss a number of successful foundational and modern network compression techniques that would improve the argument, including: \\nOptimal brain damage - \\u201cremoving unimportant weights can actually improve performance\\u201d - https://papers.nips.cc/paper/250-optimal-brain-damage\", \"deep_compression\": \"Compressing deep neural network with pruning, trained quantization and huffman coding - pruned state-of-the-art CNN models with no loss of accuracy - https://arxiv.org/abs/1510.00149\", \"bayesian_compression_for_deep_learning___https\": \"//arxiv.org/abs/1705.08665\", \"practical_variational_inference_for_neural_networks___http\": \"//papers.nips.cc/paper/4329-practical-variational-inference-for-neural-networks\\n\\nThere are repeated claims of first use of ODE in neural networks, which is frankly false. Though the specific use proposed here may be new, neural networks and ODEs have been used together many times. Clarifying what particular usage of ODE inside this setting is novel would be much better than a broad claim such as \\\"While the presented model, algorithms and results in this paper are the first application of ODEs in neural networks...\\\". Much of this work has been about controlling or solving ODEs, but particularly the setting of Meade Jr. et. al. strongly resembles a \\\"neuron\\\" in this architecture, so a discussion of the relevant differences would be useful. In addition Neural Ordinary Differential Equations allows the end-to-end training of ODEs in larger models, which also closely resembles the use of ODEs here.\", \"artificial_neural_networks_for_solving_ordinary_and_partial_differential_equations___https\": \"//ieeexplore.ieee.org/document/712178/\", \"solution_of_nonlinear_ordinary_differential_equations_by_feedforward_neural_networks___https\": \"//www.sciencedirect.com/science/article/pii/089571779400160X\", \"neural_ordinary_differential_equations___https\": \"//arxiv.org/abs/1806.07366\\n\\nOverall, a stronger focus on empirical results on comparable datasets would be beneficial, especially larger tasks. If larger tasks are not possible, a description of what it may take to \\\"scale up\\\" would be useful. The written focus on novelty detracts from the presentation, and a discussion of neural ODE methods (whether acting as activations, or solvers) would serve as good background material. If compute / performance in low footprints or mobile hardware is a focus, it should be described and tested. If lower parameter count is a perceived benefit, a more direct exploration and discussion of parameter count settings for this architecture and baselines would also be useful. Particularly, hyperparameters become very important in small architectures, so \\\"Dropout probability,\\nbatch size, epochs and learning rate were consistent across all networks\\\" is not a positive (presuming the authors have likely tuned toward their own architecture). Baselines should be given equal treatment and tuning in order to compare \\\"best-on-best\\\" performance.\\n\\nThe description of universal approximation, visualization of the adaptivity of the method, and background are all very nice. 
My concerns come primarily to relation to prior and relevant work, strength of relevant experimentation, and claims of application and novelty / \\\"first past the post\\\".\\n\\n\\u2014\\u2014\", \"minor_nitpicks\": \"\", \"page_1\": \"In the sentence - \\u201cresearchers have introduced highly effective network structures such as convolutional neural networks\\u201d, it seems inconsistent to cite a foundational paper for CNNs and not RNNs.\", \"page_2\": \"It seems like there is a word missing here - \\u201cThe size of a neural network is delineated its number of hidden neurons and their interconnections, which together determine the network\\u2019s complexity\\u201d\\n\\nThere seems to be a missing word in \\u201c3.3 DIFEN IS UNIVERSAL APPROXIMATOR\\u201d . \\n\\nNumerous spelling errors should be corrected - \\n3.1 differentiatial\\n4.1 challnging\\nFigure 1 - fucntion\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Differential Equation Networks\", \"review\": \"This paper proposes an intriguing idea, that of using solutions to a differential equation as activation functions in a neural network. Coefficients of the differential equation (five parameter equations in implementations done) are trainable, realising different activation functions on different nodes. Back propagation is used with the ADMA optimiser to train the parameters of the DE as well as the remaining weights. What the network implements is shown to achieve universal approximation by considering a second order differential equation producing different sinusoidal functions which can add up to a desired function. The paper is written clearly and easy to follow. The idea is novel. The work is illustrated on three problems: (a) a toy dataset, (b) MNIST classification problem and (c) a regression task from a diabetes problem.\\nWhile the idea is novel and the paper is clear, the empirical work presented in the paper does not go far enough to be supportive of its acceptance. Firstly, Tables 1 and 2 do not provide any uncertainty in results. Simply saying accuracy of one method is marginally higher (in the second decimal place in Table 1) than another method is not persuasive. This is particularly so when no training set results are reported. I would strongly urge to report uncertainty coming from cross validation (three fold is too small; the data is large enough to do ten-fold). Second, some sort of error analysis has to be carried out to understand how the improved performance is attributable to the new idea being advanced. Confusion matrices on the classification problem might help. Is there a specific part of the task in which the new method separates characters that the more classic ones fail to do? Similar criticisms apply to the regression task; is the improvement across all examples or localized to some particularly hard ones; this is an issue when comparing (squared) errors because a few outliers can dominate the evaluation/comparison.\\nIn summary, the paper has a novel idea, but has to be better developed in its empirical part.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BJxLH2AcYX | Unsupervised Multi-Target Domain Adaptation: An Information Theoretic Approach | [
"Behnam Gholami",
"Pritish Sahu",
"Ognjen (Oggi) Rudovic",
"Konstantinos Bousmalis",
"Vladimir Pavlovic"
] | Unsupervised domain adaptation (uDA) models focus on pairwise adaptation settings where there is a single, labeled, source and a single target domain. However, in many real-world settings one seeks to adapt to multiple, but somewhat similar, target domains. Applying pairwise adaptation approaches to this setting may be suboptimal, as they would fail to leverage shared information among the multiple domains. In this work we propose an information theoretic approach for domain adaptation in the novel context of multiple target domains with unlabeled instances and one source domain with labeled instances. Our model aims to find a shared latent space common to all domains, while simultaneously accounting for the remaining private, domain-specific factors. Disentanglement of shared and private information is accomplished using a unified information-theoretic approach, which also serves to provide a stronger link between the latent representations and the observed data. The resulting single model, accompanied by an efficient optimization algorithm, allows simultaneous adaptation from a single source to multiple target domains.
We test our approach on three publicly-available datasets, showing that it outperforms several popular domain adaptation methods. | [
"domain adaptation",
"information theoretic",
"target domains",
"unsupervised",
"uda",
"models",
"pairwise adaptation settings",
"single",
"source",
"single target domain"
] | https://openreview.net/pdf?id=BJxLH2AcYX | https://openreview.net/forum?id=BJxLH2AcYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1laTZyC14",
"Byeo_L0dCQ",
"B1eo5rRORm",
"S1gfdmR_0m",
"B1elng0OCQ",
"SkeiWEs52X",
"S1exkKEc2X",
"rJeOjiPF3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544577477022,
1543198323501,
1543198099034,
1543197545701,
1543196839815,
1541219330595,
1541191895646,
1541139360044
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1547/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1547/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1547/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1547/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1547/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1547/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1547/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1547/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes the unique setting of adapting to multiple target domains. The idea being that their approach may leverage commonality across domains to improve adaptation while maintaining domain specific parameters where needed. This idea and general approach is interesting and worth exploring. The authors' rebuttal and paper edits significantly improved the draft and clarified some details missing from the original presentation.\\n\\nThere is an ablation study showing that each part of the model contributes to the overall performance. However, the approach provides only modest improvements over comparative methods which were not designed to learn from multiple target domains. In addition, comparison against the latest approaches is missing so it is likely that the performance reported here is below state-of-the-art. \\n\\nOverall, given the modest experimental gains combined with incremental improvement over single source information theoretic methods, this paper is not yet ready for publication.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Useful problem statement but incremental technical advances with modest empirical improvements\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for the review! To improve the quality of the paper, we have made several adjustments to our paper in accordance with your review.\\n\\n\\\"The contribution is limited since the techniques involved are very common in the domain adaptation\\\".\", \"we_clarify_our_contribution_bellow\": [\"We propose a novel information theoretic (IT) framework based on which a novel adversarial approach for jointly learning the shared/private features for multiple domains is proposed. Moreover, our IT framework naturally leads to utilizing the unlabeled target samples during training that is absent in most of uDA works (We did an ablation study in Sec. 4.3 on how much the information in unlabeled target samples is beneficial to the final performance). To the best of our knowledge, no other work exists that uses this combination of structures for domain adaptation, which we treat as a novel contribution.\", \"Adaptation from one source to multiple target adaptation problem setting has been relatively under explored. To the best of our knowledge, our work is a first work addressing this DA setting, showing that jointly adapting multiple target domains in a clever way offers empirical benefit over naive solutions (combining all target datasets into single one or adapting each source-target separately). We note that Reviewer pointed to another recent work, published on arxiv 10 days prior to ICLR submission deadline, which considers a similar setting. We contrast our approach to theirs in the comment below.\", \"Moreover, our paper offers a new justification of why the popular auto-encoder-based regularization (derived from maximizing the mutual information between the samples and the latent features) and classifier uncertainty minimization (derived from minimizing the mutual information between the target latent features and class labels) can work for domain adaptation from an information theoretic perspective (see Sec. 2.1 for more details).\", \"We conducted extensive experiments with detailed ablation studies on three well-known domain\", \"adaptation benchmarks to validate our approach on multiple domain adaptation, demonstrating the superiority of our model over the state-of-the-art DA methods.\", \"\\\"Desirable to have a discussion and comparison with \\\"Multi-target Unsupervised Domain Adaptation without Exactly Shared Categories\\\"\\\"\", \"Thanks for pointing this out.\", \"First of all, this paper addresses the one source, multiple target domain adaptation in the context of data clustering rather than data classification. Even tough the authors of the mentioned paper use the labels of the source samples and assume that the set of all classes in target domains is a subset of source classes, it is not clear why resort to reporting clustering performance instead of the classification scores.\", \"The paper uses the sparse representation and dictionary learning framework for domain adaptation , making it less scalable to large dataset settings compared to our approach. 
Indeed, this is reflected in their focus on small datasets such as Office or Yale B, which are rarely used in modern domain adaptation evaluations.\", \"Additionally, no comparisons are reported to state-of-the-art deep domain adaptation approaches, such as ADDA, DSN, DTN, DAAN, etc.\", \"Moreover, although this paper claims to address a new setup where the target domains not necessarily share the same categories, it assumes that the source categories contain all target categories. By this assumption, there is no difference between this setup and the standard domain adaptation setup since the classifier is trained based on the labeled source samples and the domain alignment task is independent of the target categories (due to the lack of labels in target domain).\"]}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for the review! To improve the quality of the paper, we have made several adjustments to our paper in accordance with your review.\\n\\n\\\"The proposed loss is a combination of 4 different mutual information. The effectiveness of each one is unclear. An ablation study should be provided\\\".\\n\\nWe provided a detailed ablation study analyzing the effectiveness of each term in our proposed loss function in Sec. 4.3. The conclusion is that disabling each of the model's components leads to degraded performance. More precisely, the average drop by disabling the classifier entropy loss is about 3.5%. Similarly, by disabling the reconstruction loss and the multi-domain separation loss, we have about 4.5% and 22% average drop in performance, respectively. Clearly, by disabling the multi-domain separation loss, the accuracy drops significantly due to the severe data distribution mismatch between different domains. See Sec. 4.3 for more details.\\n\\n\\\"The descriptions for experiments should be improved. I am confused by the experimental settings of MTDA-ITA, c-MTDA-ITA, and s-MTDA-ITA.\\\"\\n\\nThe experimental setups for the c-MTDA-ITA, s-MTDA-ITA and MTDA-ITA results are as follows.\", \"c_mtda_ita\": \"for this case, we consider a dataset (for example SVHN) as the source and combine the others (MNIST,MNIST-M, USPS) into a single target dataset. Hence, this is a standard single source single target domain adaptation, where the target contains multiple datasets without knowing which sample belongs to which dataset (the domain label of the source samples are set to 0, and the domain label of all target samples are set to 1).\", \"s_mtda_ita\": \"in this case, we consider a dataset (for example SVHN) as the source and another one (MNIST) as the target. Thus, this setup also corresponds to a standard single source single target domain adaptation, where the target contains only one dataset.\", \"mtda_ita\": \"for this case, we consider a dataset (SVHN) as source and consider others (MNIST,MNIST-M, USPS) as multiple disjoint target domains. Therefore, this setup corresponds to a novel setting where we adapt jointly multiple target domains.\\nIt should be noted that although for both MTDA-ITA and c-MTDA-ITA, we do domain adaptation for multiple target dataset, for c-MTDA-ITA, we do not have access to the domain labels of the target datasets while for MTDA-ITA, we have access to the target domain labels (we know which target sample belong to which domain). \\n\\n\\\"The meaning of mean classification accuracy\\\".\\n\\nWe use this term to indicate the mean of classification accuracy of five different runs using random initialization. We have included the standard deviation of the reported accuracies to the tables in the paper. Based on the standard deviation results, our model has lower variances than the other competing methods.\\n\\n\\\"It seems the c-MDTA-ITA cannot provide convincing superior performance compared to c-ADDA and c-DTN\\\".\\n\\nThe performance scores reported in the tables indicate that c-MDTA-ITA outperforms c-ADDA (27 out of 32 cases) and c-DTN (22 out of 32 cases). More importantly, one of the contributions of our work is to demonstrate our specific, novel way of simultaneously adapting to multiple target domains offers empirical benefit over naive solutions (combining all target datasets into single domain or adapting each source-target separately in a pair-wise fashion). 
Our experimental results support this claim: On digit experiments, our approach ranks 1 in 9 out of 12 cases, On Multi-PIE dataset, it ranks 1 in 17 out of 20 cases. We also included the average rank of each method over all adaptation pairs to the (last column of) tables. The scores indicate that MDTA-ITA significantly outperforms other competing methods.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for the review! To improve the quality of the paper, we have made several adjustments to our paper in accordance with your review.\\n\\n\\\"The motivation for this new domain adaptation setting is not clear\\\".\\n\\nThe reviewer is correct in her/his assertion that most current works focus on either pairwise or multi-source single-target domain adaptation settings. However, the single-source multi-target setting considered here is strongly connected to many important practical problems, beyond the multi-view adaptation exemplified in this paper. E.g., 1) Simultaneous personalization: Different target domains coincide with subjects in the test set, to which we seek adapt to. Since all subjects exhibit e.g. similar facial expressions, it is natural to exploit data sharing across target domains. 2) Simultaneous adaptation to different scene contexts: one seeks to adapt to e.g. driving scenarios taken under multiple scene conditions such as illumination, seasonality, or weather. As all target domains share common scene objects, simultaneous adaptation can leverage data sharing to improve both individual as well as global adaptation.\\n\\n\\\"The proposed framework is quite similar to DSN, which limits this work's novelty\\\".\\n\\nWhile our framework tackles an entirely different setting (single source, multi-target), it is indeed based on DSN, which itself embodies classical domain separation concepts. We go beyond DSN in several essential ways (see also discussion in Appendix C). Specifically, our loss functions, eqs 8 and 10, contain terms that encourage 1) classifier determination (low entropy, second terms) to suppress prediction of uncertain labels and 2) balanced labeling (last term) to avoid degenerate solutions where all instances in target are assigned to a single class.\\n\\n\\\" Extending DSN with a shared encoder for multiple target domains just like MDTA-ITA\\\".\\n\\nThanks for pointing this out. We modified DSN to have one private encoder for multiple domains called \\\"1p-DSN\\\", and provided its results to the tables. As it was expected, \\\"1p-DSN\\\" results are quite similar to the \\\"c-DSN\\\", where we use one private encoder for the source domain and one private encoder for all target domains. Actually, the only difference between \\\"1p-DSN\\\" and \\\"c-DSN\\\" is that the former contains a single private encoder for all source and target domains, while the latter contains a private encoder for source domain and a private encoder for all target domains. \\n\\n\\\"Authors replace the probability/distribution q(x|z) and q(d|z) with concrete terms, which is technically wrong.\\\"\\n\\nWe used the term ||x-F(z;\\\\phi)|| to represent the variational distributions q( x| z;\\\\phi) as q( x| z;\\\\phi) \\\\propto exp(\\\\| x - F( z;\\\\phi)\\\\|_1). Similarly, we model q( y| z_s) = SoftMax(C( z_s;\\\\theta_c)), q( d|z) = SoftMax( D( z;\\\\psi)), where Softmax(.) denotes the softmax or normalized exponential function. we have revised the Sec. 2.1 to clarify this.\\n\\n\\\"The authors claimed that this work is different from ELBO optimization, but, the right way to optimize the proposed loss in Eqn. (1)(2) is exactly the ELBO.\\\"\\n\\nAlthough both the ELBO and our work use a variational bound to make the computation tractable, by looking at the ELBO objective function H[q( z)] + E_{q( z| x)}[\\\\ln p( x, z)] and our variational bound in Eq. 
3, H( x) + E_p( x, z)[ln q( x | z)], show the following differences (i) in ELBO, we take the expectation of the log joint distribution p(x, z) w.r.t the variational distribution q( z| x) while in Eq.2, we take the expectation of the log variational distribution q( z| x) w.r.t the joint distribution p( x, z). (ii) in ELBO, we compute the entropy of the marginal distribution of the latent features q( z), while in Eq. 3, we compute the entropy of the marginal distribution of the data points p( x).\\n\\n\\\"The features by DSN which also separates private from shared features should be compared.\\\"\\n\\nThanks for pointing this out. We contrasted in detail the visualization of DSN with a single private encoder to our model in Appendix H. Briefly, both MTDA-ITA and 1p-DSN reduce the domain mismatch for the shared features and separate the shared features from private features. On the other hand, MTDA-ITA increases the domain separation for the private features while 1p-DSN is unable to enforce the private representation of different domains to be different, resulting in possible information redundancy across different private spaces.\\n\\n\\\"The presentation is in poor quality, including many typos/grammatical errors. Every citation should be included in a brace. The last equation is not right, where p(y|z_s) should be q(y|z_s)\\\"\\n\\nWe have carefully proofread and updated the manuscript to fix typos and remove grammatical errors, as well as removed duplicate references.\\n\\n\\\"The font in Table 2 is too small to read\\\".\\n\\nWe have moved some of results from tables 1 and 2 to the Appendix F to rectify the issue.\"}",
"{\"title\": \"Revisions to paper\", \"comment\": \"We thank the reviewers for their valuable comments. In addition to streamlining the presentation (e.g., fixing typos, improving clarity), we made the following comprehensive additions:\\n\\n 1. Standard deviations of accuracies are now reported in all tables in the paper.\\n 2. Average rank of each method over all adaptation pairs are now reported in the tables 1, and 2 in the paper.\\n 3. Visualization results demonstrating the contribution of each term in our loss function to Section 4.4.\\n 4. Comparison (prediction accuracy) with a new variation of DSN with a single private encoder, denoted as \\\"1p-DSN\\\", to all the tables. \\n 5. Visualization experiments contrasting DSN with a single private encoder to our models in Appendix H.\\n 6. Discussion highlighting the relationship of our model to Information Theoretic representation learning approaches and multiple domain transfer networks in Appendices A and B, respectively.\\nBelow we address specific comments raised by the reviewers.\"}",
"{\"title\": \"limited novelty\", \"review\": \"In this paper, the authors proposed a new domain adaptation setting for adaptation between single source but multiple target domains. To address the setting, the authors proposed a so-called information theoretic approach to disentangle shared and private features and meanwhile to take advantage of the relationship between multiple target domains.\", \"pros\": [\"This paper conducts comprehensive empirical studies.\"], \"cons\": [\"The motivation for this new domain adaptation setting is not clear to me. In the real world, the domain adaptation between multiple source domains and single target domain is in desperate need, as like human beings an agent may gradually encounter many source domains which could altogether benefit a target domain. However, I do not think that the adaptation between single source and multiple targets is intuitively in need.\", \"The proposed framework is quite similar to DSN, which limits this work's novelty. Though the authors take a large paragraph to illustrate the connections and differences between this work and DSN, I cannot be convinced. Especially during empirical study, the comparison is not fair. The adapted mp-DSN models multiple encoders for multiple target domains, while it is correct to extend DSN with a shared encoder for multiple target domains just like MDTA-ITA.\", \"There are technical flaws. The authors claimed that this work is different from ELBO optimisation, but follows an information theoretical approach. Actually, the right way to optimise the proposed loss in Eqn. (1)(2) is exactly the ELBO. Instead, the authors replace the probability/distribution q(x|z) and q(d|z) with concrete terms, which is technically wrong. such concrete term ||x-F(z;\\\\phi)|| cannot represent a probability/distribution.\", \"In the experiments for feature visualisation, I do not think such comparison with original features makes any sense. The features by DSN which also separates private from shared features should be compared.\", \"The presentation is in a quite poor quality, including many typos/grammatical errors.\", \"The most annoying is the inappropriate citations. Every citation should be included in a brace, e.g. \\\"the same underlying distribution Sun et al. (2016)\\\" -> \\\"the same underlying distribution (Sun et al. (2016))\\\". Please kindly refer to other submissions to ICLR 2019.\", \"Typos: in the beginning of Section 2, \\\"without loss of generalizability\\\" -> \\\"without loss of generality\\\"; in the end of Page 3, the last equation is not right, where p(y|z_s) should be q(y|z_s).\", \"The font in Table 2 is too small to read.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An information theoretical approach for novel multi-target domain adaptation, but not well justified.\", \"review\": \"This paper investigates multi-target domain adaptation which is an unexplored domain adaptation scenario compared with adapting single/multiple source to single target. A mutual information-based loss is proposed to encourage part of the features to be domain-specific while the other part to be domain-invariant. Instead of optimizing the proposed loss which is intractable, this work proposes to use neural network to model the relative functions and optimize proposed loss\\u2019 lower bound by SGD.\", \"method\": \"The proposed loss has an explanation from information theory, which is nice. However, the proposed loss is a combination of 4 different mutual information. The effectiveness of each one is unclear. An ablation study should be provided.\", \"clarity\": [\"The presentation should be improved, especially in the descriptions for experiments.\", \"Typo: Section 4: TanH should be Tanh\", \"Duplicated reference: Konstantinos Bousmalis, Nathan Silberman, David Dohan, Dumitru Erhan, and Dilip Krishnan. Unsupervised pixel-level domain adaptation with generative adversarial networks. In CVPR, July 2017a.\"], \"results\": [\"I am confused by the experimental settings of MTDA-ITA, c-MTDA-ITA, and c-MTDA-ITA. s-MTDA-ITA. I understand c-MTDA-ITA is to combine all the target domains into a single one and train it using MTDA-ITA. And s-MTDA-ITA is to train multiple MTDA-ITA separately, where each one corresponds to a source-target pair. But I am confused by the MTDA-ITA results in both table 1 and table 2. Could the authors provide some explanation for MTDA-ITA?\", \"For the metric in digits adaptation, the standard metric is classification accuracy. The authors use mean classification accuracy. Is this the mean of classification accuracy of multiple runs? If so, authors should provide the standard deviation. If this is the average per-class accuracy, this is different from standard routine in ADDA, CORAL, etc.\"], \"concerns\": \"The effectiveness of MDTA-ITA, s-MDTA-ITA and c-MDTA-ITA are not convincing. From the experiments, it seems the c-MDTA-ITA cannot provide convincing superior performance compared to c-ADDA and c-DTN.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A Borderline Paper: the setting is interesting but the proposed approach is incremental.\", \"review\": \"The biggest contribution is the setting part, where one seeks to adapt one source to multiple, but somewhat similar, target domains. It is interesting to explore such direction since in many real-world applications, applying the model to many different target domains are required.\\n\\nIt is also noted that there is one very related work \\\"Multi-target Unsupervised Domain Adaptation without Exactly Shared Categories\\\" available online (https://arxiv.org/pdf/1809.00852.pdf). It is desirable to have a discussion and comparison with them since they are doing Multi-target Unsupervised Domain Adaptation. In their method, the exact shared category is even not required. \\n\\nFor the algorithm part, authors basically adopt the information-theoretic approach to handle the proposed method. This part contribution is limited since the techniques involved are very common in the domain adaptation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1GIB3A9YX | Explicit Recall for Efficient Exploration | [
"Honghua Dong",
"Jiayuan Mao",
"Xinyue Cui",
"Lihong Li"
] | In this paper, we advocate the use of explicit memory for efficient exploration in reinforcement learning. This memory records structured trajectories that have led to interesting states in the past, and can be used by the agent to revisit those states more effectively. In high-dimensional decision making problems, where deep reinforcement learning is considered crucial, our approach provides a simple, transparent and effective way that can be naturally combined with complex, deep learning models. We show how such explicit memory may be used to enhance existing exploration algorithms such as intrinsically motivated ones and count-based ones, and demonstrate our method's advantages in various simulated environments. | [
"Exploration",
"goal-directed",
"deep reinforcement learning",
"explicit memory"
] | https://openreview.net/pdf?id=B1GIB3A9YX | https://openreview.net/forum?id=B1GIB3A9YX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkxufGbxgN",
"rkx4PAj7JN",
"rJlV2kj71E",
"S1xeGRqQ14",
"Sym2j9myE",
"rJx7li5XJE",
"Hyg-gbgiA7",
"HJeylmr62X",
"r1e86jUEo7",
"Skg08eT-s7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544716816435,
1543908955578,
1543905195811,
1543904775934,
1543904170515,
1543903978768,
1543336169046,
1541391079153,
1539759037699,
1539588182086
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1546/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1546/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1546/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1546/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1546/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1546/Authors"
],
[
"~Anirudh_Goyal1"
],
[
"ICLR.cc/2019/Conference/Paper1546/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1546/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1546/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents an explicit memory that directly contributes to more efficient exploration. It stores trajectories to novel states, that serve as training data to learn to reach those states again (through iterative sub-goals).\\n\\nThe description of the method is quite clear, the method is not completely novel but has some merit. Most weaknesses of the paper come from the experimental section: too specific environments/solutions, lack of points of comparisons, lacking some details.\\n\\nWe strongly encourage the authors to add additional experimental evidence, and details. In its current form, the paper is not sufficient for publication at ICLR 2019.\\n\\nReviewers wanted to note that the blog post from Uber (\\\"Go-Explore\\\") did _not_ affect their evaluation of this paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting research direction but weak paper\"}",
"{\"title\": \"Comparison with Go-Explore\", \"comment\": \"Recently, a similar method is published on uber\\u2019s website (https://eng.uber.com/go-explore/), which they called the go-explore method. Their results are very promising on both Montezuma\\u2019s Revenge and Pitfall, two of the hardest exploration tasks in Atari games.\\n\\nWhile we share the similar 3-stage exploration structure, there are several differences.\\n1. As they assume the environment is resettable/deterministic during training, they can utilize the ability of reset to quickly return to a state the agent want. Instead, we do not rely on the assumption, which brings significant hardness while reaching an intended state, and is the major reason why our performance is not as good as theirs.\\n2. When the training environment is stochastic (in our montezuma\\u2019s setting), they propose to use goal-conditioned policy, which is exactly what we are doing. Furthermore, we also propose to sample sub-goals from the trajectory.\"}",
"{\"title\": \"Our Response\", \"comment\": \"Thanks for your comments and suggestions.\\n\\n1. Comparison with previous methods.\\nDifferent from \\u201cSelf-Imitation Learning\\u201d, our agent uses explicit memory to help exploration, whose advantages are described in the last paragraph of page 1. \\nThe major difference between our method and \\\"Automatic Goal Generation for Reinforcement Learning Agents\\\" is the way to generate the goals and is stated in details in Section 4.2.\\nThe notion of curiosity defined in \\u201cCuriosity-driven exploration by self-supervised prediction\\u201d is employed in our framework for the exploration. Comparison has been made with the ICM model proposed in this paper. See Figure 1-3 and 5.\\n\\n2. The definitions.\\nThe details about Rooms environment can be found in Appendix A. The visit_times[x] means the number of times the cell x is being visited by the agent, accumulated throughout the training process. In the Rooms environment, each cell has a type of empty/wall/border, The stage avg reward is used as a metric for evaluating the trajectories, whose details can be found in Appendix B. We will try to integrate these definitions into the main text.\\n\\n3. Performance in stochastic setting\\nBoth Montezuma\\u2019s Revenge and PrivateEye environments are stochastic: each action leads to 2~4 frame-skips randomly. Our method outperforms the curiosity baseline in both environments. As for random starting states, please see the response to AnonReviewer3 (A. About the same-start assumption)\\n\\n4. Clarifications\\nA. We would change the words to make it more clear. Here what we mean is that the inverse dynamics provides a feature space that ignores the noise which the agent cannot control (e.g. white noise in visual input) (as suggested by Pathak et al., 2017).\\nB. When there are multiple actions leading to the same next state s\\u2019, the inverse dynamics would have multiple answers. This is what we mean \\u201cambiguous\\u201d.\\nC. In Fig2, the accuracy is the number of cases that the output \\\\hat{a} leads to the desired next state s\\u2019, that is, env(s, hat(a)) = s\\u2019.\\nD. (cos+1)^3/8 is chosen empirically, used for modeling the similarity between states.\"}",
"{\"title\": \"Our Response\", \"comment\": \"Thanks for your comments and suggestions, and we will revise the paper as you suggested.\", \"1\": \"About the same-start assumption.\\nWe discuss the starting states in four cases.\\nA. The starting states are always the same, which is our the assumption.\\nB. There is a small randomness (noise) for the starting state. Path function can handle this: after choosing a goal state from a trajectory, Path function will generate a trajectory from the current starting state to the goal state.\\nC. There are multiple possible starting states. New episodes can start in the same states as *some* (not all) previous episodes: the agent can simply remember successful trajectories and apply our algorithm to distinct start states separately.\\nD. If the starting states are too far away (or randomly given) and no assumption is made about their relation/similarity, little can be expected to take advantage of former trajectories, even for humans.\", \"2\": \"Experiment details\\nA. The number of seeds is 2 for experiments in Rooms environment, 3 for Atari Games. \\nB. Re Fig2: better exploration in RL is expected to lead to a faster learning curve, not necessarily a better final model. Fig 2 shows exactly this: our method learns faster than the baseline, without sacrificing the final model performance.\\nC. Re Fig3: as the destination are very far away from the starting point (see Appendix A.2), agents\\u2019 score would be almost 0 if the destination could not be found during the exploration. The Zigzag-shaped rooms environment requires the agents to explore the full map to reach the destination. The results are consistent with Fig1 showing the exploration efficiency.\", \"3\": \"Clarification on technical details.\\nWe apologize for any confusions in the paper and will improve the writing. Specific questions by the reviewer are addressed as the following:\\nOn the first sentence of Section 3.2.2. While Path function (we regard it as skills) is being trained independently with the task, it can be applied on any other tasks. For example, in Zero-shot Visual Imitation [1], the goal-conditioned policy is used to follow a sequence of key-points demonstration.\\nIn Fig 4, the second row shows the number of times each state being chosen as the target state (the last state of a selected trajectory). This number is illustrated as a heatmap with log-scale.\\n\\n[1] Deepak Pathak, Parsa Mahmoudieh, Guanghao Luo, Pulkit Agrawal, Dian Chen, Yide Shentu, Evan Shelhamer, Jitendra Malik, Alexei A. Efros, and Trevor Darrell. Zero-shot visual imitation. In ICLR, 2018.\"}",
"{\"title\": \"Our Response\", \"comment\": \"Thanks for your encouraging comments and nice suggestions. We plan to update the figures in the paper upon the decision. We will also integrate Appendix C and the cosine metrics into the main text.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for your pointers to the related works. We will definitely add them to our references and compare with them upon the decision.\"}",
"{\"comment\": \"Hello,\\n\\nI just came across your paper. I think few other papers should be cited, which also tries to use explicit memory in terms of high value states or goal states or high bellman error.\\n\\n[1] Recall Traces, https://arxiv.org/abs/1804.00379 (I'm the author of this paper)\\n[2] Self Immitation learning, https://arxiv.org/abs/1806.05635\\n[3] Neural episodic control https://arxiv.org/abs/1703.01988\\n\\nThanks for your time! :)\", \"title\": \"More references\"}",
"{\"title\": \"Good idea, good demonstration, good score\", \"review\": \"This paper is the first showing that achieving self-generated tasks during spontaneous exploration and getting reinforced by self-supervised signals is a promising way for the agent to develop skills itself.\\nThe scores are demonstrative on several tasks.\\nIt opens interesting direction for further research.\", \"rem\": \"few typos like \\\"An state\\\"\\nPlease plot in dash the count methods in the graphs (use oracle information)\\n\\nAnnexe C shall be integrated into the core of the paper. Could be simplified.\\nThe cosine metrics shall be better integrated in it.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, but rather weak paper. Can be improved a lot with additional writing effort\", \"review\": \"In this paper, the authors propose an exploration strategy based on the explicit storage and recall of trajectories leading to novel states. A pool of such trajectories is managed over time, and a method is proposed so that the agent can learn how to follow a path corresponding to these trajectories so as to explore novel states. The idea is demonstrated in a set of room experiments, and quickly shown efficient in Montezuma's Revenge and PrivateEye Atari games.\\n\\nOverall, the idea has some merits, but the empirical study is weak and the paper suffers from unsufficient writing effort (or more probably time).\", \"what_i_like_most_in_the_paper_is_the_split_of_exploration_methods_into_3_categories\": [\"adding some \\\"intrinsic reward\\\" bonuses to novel states (curiosity-driven exploration) , trying to reach various goals (goal-driven exploration) and using memory to reach again novel states (memory-driven exploration). Actually, this split may be debated. For instance, some frameworks based on goals have been labelled curiosity-driven, e.g. \\\"Curiosity-Driven Exploration of Learned Disentangled Goal Spaces\\\" (Laversanne-Finot, P\\u00e9r\\u00e9 and Oudeyer, CoRL 2018), but anyways I find it useful. That said, this aspect of the introduction is reiterated in the \\\"Related Work\\\" section in a quite redundant way, whereas both parts could have been better integrated. Furthermore, the related work section is hardly a draft, I'll come to that later.\", \"The presentation of the method in Section 2 is rather clear and convincing. My only concern is about the assumption that the agent is always starting in the same state. This assumption may not hold in many settings, and the approach appears to be quite dependent on it. A discussion of how it could be extended to a less restricting assumption would be welcome.\", \"The experimental section is weaker. A few concerns:\", \"I could not find much about the number of seeds, trials, the way to depict some variance, the statistical significance of the differences between results presented in Figure 1. The same is true about Figs. 2, 3 and 5.\", \"In Fig.2, the claim that the author's method learns better models is hardly supported by the left-hand-side plot, and significance is not assessed.\", \"I'm puzzled about the very low performance of baselines in the plots of Fig. 3. Could the author explain why these performances are null.\", \"The Atari games section helps figuring out that the framework is not too specific of the rooms environment, but the lack of analysis does not help making sure that this is just the explicit recall mechanism that is responsible for superior performance and why.\", \"Another point about this section is that poor writing does not help understanding some points.\", \"to me, the first sentence of Section 3.2.2 does not make sense at all.\", \"in the caption of Fig. 4, \\\"The second row is the heatmaps for states that the number of times being selected as a target state.\\\": I don't get what it means, thus I don't understand what that row shows.\", \"Fig.5 comes with no caption\"], \"about_the_related_work\": [\"The comparison to other methods using memory needs to be expanded. 
In particular, I would put HER-like mechanisms here rather than in 4.1, as \\\"explicit recall\\\" shares some importan ideas with \\\"experience replay\\\"\", \"Section 4.4 (HRL) is not useful as is.\", \"Finally, in the conclusion, the claim that the method can be combined with \\\"many sota exploration methods\\\" is not supported, as the authors have only tried two and did not analyse the results in much details.\"], \"typos\": \"- p4:\\nwe can easily counting\\n(include borders) => including\\nis provide => provided\", \"are_less_less_visited_states\": \"quite inelegant\\n\\n- p7:\\nIn Montezuma's Revenge, Comparing => comparing\\nWhere they encourage => remove \\\"Where\\\"\\n\\n- p8:\\nrecallcan => recall can\\nthe problem of reach goals => reaching\\nit succesfully reproduce => reproduces\\n\\nThe last paragraph of Section 4.2 needs a careful rewriting, as long sentences with parenthese in the middle appear to be some draft version.\\n\\ncontrol(Pritzel => Missing space\\nOur method use memory => uses\\nAlthough ..., but => remove but\\n\\nThe path function can be seen as a form of skills => skill?\\nBesides, the \\\"can be seen\\\" needs to be further explained...\\n\\nAppendix\\n\\nFinally, we provided => provide\\n\\nis around (math formula) => cannot you be more specific?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Many hacks and heuristics.\", \"review\": \"In this paper, the authors propose a heuristic method to overcome the exploration in RL. They store trajectories which result in novel states.\\nThe final state of the trajectory is called goal state, and the authors train a path function which given a state and a subgoal states (some states in the trajectory) the most probably action the agent needs to take to reach the subgoal. These way they navigate to the goal state. The goal state is claimed to be achieved if the feature representation stoping state is close to goal (or subgoal for subgoal navigation).\\n\\n\\nThe authors mainly combine a few previous approaches \\\"Self-Imitation Learning,\\\" \\\"Automatic Goal Generation for Reinforcement Learning Agents,\\\" and \\\"Curiosity-driven exploration by self-supervised prediction\\\" to design this algorithm which makes this approach less novel.\\n\\nGeneral comment; there are variable and functions in the paper that are not defined, at least at the time, they have been used. The Rooms environment is not described. What is visit_times[x] and x is not a wall? What is stage avg reward? and many others\\n\\nThe main idea of the algorithm is clear, but the description of the pieces is missing.\\n\\nIt is not clear in stochastic setting how well this approach will perform. \\n\\nThe authors state that\\n\\\"Among different choices of the modeling, we choose inverse dynamics (Pathak et al., 2017) as the environment model, which has been proved to be an effective way of representing states under noisy environments.\\\"\\nI took a look at this paper and could not find neither proof or quantification of \\\"effective\\\"-ness. Please clarify what the meaning this statement is.\\n\\nWhy s=s' is ambiguous to the inverse dynamics?\\n\\nWhat is the definition of acc in fig2?\\n\\nwhy (consin+1)^3/8 is chosen?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rJl8BhRqF7 | Improving machine classification using human uncertainty measurements | [
"Ruairidh M. Battleday",
"Joshua C. Peterson",
"Thomas L. Griffiths"
] | As deep CNN classifier performance using ground-truth labels has begun to asymptote at near-perfect levels, a key aim for the field is to extend training paradigms to capture further useful structure in natural image data and improve model robustness and generalization. In this paper, we present a novel natural image benchmark for making this extension, which we call CIFAR10H. This new dataset comprises a human-derived, full distribution over labels for each image of the CIFAR10 test set, offering the ability to assess the generalization of state-of-the-art CIFAR10 models, as well as investigate the effects of including this information in model training. We show that classification models trained on CIFAR10 do not generalize as well to our dataset as they do to traditional extensions, and that models fine-tuned using our label information are able to generalize better to related datasets, complement popular data augmentation schemes, and provide robustness to adversarial attacks. We explain these improvements in terms of better empirical approximations to the expected loss function over natural images and their categories in the visual world. | [
"image classification",
"human experiments",
"risk minimization"
] | https://openreview.net/pdf?id=rJl8BhRqF7 | https://openreview.net/forum?id=rJl8BhRqF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1xywymlxV",
"SJgtofrqnm",
"r1gzsT493Q",
"HJl5F7-53m",
"Syg4O1pYcm",
"rkeWn1Q4qX",
"HJgVn7UX9Q"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review",
"comment",
"official_comment",
"comment"
],
"note_created": [
1544724311194,
1541194401190,
1541193114076,
1541178241646,
1539063660233,
1538695081087,
1538642860378
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1545/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1545/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1545/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1545/AnonReviewer2"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1545/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a new annotation of the CIFAR-10 dataset (the test set) as a distribution over labels as opposed to one-hot annotations. This datasets forms a testbed analysis for assessing the generalization abilities of the state-of-the-art models and their robustness to adversarial attacks.\\n \\nAll the reviewers and AC acknowledge the contribution of dataset annotation and that the idea of using label distribution for training the models is sound and should improve the generalization performance of the models.\", \"however_the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"(1) the paper requires major improvement in presentation clarity and in-depth investigation and evidence of the benefits of the proposed framework \\u2013 see detailed comments of R3 on what to address in a subsequent revision; see the suggestions of R2 for improving the scope of the empirical evaluations (e.g. distortions of the images, incorporating time limits for doing the classifications) and the requests of R1 for clarifications; (2) the related work is inadequate and should be substantially extended \\u2013 see the related references suggested by the R2; also R1 rightly pointed out that two out of four future extensions of this framework have been addressed already, which questions the significance of findings in this submission.\\nThe R2 raised concerns that the current evaluation is missing comparisons to a) the calibration approaches and b) cheaper/easier ways of getting soft labels -- see R2\\u2019s suggestion to use the Brier score for model calibration and to use a cost matrix about how critical a misclassification is (cat <-> dog, versus cat <-> car) as soft labels.\\nAmong these, (2) did not have a substantial impact on the decision, but would be helpful to address in a subsequent revision. However, (1) and (3) makes it very difficult to assess the benefits of the proposed approach, and was viewed by the AC as a critical issue.\\n \\nThere is no author response for this paper. The reviewer with a positive view on the manuscript (R3) was reluctant to champion the paper as the authors have not addressed the concerns of the other reviewers (no rebuttal).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}",
"{\"title\": \"Interesting idea, but the empirical investigation seems lacking\", \"review\": \"The authors create a new dataset with label distributions (rather than one-hot annotations) for the CIFAR-10 test set. They then study the effect of fine-tuning using this dataset on the generalization performance of SOTA deep networks. They also study the effects on adversarial robustness.\\n\\nI think that datasets such as the one generated in this paper could indeed be a valuable testbed to study deep network generalization and robustness. There are many nice benefits of label distributions over one hot labels (that the authors summarize in Section 2.) The paper is also clear and well-written. \\n\\nThat being said, I do not find the investigation of this paper completely satisfactory. For instance in the generalization experiments, the numbers presented seem to show some interesting (and somewhat surprising) trends, however the authors do not really pursue these or provide any insight as to why this is the case. I also find the section on robustness very weak.\", \"detailed_comments\": \"- The theoretical contribution mentioned in the appendix does not really seem to be a contribution - it is just a simple derivation of the loss under label distributions. Theoretical contributions are not necessary for a paper to have merit - the authors should remove this statement from the introduction as it detracts from the value of the paper.\\n\\n- I find it somewhat surprising that the accuracy of the models does not change on training with Cifar10H. Do the authors have any intuition as to why this is the case? The model cross entropy seems to go down, indicating that probability assigned to the correct class increases. I would think that training with label distributions would actually reduce misclassification on confusing instances. It would be interesting to see how the logit distributions change for different examples. For instance, how does the model confidence change on correctly vs wrongly classified examples?\\n\\n- The authors mention that they run each hyperparameter configuration for three random seeds. It would be nice then to see error bars for the results reported Tables 1 and 2, particularly because the differences in accuracy are small. Did the authors try different train-test splits of the test set? It would also be helpful if the authors could make plots for the results in these tables (at least in the appendix). It is hard to compare numbers across different tables.\\n\\n-I find the results in Table 2 confusing. Comparing the numbers to Table 1, it seems that mixup does not really change accuracies/loss. The model names in Table 2 do not exactly match Table 1 so it is hard to identify the additional gain from using mixup that the authors mention. The authors should add plots for these results to illustrate the effect of adding mixup more clearly.\\n\\n-I am not convinced by the section on robustness. Firstly, it is not clear to me why the authors chose FGSM which is known to be a somewhat simple attack to illustrate improved robustness of their model. To perform a useful study of robustness, the authors should study SOTA attacks such as PGD [Madry et al., 2017]. I also do not understand the claim that the top-1 choice becomes less confident after training with CIFAR10H -- this seems to be contradicted by the fact that the cross entropy loss goes down. The authors should provide supporting evidence for this claim by looking at changes in confidence (see point 3 above). 
Also, the comment about the trade-off between accuracy and robustness seems vague - could the authors clarify what they mean?\\n\\nOverall, I like the premise of this paper and agree that with the potential benefits of the dataset generated. However, I think that the current experiments are not strong enough to corroborate this.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Multiple GT labels\", \"review\": \"The authors propose to improve classification accuracy in a supervised learning framework, by providing richer ground truth in the form a distribution over labels, that is not a Dirac delta function of the label space. This idea is sound and should improve performance.\\n\\nUnfortunately this work lacks novelty and isn't clearly presented.\\n(1) Throughout the paper, there are turns that used without definition prior to use, all table headers in table 1. \\n(2) Results are hard to interpret in the tables, and there are limited details. Mixup for example, doesn't provide exact parameters, but only mentions that its a convex sum.\\n(3) There is no theoretical justification for the approach.\\n(4) This approach isn't scalable past small datasets, which the authors acknowledge. \\n(6) This has been already done. In the discussion the authors bring up two potential directions of work:\\n (a) providing a distribution over classes by another model - > this is distillation (https://arxiv.org/abs/1503.02531)\\n (b) adding a source of relationships between classes into the objective function -> this is (https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/42854.pdf)\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Official Review: Not fully motivated and related to previous work on learning with class uncertainty and calibration\", \"review\": \"The paper presents a new version of CIFAR10 that is labelled by multiple people (the test part of the data). They use it to improve the calibration of several image classifiers through \\u201cfine-tuning\\u201d and other techniques\\nThe title is too general, taking into account that this setting has appeared in classification in many domains, with different names (learning from class distributions, crowd labellers, learning from class scores, etc.). See for instance,\", \"https\": \"//www.ncbi.nlm.nih.gov/pmc/articles/PMC3994863/\", \"http\": \"//www.cs.utexas.edu/~atn/nguyen-hcomp15.pdf\\nAlso, at the end of section 2 we simply reach logloss, which is a traditional way of evaluating the calibration of a classifier, but other options exist, such as the Brier score. At times, the authors mention the trade-off between classification accuracy and cross-entropy. This sounds very much the trade-off between refinement and calibration, as one of the possible decompositions of the Brier score.\\nThe authors highlight the limitations of this work, and they usually mention that the problem must be difficult (e.g., low resolution). Otherwise, humans are too good to be useful. I suggest the authors to compare with psychophysics and possible distortions of the images, or time limits for doing the classifications. \\nNevertheless, the paper is not well motivated, and the key procedures, such as \\u201cfine-tuning\\u201d lack detail, and comparison with other options.\\nIn section 2, which is generally good and straightforward, we find that p(x|c) being non-overlapping as a situation where uncertainty would be not justified. Overlap would simply say that it is a categorisation (multilabel classification) problem rather than a classification problem, but this is different from the situation where labels are soft or given by several users. \\nIn the end, the paper is presented from the perspective of image recognition, but it should be compared with many other areas in classification evaluation where different metrics, presentation of the data, levels of uncertainty, etc., are used, including different calibration methods, as alternatives to the expensive method presented here based on crowd labelling.\", \"pros\": [\"More information about borderline cases may be useful for learning. This new dataset seems to capture this information.\"], \"cons\": [\"The extra labelling is very costly, as the authors recognise.\", \"The task is known in the classification literature, and a proper comparison with other approaches is required.\", \"Not compared with calibration approaches or other ways where boundaries can be softened with less information from human experts. For instance, a cost matrix about how critical a misclassification is considered by humans (cat <-> dog, versus cat <-> car) could also be very useful, and much easier to obtain.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"comment\": \"Thanks. I would like to use your data/code and I'm waiting for the release.\", \"title\": \"That's nice\"}",
"{\"title\": \"Dataset release\", \"comment\": \"Thanks for your interest. We are indeed planning to release the dataset (both the aggregate and individual human responses) as soon as the paper lands. We also intend to release some code and trained models. It's intended to serve as a new benchmark, target of research questions, and even a training/tuning dataset.\"}",
"{\"comment\": \"It is a neat idea to, for CIFAR classification problem, create soft labels by collecting human supervision, which should be useful to improve generalization ability. Do you have any plan to release the data CIFAR10H?\", \"title\": \"Interesting data\"}"
]
} |
|
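The CIFAR-10H record above turns on two objectives for human label distributions: the log loss that section 2 of the paper reaches, and the Brier score that R2 suggests as an alternative calibration measure. For concreteness, here is a minimal generic sketch of both in PyTorch; this is not code released with the paper, and the function names are illustrative:

```python
import torch
import torch.nn.functional as F

def soft_label_cross_entropy(logits, label_dist):
    # Cross-entropy against a full label distribution (e.g. CIFAR10H targets).
    # logits: (batch, classes); label_dist: (batch, classes), rows sum to 1.
    log_probs = F.log_softmax(logits, dim=-1)
    return -(label_dist * log_probs).sum(dim=-1).mean()

def brier_score(logits, label_dist):
    # Squared-error calibration measure suggested by R2 as an alternative.
    probs = F.softmax(logits, dim=-1)
    return ((probs - label_dist) ** 2).sum(dim=-1).mean()
```

With one-hot rows the first objective reduces to the standard cross-entropy loss; with soft targets, cross-entropy and top-1 accuracy can move independently, which matches the decoupling the reviewers probe above.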
Byl8BnRcYm | Capsule Graph Neural Network | [
"Zhang Xinyi",
"Lihui Chen"
] | The high-quality node embeddings learned from Graph Neural Networks (GNNs) have been applied to a wide range of node-based applications, and some of them have achieved state-of-the-art (SOTA) performance. However, when applying node embeddings learned from GNNs to generate graph embeddings, the scalar node representation may not suffice to preserve the node/graph properties efficiently, resulting in sub-optimal graph embeddings.
Inspired by the Capsule Neural Network (CapsNet), we propose the Capsule Graph Neural Network (CapsGNN), which adopts the concept of capsules to address the weaknesses in existing GNN-based graph embedding algorithms. By extracting node features in the form of capsules, the routing mechanism can be utilized to capture important information at the graph level. As a result, our model generates multiple embeddings for each graph to capture graph properties from different aspects. The attention module incorporated in CapsGNN is used to tackle graphs of various sizes, which also enables the model to focus on critical parts of the graphs.
Our extensive evaluations with 10 graph-structured datasets demonstrate that CapsGNN has a powerful mechanism that captures macroscopic properties of the whole graph in a data-driven fashion. It outperforms other SOTA techniques on several graph classification tasks, by virtue of this new mechanism. | [
"CapsNet",
"Graph embedding",
"GNN"
] | https://openreview.net/pdf?id=Byl8BnRcYm | https://openreview.net/forum?id=Byl8BnRcYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJxw7Rz3FE",
"BkgugyPUuV",
"S1lMPzS8u4",
"B1eVSzvZe4",
"HyxlLbgy14",
"HJgaY4upCX",
"BylgrNNTCX",
"rJlwCtK5nm",
"S1gMPXrcn7",
"ByxrZRED3m"
],
"note_type": [
"official_comment",
"official_comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1554947615381,
1553522415548,
1553515098127,
1544806972451,
1543598407925,
1543500933309,
1543484472507,
1541212622997,
1541194586349,
1540996604958
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1544/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1544/Authors"
],
[
"~Benedek_Rozemberczki1"
],
[
"ICLR.cc/2019/Conference/Paper1544/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1544/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1544/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1544/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1544/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1544/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1544/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Implementation\", \"comment\": \"The Tensorflow implementation is available at https://github.com/XinyiZ001/CapsGNN\"}",
"{\"title\": \"Number of epoch\", \"comment\": \"Hi, just as what is mentioned in the paper. The exact number of epochs depends on the validation accuracy.\\n\\nWhen I did the experiments, I conducted 10-fold cross validation and for each fold I will run enough and a same number of epochs so that the models are overfitting on almost all the validation folds. Then I chose the testing accuracy of the model which achieves the highest accuracy on corresponding validation fold as the final reported accuracy. \\n\\nSo here, I can provide you the largest number of epochs I set for each dataset and you can find the exact number of epochs based on your validation data:\", \"mutag\": \"2000\", \"enzymes\": \"3000\", \"proteins\": \"2000\\n D&D: 300\", \"nci1\": \"1500\", \"collab\": \"300\", \"imdb_b\": \"2000\", \"imdb_m\": \"2000 (but usually reach highest within 500 epochs)\", \"reddit_m5k\": \"150 (can try more epochs)\", \"reddit_m12k\": \"150 (can try more epochs)\"}",
"{\"comment\": \"I tried to implement the paper in PyTorch: https://github.com/benedekrozemberczki/CapsGNN.\\n\\nWhat was the actual number of epoch usually used in the paper?\", \"title\": \"Implementation\"}",
"{\"metareview\": \"AR1 asks for a clear experimental evaluation showing that capsules and dynamic routing help in the GCN setting. After rebuttal, AR1 seems satisfied that routing in CapsGNN might help generate 'more representative graph embeddings from different aspects'. AC strongly encourages the authors to improve the discussion on these 'different aspects' as currently it feels vague. AR2 is initially concerned about experimental evaluations and whether the attention mechanism works as expected, though, he/she is happy with the revised experiments. AR3 would like to see all biological datasets included in experiments. He/she is also concerned about the lack of ability to preserve fine structures by CapsGNN. The authors leave this aspect of their approach for the future work.\\n\\nOn balance, all reviewers felt this paper is a borderline paper. After going through all questions and responses, AC sees that many requests about aspects of the proposed method have not been clarified by the authors. However, reviewers note that the authors provided more evaluations/visualisations etc. The reviewers expressed hope (numerous times) that this initial attempt to introduce capsules into GCN will result in future developments and improvements. While AC thinks this is an overoptimistic view, AC will give the authors the benefit of doubt and will advocate a weak accept.\\n\\nThe authors are asked to incorporate all modifications requested by the reviewers. Moreover, 'Graph capsule convolutional neural networks' is not a mere ArXiV work. It is an ICML workshop paper. Kindly check all ArXiV references and update with the actual conference venues.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"The reviewers hope to see further improvements.\"}",
"{\"title\": \"Review update\", \"comment\": \"Thanks for the revised version and the comments. I will update my review accordingly.\"}",
"{\"title\": \"Review updated\", \"comment\": \"Thanks for providing a revised version with additional experimental results. I have updated my review accordingly.\"}",
"{\"title\": \"WL is computational efficient\", \"comment\": \">> So kernel-based algorithms require computationally intensive preprocessing effort especially when processing large datasets.\\n\\nI would like to note that WL is computational efficient and scales to large data sets when using a linear SVM. Moreover, the WL optimal assignment kernel [1] provides better accuracy results on many data sets. But I agree that WL in many cases still is a fair baseline.\\n\\n[1] On Valid Optimal Assignment Kernels and Applications to Graph Classification\\nNils M. Kriege, Pierre-Louis Giscard, Richard C. Wilson, NIPS 2016.\"}",
"{\"title\": \"The proposed CapsGNN is original and achieves good results on some datasets; Some more discussions may further help.\", \"review\": \"This paper was written with good quality and clarity. Their idea was original and experiment results show the proposed CapsGNN is effective in large graph data analysis, particularly on graphs with macroscopic properties.\", \"pros\": \"1) The paper makes a clear and detailed comparison between the proposed CapsGNN and the related models in section 3.2.\\n\\n2) Use of capsules nets and routing in CapsGNN are close to that in the original CapsNet, with the core characteristics (and potential advantages) of capsules and dynamic routing being perserved in the proposed CapsGNN to handle the targeted problem. \\n\\n3) The comparison and model analysis are thorough and comprehensive.\", \"cons_or_unclear_points\": \"1) Why the paper does not include all biological datasets (6 datasets in total, only 4 used in this papaer) presented in (Verma & Zhang, 2018) in the experiment section. The experiments in Verma & Zhang, (2018) show that the GCAPS-CNN achieved SOTA results on nearly all biological datasets. Does GCAPS-CNN outperformed CapsGNN on biological datasets? It will be nice if there is comparison on more datasets and more analysis is provided between CapsGNN and GCAPS-CNN.\\n\\n2) Why CapsGNN is not suitable for preserving information of fine structures? Can the authors give more explanation and discussions?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Capsule networks for graphs without convincing motivation and experimental evaluation\", \"review\": \"The authors provide an architecture that applies recent advances in the field of capsule networks in the graph neural network domain. First, hierarchical node level capsules are extracted using GCN layers. Second, after weighting each capsule by the output of a proposed attention module, graph level capsules are computed by performing a global dynamic routing. These graph level capsules are used for training a capsule classifier using a margin loss and a reconstruction loss.\\n\\nThe general architecture seems to be a reasonable application of the capsule principle in the graph domain, following the proof of concept MNIST architecture proposed by Sabour et al.\\n\\nMy main concern is that I have problems grasping the motivation behind using capsules in the given scenario. Besides an unprecise motivation in the introduction, there is no clear reason why the routing mechanism helps with solving the given tasks. Capsule networks capture pose covariances by applying a linear, trainable transformation to pose vectors and computing the agreement of the resulting votes. It is not clear to me how discrete information like graph connectivity can be encoded in a pose vector so that linear transformations are able to match different \\\"connectivity poses\\\".\\n\\nIs there a more formal argument that explains why capsules should be able to capture more information about the input graph than other GCNNs?\\n\\nAlso, some design choices seem to be quite arbitrary. One example is using the last feature maps of the GCN as positions for coordinate addition. Is there a theoretical/intuitive motivation for this?\\n\\nResults for the given experiments show improvement on some graphs. However, the authors proposed several concepts: a global pooling method using dynamic routing, an attention mechanism, a novel reconstruction loss, interpreting deep node embeddings as spatial positions. It is not clear to what extent the individual aspects of the method contribute to the gains. The qualitative capsule embedding analysis is interesting. However, this part needs a comparison to standard global graph embeddings to see if there is a significant difference.\\n\\nIn my opinion, the paper needs:\\n1) a clear experimental evaluation showing that capsules and the dynamic routing lead to improved results (i.e. by providing an ablation study to show which gains result from the attention-based global pooling mechanism, the reconstruction loss, the dynamic routing and from the coordinate addition), or\\n2) a more precise motivation for the use of dynamic routing to capture correlation between pose vectors in graphs in general (i.e. formal arguments why the method is stronger in capturing statistics or for what types of graphs it provides more discriminative power).\\n\\nOverall, the paper does not convince me that capsules and dynamic routing provide advantages if used like the authors propose. Therefore, I tend to voting for rejecting the paper as long as points 1) and 2) are not addressed properly.\", \"minor_remarks\": \"- There are quite a lot of grammatical errors (especially missing articles).\\n\\n--------------------------\", \"update\": \"The authors addressed some of the weak points mentioned above adequately. The experimental evaluation was significantly improved and the results are a nice contribution. 
However, the theoretical contribution and the poor motivation of capsules in the graph context remain weak points. I have updated my rating accordingly.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A long paper with incomplete experiments\", \"review\": \"The paper fuses Capsule Networks with Graph Neural Networks. The idea seems technically correct and is well-written. With 13 pages the paper seems really long. Moreover, the experimental part seems to be too short. So, the theoretical and experimental part is not well-balanced.\\n\\nMinor concerns/ notes to the authors:\\n1.\\tPage 1: The abbreviation GNN is used before it is defined.\\n2.\\tPage 2: I guess there is a mistake in your indices. Capital N == n or?\\n3.\\tPage 4: What is \\\\mathbf{I}? I guess you mean the identity matrix.\\n4.\\tPage 4: Could you define/describe C_all?\\n5.\\tPage 5: Can you describe how you perform the coordinate addition or add a reference?\\n6.\\tPage 6: The idea to use reconstruction as regularization method is not new. May you can add a respective reference?\\n7.\\tPage 8: The abbreviations in your result tables are confusing. They are not aligned with the text. For example, what is Caps-CNN for a model?\\n\\nMy major concern is about your experimental evaluation. Under a first look the result tables looking great. But that\\u2019s due to fact, that you marked the two best values in bold type. To be more precise, the method WL is in the most cases better than your proposed method. This makes me wondering if there is a real improvement by your method. It would be easier to decide if you would present the training/inference times and the number of parameters. By having that, I could relate your results regarding an accuracy-complexity tradeoff. Moreover, your t-SNE and attention visualizations are not convincing. As you may know, the output of a t-SNE strongly dependents on the chosen hyper-parameters like the perplexity, etc. You not mentioned the setting of these values. Additionally, it is hard to decide if your embeddings are good or not because you are not presenting a baseline or referencing a respective work. You are complaining that this is due to the space restrictions. But you have unlimited capacity in the appendix. So please provide some clarifying plots. Finally, I\\u2019m also not convinced that your attention mechanism works as expected. It\\u2019s again due to missing baseline results and/or a reference. If it\\u2019s not possible to add one of them, you could perform an easy experiment where you freeze your fully-connected layers of the attention module to fixed values (maybe such that it performs just an averaging) and repeat your experiments. In case your attention module works as expected you should observe a real change in terms of accuracy and in your visualizations too.\\nYou could also think about to publish your code or present further results/plots in a separate blog.\", \"update\": \"According to the revised version which addresses a lot of my concerns, I vote for marginally above acceptance threshold.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
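Much of the CapsGNN discussion above, the AR1 exchange in particular, turns on what dynamic routing computes when attention-weighted node capsules are pooled into graph capsules. For reference, here is a generic routing-by-agreement sketch in the style of Sabour et al. (2017), which the paper follows; it is not the authors' released CapsGNN code, and the tensor shapes and names are assumptions:

```python
import torch

def squash(s, dim=-1, eps=1e-8):
    # Nonlinearity that keeps a capsule's orientation and maps its norm into [0, 1).
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(votes, num_iters=3):
    # votes: (n_in, n_out, d) predictions from input (node-level) capsules
    # to each of the n_out output (graph-level) capsules.
    n_in, n_out, _ = votes.shape
    logits = torch.zeros(n_in, n_out)                        # routing logits b_ij
    for _ in range(num_iters):
        coupling = torch.softmax(logits, dim=1)              # c_ij per input capsule
        s = (coupling.unsqueeze(-1) * votes).sum(dim=0)      # weighted sum of votes
        v = squash(s)                                        # (n_out, d) output capsules
        logits = logits + (votes * v.unsqueeze(0)).sum(-1)   # agreement update
    return v
```

In this framing, the ablation AR1 asks for amounts to replacing the routing loop with a plain mean or sum over the votes and checking how much accuracy is lost.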
Syl8Sn0cK7 | Learning a Meta-Solver for Syntax-Guided Program Synthesis | [
"Xujie Si",
"Yuan Yang",
"Hanjun Dai",
"Mayur Naik",
"Le Song"
] | We study a general formulation of program synthesis called syntax-guided synthesis (SyGuS) that concerns synthesizing a program that follows a given grammar and satisfies a given logical specification. Both the logical specification and the grammar have complex structures and can vary from task to task, posing significant challenges for learning across different tasks. Furthermore, training data is often unavailable for domain-specific synthesis tasks. To address these challenges, we propose a meta-learning framework that learns a transferable policy from only weak supervision. Our framework consists of three components: 1) an encoder, which embeds both the logical specification and the grammar at the same time using a graph neural network; 2) a grammar-adaptive policy network which enables learning a transferable policy; and 3) a reinforcement learning algorithm that jointly trains the embedding and the adaptive policy. We evaluate the framework on 214 cryptographic circuit synthesis tasks. It solves 141 of them in the out-of-box solver setting, significantly outperforming a similar search-based approach without learning, which solves only 31. The result is comparable to two state-of-the-art classical synthesis engines, which solve 129 and 153 respectively. In the meta-solver setting, the framework can efficiently adapt to unseen tasks and achieves speedups ranging from 2x up to 100x. | [
"Syntax-guided Synthesis",
"Context Free Grammar",
"Logical Specification",
"Representation Learning",
"Meta Learning",
"Reinforcement Learning"
] | https://openreview.net/pdf?id=Syl8Sn0cK7 | https://openreview.net/forum?id=Syl8Sn0cK7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJlUkwHxeV",
"SJgbtwC0RX",
"SJgqg7ARRm",
"Hye08GRCCX",
"SkeU1J2CCQ",
"rkxb8RoCAX",
"H1ge36JK0m",
"H1x37nytCm",
"rJl81oyFC7",
"BJlJVz9dA7",
"S1xZNmeeCX",
"Hklw2Tke07",
"BygXD6yl0X",
"rkgf_2Je0Q",
"SJltos1eAQ",
"rJlT6Ucq27",
"HkgUk7O5hm",
"HyeXDiav3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544734429549,
1543591801289,
1543590642091,
1543590485565,
1543581405831,
1543581257322,
1543204263693,
1543203876032,
1543203549794,
1543180838900,
1542615848588,
1542614446823,
1542614362743,
1542614122046,
1542613921190,
1541215941142,
1541206750511,
1541032795066
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1543/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1543/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1543/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1543/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1543/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1543/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1543/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1543/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1543/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1543/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1543/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents an RL agent which progressively synthesis programs according to syntactic constraints, and can learn to solve problems with different DSLs, demonstrating some degree of transfer across program synthesis problems. Reviewers agreed that this was an exciting and important development in program synthesis and meta-learning (if that word still has any meaning to it), and were impressed with both the clarity of the paper and its evaluation. There were some concerns about missing baselines and benchmarks, some of which were resolved during the discussion period, although it would still be good to compare to out-of-the-box MCTS.\\n\\nOverall, everyone agrees this is a strong paper and that it belongs in the conference, so I have no hesitation in recommending it.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Exciting work\"}",
"{\"title\": \"-\", \"comment\": \"If you believe a score of 7 reflects your opinion of and support for the paper after discussion, then all is good.\"}",
"{\"title\": \"Removing dissatisfaction with the treatment of EUSolver in the original paper\", \"comment\": \"I sympathize with your point of view, and consequently, I'm removing my previous dissatisfaction. Given the reiteration you did, you've got a good paper here.\"}",
"{\"title\": \"Reassessment\", \"comment\": \"My major concern was the treatment of EUSolver in the original paper, and that is why I lowered the score after the first iteration. Admittedly, the authors did address that issue, so I'm happy to go back to 7.\"}",
"{\"title\": \"Reassessment\", \"comment\": \"Reviewer 2, thank you for participating in discussion with the authors. They appear to have been able to clarify some points that were of concern to you. Are you satisfied with this clarification? You seem to think the paper is good, but give it a borderline score. While we invite you to reconsider your assessment in light of the response, if you wish to stick by it, can you provide a short explanation as to why?\"}",
"{\"title\": \"Please consider author response\", \"comment\": \"Reviewer 3, the authors of this paper have submitted a fairly detailed response to your own detailed review (thanks for that!). It is important that there be some consideration of their reply, and if needed, discussion. Please take the time to review and respond to their rebuttal, and either reconsider your assessment or explain why you stand by it in its current form.\"}",
"{\"title\": \"Thanks for helping us to correct the mistake\", \"comment\": \"We apologize that the EUSolver was not evaluated in the first draft, which is in part due to the fact that we were not able to get the executable of EUSolver (and then collect results using our evaluation setup before the deadline). On the other hand, we thought CVC4 is the fairest representative as the state-of-the-art when designing experiments, as it is not specialized for a particular set of benchmarks. But we agree with the reviewer that we should have made this very clear in the original paper. Now we are glad to see that we got the chance to finish the evaluation of EUSolver using our experiment setting and include its result in the revision.\\n\\nThe conclusion is now updated, which was forgotten in revision 1 (sorry for that). \\n\\nEUSolver was actually cited in the related work of our original paper (see the reference: Rajeev Alur, Arjun Radhakrishna, and Abhishek Udupa, TACAS 2017). Now we also cite it in the experiment section and add the reference to SyGuS 2017 Competition evaluation report (arXiv:1711.11438). \\n\\n We also add the description of our evaluation setup difference from SyGuS 2017 competition in the footnote of page 7, as it is fairly short.\"}",
"{\"title\": \"Thanks for sharing the recent work\", \"comment\": \"Thanks for the comments. Though we believe these works are not closely related to our work, they are interesting directions to explore in the general program synthesis community, which will help readers have a better view of this area. So we are happy to include these works into our citation.\\n\\nWe would like to clarify a bit for our work. We primarily study the setup where direct input-output supervision is unavailable, i.e. one can only check if the program is correct after generating the complete program and evaluate it with the oracle. This is different from the 3 papers mentioned above, where they all assume some forms of input-output pairs are given for supervision. This difference leads significant changes in designing the model: first, we adopt the RL setting because in our setting the supervision is sparse and non-differentiable. Second, program specification is provided as a CFG in our setting, thus we can use the graph embedding to encode the constraint, which further enables us to transfer the knowledge across different programs.\"}",
"{\"title\": \"Paper revision 2\", \"comment\": [\"We updated our paper with the following changes:\", \"We updated the conclusion by making it consistent with the abstract updated in the previous revision.\", \"We included a few recent program synthesis work from ICML 2018 suggested by the anonymous reviewer into our references.\"]}",
"{\"title\": \"Dissatisfied with the treatment of EUSolver in the original paper\", \"comment\": \"> Difference from SyGus competition\\n\\nCould you be so kind to add the description of the difference to the paper? Appendix will do.\\n\\n> ESymbolic being a reasonable baseline (?)\\n\\nThe question here was whether ESymbolic is considered a baseline or a SOTA engine, as the abstract mentions two SOTA engines, but I was not able to find ESymbolic as a SOTA model described anywhere else. Especially since its performance was way too low to be considered a SOTA engine. I now see the answers is no, and I see you corrected the wording in the text now.\\n\\n> EUSolver\\n\\nAlbeit EUSolver performs better and significantly faster, I would not say its performance negatively impacts your contribution here. Especially as the argument of overspecialization vs generality would have been an easy one to digest. Would you please update your conclusion to reflect the relationship between the SOTAs and your model now, akin to how you updated the abstract?\\n\\nHowever, what I do not understand is why you did not include those results in the paper in the first place? Not just that, you did not even mention/cite the EUSolver in the original paper, as the obvious SOTA model (plus the the model is not correctly cited even right now). From the original paper, an uniformed reader would have easily been convinced that your model achieved SOTA on that subset of tasks, and that would have been the wrong conclusion to make. To reflect my dissatisfaction with this, I will lower my score to 6 for time being.\"}",
"{\"title\": \"Paper revision 1\", \"comment\": [\"We updated our paper with the following changes:\", \"We fixed typos in Figure-1, improved Figure-2 and updated sec tion3.2 to clarify confusions about the graph representation.\", \"We added an evaluation of EUSolver at the reviewers' suggestion in section 5.\", \"We included a brief discussion about the recent DREAMCODER work in section 6.\", \"We also fixed a few other minor typos.\"]}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We appreciate your insightful review comments. We address the concerns and questions as follows:\\n\\n>> Have you considered any tree search baseline, for example, Monte-Carlo Tree Search? \\n\\nIn our evaluation, the ESymbolic baseline is a tree search method, except that it expands the nonterminals in a deterministic depth-first fashion and does pruning using constraint solving (e.g. 2QBF) along the way. For the proposed method, however, while the generated program that our model operates on indeed can be represented by a tree, the RL algorithm we use is essentially model-free, i.e. it is agnostic to the transition dynamics. We agree with the reviewer that this approach can be further improved with a model-based approach such as MCTS, since we can track the dynamics easily, and presumably yields better performance than the current purely model-free approach. On the other hand, as one of the main motivations of our work is to study how to cast the classical problem into a learning task, we have been focused on the comparison between learning and non-learning methods, instead of model-free and model-based methods. However, it would be definitely interesting to explore more on the model-based methods for program synthesis, and we leave this to our future work.\\n\\n>> How about generalization without fine-tuning? \\n\\nIndeed, it would be great to generalize to unseen programs even without fine-tuning, but in the meta-learning setting, it is typically very hard as it requires a lot of samples not only in the data space but also in the task space, for which we only have around 200 tasks. We did test the performance of the learner without fine-tuning, and, with no surprise, it turns out to perform worse than the out-of-the-box version. \\n\\nOn the other hand, this train-and-finetune fashion is becoming widely accepted by a number of recent works on meta-reinforcement-learning, for instance, \\u201cRecasting Gradient-Based Meta-Learning as Hierarchical Bayes\\u201d.\\n\\n>> Programs seems too low level and lacks of control flow/internal state, which are common features in general programming language like C, Java, Python, etc.\\n\\nThis is a great suggestion for our future work. We believe learning programs from logical specifications in a general programming language is an important direction in artificial intelligence, and our work is a step towards this direction.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We appreciate your effort in providing detailed and helpful reviews. We address the concerns and questions as follows:\\n\\n>> Could you clarify the SSA form for graph representation?\", \"the_graph_representation_is_roughly_inspired_by_the_so_called_static_single_assignment\": \"though the same variable is assigned and used at many places, they can be distinguished by attaching a subscript at each place it is assigned. We view the same logical operator used in different grammar rules as slightly different ones, but they do have the same semantic meaning. So we create separate nodes for the same logical operator in different grammar rules, but also introduce a corresponding global node, which is intended to summarize its effects in different rules. Given that SSA is simply an analogy rather than a formal notion for grammar and specification, we would like to give more intuitive names (e.g. global node and global link) for the current SSA node and SSA link in the graph representation.\\n\\n>> Typos in figure-1.\\nThanks for pointing this out, and we apologize for the typos in figure-1. The rule d1 -> X OR Y | d2 OR d2 is meant to be d1 -> X | Y | d2 OR d2. In the case where two OR derivations are indeed given, there would be two d_OR nodes. And, d_T is used to indicate that X and Y are terminals. \\n\\n>> How the policy learns a conditional distribution over the variable number of actions. Is there some form of padding of the matrix and then masking being used?\\nWhen choosing the action, we perform dot product between the state vector and each row of the H_{\\\\alpha_t}, which yields a n_{\\\\alpha_t}-dimensional vector, where n_{\\\\alpha_t} is the number of possible expansions. Then we take the softmax over this vector, which gives the multinomial over actions. This is similar to an attention mechanism. Therefore, no additional parameter or padding is needed to handle the variable number of actions.\\n\\n>> How is the interpolation being performed? Also, how many examples were typically used in the experiments? It might be interesting to explore whether different number of examples lead to different results. How does the learning perform in the absence of these examples with the simple binary 0/1 reward?\\nInterpolation is more straightforward in the domain where numerical values are involved. For the domain in our evaluation, which contains only Boolean values, by interpolation we mean randomly flipping the truth value of some variable of an example to get a new example. We view interpolation as an approximation to the exhaustive enumeration; reward obtained with more interpolated samples will certainly be more reliable than that obtained with less samples. One extreme case is to keep a single sample at a time, which is essentially the simple binary 0/1 reward. We ran the experiment as the reviewer suggested, and out-of-box solver with 0/1 reward can solve 122 tasks. In terms of the number of examples, typically, 200 (or less) examples are used for each task.\\n\\n>> Can you please run EUSolver using your setup? \\nAs suggested by the reviewer, we have run EUSolver with the same setup used in our evaluation. It solves 153 tasks (1 more task is solved in contrast with the SyGus 2017 report). These solved tasks are strictly a superset of tasks solved by CVC4 and ESymbolic. But EUSolver fails to solve 4 tasks solved by our framework. 
In terms of absolute number of solved tasks, our framework is not yet as good as EUSolver, but it provides a new and complementary way to SyGus tasks. We have incorporated this discussion in our revision.\\n\\nIn terms of comparison with the state-of-the-art, we favored CVC4 solver rather than EUSolver, because CVC4 is a general SMT solver, while EUSolver is designed as a collection of specialized heuristics (e.g. indistinguishability and unification) for each benchmark category of SyGus competition, and (to our best knowledge) its design and implementation are guided and heavily tuned according to SyGus benchmarks. Our framework is also a general solver without requirement for specialized heuristics for each domain. The speciality of EUSolver motivates us to develop a more general solver as baseline, namely ESymbolic, by replacing domain-specific heuristics used in EUSolver with a more general heuristic (i.e. partial program pruning with QBF). \\n\\n>> How about other categories in SyGus competition? \\nThe other categories are not included in our evaluation due to two reasons. First, they have a very few number of tasks, most of which is around 30 or even less. Second, most tasks only have a few input/output example pairs, rather than a logical formal specification that is necessary for our approach to draw counterexamples.\"}",
"{\"title\": \"Response to Reviewer 2 (continue)\", \"comment\": \">> In the extreme case where all inputs can be enumerated - how often does this happen in the tasks you solve?\\nWe randomly sample 100 inputs upfront for each task, which enumerates all inputs for 20 tasks with 6 (or less) variables, and a large fraction of inputs for 57 tasks with 7 variables. For the remaining tasks, we collect a new input (i.e. counter-example) and a few interpolated nearby inputs only when all current inputs have passed, which does not happen very often, and thus we do not end up enumerating all inputs for tasks with 8 or more variables.\\n\\n>> What is the meaning of \\\\tau^(t-1) in figure-2? Is the tree on the right a generated subtree?\\n\\\\tau^(t-1) is the partially generated program (\\\\tau^(0) is the start symbol), which may contain non-terminals. The tree on the right shows the best rule that is going to be used to expand a particular non-terminal according to the current policy. \\n\\n>> Can you provide some intuition and details on state-tracking and state value estimator?\\nWe use LSTM to track states throughout each episode starting from s0. S0 here is an embedding vector obtained from the graph embedding module that encodes the entire original program. For each RL step, we perform the following: (1) get the current state from LSTM; (2) use the current state to generate action and modify program tree; (3) use the embedding of the action to update LSTM; repeat until episode ends. When training using A2C, the error will back-prop end-to-end through both LSTM and graph embedder. The intuition to use LSTM to track the state is that we want the policy to be aware of its current context, i.e. how much progress on the tree has been made so far and this is reflected by the action taken so far. The value estimator is standard MLP with 128 nonlinear hidden units and linear outputs that takes the current state and outputs the estimated state value, which is used in A2C training.\\n\\n\\n>> the probability of each action (..) is defined as \\u2026.H_\\\\alpha^(i) - what does the i stand for? Was that supposed to be the t or \\\\alpha_t was supposed to be \\\\alpha_i?\\nt and i are two different notions. \\\\alpha_t here stands for the non-terminal node to be expanded at timestep t. For non-terminal node \\\\alpha_t, there are n_{\\\\alpha_t} possible ways to expand. \\nFor example, consider expand non-terminal s (s -> d1 OR d1 | d1 AND d1), then \\\\alpha_t refers to s, and n_{\\\\alpha_t} is 2. We define each of the expansions as the action and is associated with a embedding which is H_{\\\\alpha_t}^{(i)}, so i here stands for the ith action among the n_{\\\\alpha_t} possible ones.\\n\\n>> minor stuff \\nWe apologize for certain unclear presentations and typos, which we have fixed in the revision. \\nFor figure-1, \\u201cd_1 -> X OR Y\\u201d is meant to be \\u201cd_1 -> X | Y\\u201d. And yes, d1_OR should be connected to the global OR node. Each concrete sub-figure in \\u201cone step\\u201d shows a particular node sending/collecting messages to its neighbour nodes. Also, we will include DREAMCODER in our related work.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We appreciate your effort in providing detailed and helpful reviews. We address the concerns and questions as follows:\\n\\n>> Cryptographic circuit synthesis tasks should consist of 214 tasks.\\nWe ignored 4 tasks that contain integer arithmetic operations (e.g. +), because circuit should only have logical operators. To avoid confusion, we have now updated it to 214.\\n\\n>> How about other categories in SyGus competition? \\nThe other categories are not included in our evaluation due to two reasons. First, they have a very few number of tasks, most of which is around 30 or even fewer. Second, most tasks only have a few input/output example pairs, rather than a logical formal specification that is necessary for our approach to draw counterexamples.\\n\\n>> What is the setup difference from SyGus competition?\\nThe actual hardware and timeout limit are different. For each task, SyGus competition gives each solver 4-core 2.4GHz Intel processors with 128 GB memory and wallclock time limit of 1 hour. Our evaluation uses AMD Opteron 6220 processor, and assigns each solver a single core with 32 GB memory. We run each solver for 6 hours on each task. While our framework could take advantage of massively parallel hardware like GPUs, however, our evaluation does not use such hardware.\\n\\n>> Is ESymbolic a baseline? \\n \\nESymbolic is a reasonable baseline because both ESymbolic and our framework use a top-down search based approach. ESymbolic expands a partial program by enumerating grammar rules in a fixed order, relies on the validity check of partially generated program by leveraging 2QBF (Quantified Boolean Formula), and backtracks immediately when the check fails. However, our framework prioritizes grammar rules in the partial tree expansion based on the learned policy. \\n\\n>> Can you elaborate your choice for the state-of-the-art solver? EUSolver seems to the state-of-the-art. \\nIn terms of comparison with the state-of-the-art, we chose CVC4 solver over EUSolver, because CVC4 is a general SMT solver, whereas EUSolver is designed as a collection of specialized heuristics (e.g. indistinguishability and unification) for each benchmark category of the SyGuS competition, and (to our best knowledge) its design and implementation are guided and heavily tuned according to the SyGuS benchmarks. Our framework is also a general solver without requiring specialized heuristics for each domain. The speciality of EUSolver motivated us to develop a more general solver as baseline, namely ESymbolic, by replacing domain-specific heuristics used in EUSolver with a more general heuristic (i.e. partial program pruning with QBF). \\n\\nAt the reviewer\\u2019s suggestion, we ran EUSolver with the same setup used in our evaluation. It solves 153 tasks (1 more task is solved in contrast with the SyGus 2017 report). These solved tasks are strictly a superset of those solved by CVC4 and ESymbolic. But EUSolver fails to solve 4 tasks solved by our framework. In terms of the absolute number of solved tasks, our framework is not yet as good as EUSolver, but it provides a new and complementary way to solve SyGuS tasks. We have incorporated this discussion in our revision.\\n\\n>> Can you describe how to calculate global graph embedding?\\nThanks for pointing this out. We simply sum over all the node embeddings to get the global graph embedding. We have clarified this in the revision.\\n\\n>> W for different edge types and different propagation steps t? 
Why is there a need for such a large number of parameters? What is the number of propagation steps?\\nThis is a general form of the Graph Neural Network. Since the #parameters is not the bottleneck in our task, we choose the most expressive parameterization. One could certainly choose to tie the weights in different layers. We use t=20 in all the experiments.\"}",
"{\"title\": \"Good paper\", \"review\": \"This paper presents a (meta-)solver for particular program synthesis problems, where the model has access to a (logic) specification of the program to be synthesized, and a grammar that can change from one task instance to another. The presented model is an RL-based model that jointly trains 1) the joint graph-based embedding of the specification and the grammar, and 2) a policy able to operate on different (from instance to instance) grammars. Interestingly, not only can the model operate as a stand-alone solver, but it can be run as a meta-solver - trained on a subset of tasks, and applied (with tuning) on a new task. Experiments show that the model outperforms two baselines (one being a (near-to-)SOTA model) in the stand-alone setting and that the model successfully transfers knowledge (considers fewer candidates) in the meta-solving mode.\\n\\nFirst, I enjoyed reading the paper. I think the problem is interesting, particularly due to the model being able to train and operate on various grammars (from task to task), and not on a single, pre-specified grammar. The additional bonus is that the problem the paper solves does not require program as supervision, but an external verifier.\\nThe evaluation shows that this approach not only makes sense but (significantly) outperforms, under same conditions, specialized program synthesis programs. However, there\\u2019s one issue here, and that\\u2019s what the comparison hasn\\u2019t been done to SOTA model but to a less performant model (see issues). \\nThe particular approach of jointly training a specification+grammar graph embedding and learning a policy that acts on different grammars seems original and significant enough for publication.\\nThe paper is well (with a few kinks) written, and mostly clear. There are still some issues in the paper.\", \"issues\": [\"The dataset used is 210 cryptographic circuit synthesis tasks from SyGuS 2017. Why only this particular subset of all the tasks, and not the other tasks/categories (there is 569 of them in total, no)?\", \"Alur et al mention 214 examples in the said tasks, yet the paper says 210. Why?\", \"The SyGuS results paper https://arxiv.org/abs/1711.11438 mentions EUSolver as the SOTA model, solving 152 tasks (out of 214). Why didn\\u2019t you compare your model to EUSolver?\", \"The same paper reports CVC4 solving 117 tasks (out of 214), as opposed to 129 (out of 210) reported in your paper. Could you comment on the (possible) differences in the experimentation protocol?\", \"you mention global graph embedding, but you never describe how you calculate it\", \"abstract mentions outperforming two SOTA engines, but later you say ESymbolic is a baseline (which it seems by description)\"], \"questions\": [\"W for different edge types and different propagation steps t? Why is there a need for such a large number of parameters? What is the number of propagation steps?\", \"In the extreme case where all inputs can be enumerated - how often does this happen in the tasks you solve?\", \"figure 2 is not clear. There is too much information on one side (grammar) and too little on the other (what is the meaning of \\\\tau^(t-1)?)? Is the tree on the right a generated subtree?\", \"details of the state s are unclear - it is tracked by an LSTM? Is there a concrete training signal for s, or is it a part of the architecture and everything is end-to-end trainable from the final reward? 
The same for s0=MLP(h(G)) - is that also trained in the same way?\", \"can you provide some intuition on why you chose that particular architecture (state-tracking LSTM, s0 as such, instead of something simpler?)\", \"can you provide details on the state value estimator MLP architecture, as well as the s0 MLP, and the state-tracking LSTM?\", \"the probability of each action (..) is defined as \\u2026.H_\\\\alpha^(i) - what does the i stand for? Was that supposed to be the t or \\\\alpha_t was supposed to be \\\\alpha_i?\"], \"minor_stuff\": [\"Figure 5a is referred to as Table 5a in the text\", \"out-of-out-solver\", \"global graph embedding, figure 1 - G(phi, G), figure 2 - h(G)\", \"a figure of the policy architecture would be beneficial\", \"Figure 1\", \"d_1 ->X OR Y in the graph is d1T, why isn\\u2019t it d1_OR, and connected to the OR node?\", \"why isn\\u2019t d1_OR connected to OR node?\", \"AST edge - but grammar is a DAG - (well, multigraph)\", \"what are the reversed links? e.g. if A->B, reversed link is B->A ?\", \"what is the meaning of the concrete figures in \\u2018one step\\u2019?\", \"consider relating to \\u2018DREAMCODER: Bootstrapping Domain-Specific Languages for Neurally-Guided Bayesian Program Learning\\u2019 (https://uclmr.github.io/nampi/extended_abstracts/ellis.pdf), as it\\u2019s another model that steps away from the fixed-DSL story\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting technique for a challenging synthesis domain, but some details are not clear\", \"review\": \"This paper presents a reinforcement learning based approach to learn a search strategy to search for programs in the generic syntax-guided synthesis (SyGuS) formulation. Unlike previous neural program synthesis approaches, where the DSL grammar is fixed or the specification is in the form of input-output examples only, the SyGuS formulation considers different grammars for different synthesis problems and the specification format is also more general. The main idea of the approach is to first learn a joint representation of the specification and grammar using a graph neural network model, and then train a policy using reinforcement learning to guide the search with a grammar adaptive policy network that is conditioned on the joint representation. Since the specifications considered here are richer logical expressions, it uses a SAT solver for checking the validity of the proposed solution and to also obtain counterexamples for future rewards. The technique is evaluated on 210 SyGuS benchmarks coming from the cryptographic circuit synthesis domain, and shows significant improvements in terms of number of instances solved compared to CVC4 and ESymbolic baseline search techniques from the formal methods community. Moreover, the learnt policy is also showed to generalize beyond the benchmarks on which it is trained and the meta-solver performs reasonably well compared to the per-task out-of-box solver.\\n\\nOverall, this paper tackles a more challenging synthesis problem than the ones typically considered in recent neural synthesis approaches. The previous synthesis approaches have mostly focused on learning programs in a fixed grammar (DSL) and with specifications that are typically based on either input-output examples or natural language descriptions. In the SyGuS formulation, each task has a different grammar and moreover, the specifications are much richer as they can be arbitrary logical expressions on program variables. The overall approach of using graph neural networks to learn a joint representation of grammars with the corresponding logical specifications, and then using reinforcement learning to learn a search policy over the grammar is quite interesting and novel. The empirical results on the cryptographic benchmarks compare favorably to state of the art CVC4 synthesis solver.\\n\\nHowever, there were some details in the model description and evaluation that were not very clear in the current presentation.\\n\\nFirst, the paper mentions that it uses the idea of Static Single Assignment (SSA) form for the graph representation. What is the SSA form of a grammar and of a specification? \\n\\nIt was also not very clear how the graphs are constructed from the grammar. For example, for the rule d1 -> X OR Y | d2 OR d2 in Figure 1, are there two d_OR nodes or a single node d_OR shared by both the rules? Similarly, what is the d_T node in the figure? It would be good to have a formal description of the nodes and edges in the graph constructed from the spec and grammar.\\n\\nSince the embedding matrix H_d can be of variable size (different sizes of expansion rules), it wasn\\u2019t clear how the policy learns a conditional distribution over the variable number of actions. Is there some form of padding of the matrix and then masking being used?\\n\\nFor the reward design, the choice of using additional examples in the set B_\\\\phi was quite interesting. 
But there was no discussion about how the interpolation technique works to generate more examples around a counterexample. Can you provide some more details on how the interpolation is being performed? \\n\\nAlso, how many examples were typically used in the experiments? It might be interesting to explore whether different number of examples lead to different results. How does the learning perform in the absence of these examples with the simple binary 0/1 reward?\\n\\nFrom last year\\u2019s SyGuS competition, it seems that the EUSolver solves 152 problems from the set of 214 benchmarks (Table 4 in http://sygus.seas.upenn.edu/files/SyGuSComp2017.pdf). For the evaluation, is ESymbolic baseline solver different that the EUSolver? Would it be possible to evaluate the EUSolver on the same hardware and timeout to see how well it performs on the 210 benchmarks? \\n\\nThe current transfer results are only limited to the cryptographic benchmarks. Since SyGuS also has benchmarks in many other domains, would it be interesting to evaluate the policy transfer to some other non-cryptographic benchmark domain?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Generating (syntactic and functional) specification-satisfying programs via Reinforcement Learning\", \"review\": \"The authors design a program synthesizer that tries to satisfy per-instance specific syntactic and functional constraints,\\nbased on sampling trajectories from an RL agent that at each time-step expands a partial-program.\\n\\nThe agent is trained with policy gradients with a reward shaped as the ratio of input/output examples that the synthesized program satisfies.\\n\\nWith the 'out-of-box' evaluation, the authors show that their agent can explore more efficiently the harder problems than their non-learning alternatives even from scratch.\\n(My intuition is that the agent learns to generate the most promising programs)\\nIt would be good to have a Monte Carlo Tree Search baseline on the'out-of-box' evaluation, to detect exploration exploitation trade-offs.\\n\\nThe authors show with the 'meta-solver' approach that the agent can generalize to and also speed up unseen (albeit easy-ish in the authors words) instances.\", \"clarity\": \"Paper is clear and nicely written.\", \"significance\": \"Imagine a single program synthesizer that could generate C++/Java/Python/DSLs programs and learn from all its successes and failures! This is a step towards that.\", \"pros\": \"+ Generating spec-following programs for different grammars.\\n+ partial tree expansion takes care of syntactic constraints.\\nNeutral\\n\\u00b7 The grammar and specification diversity may be too low to feel impressive.\\n\\u00b7 It would have been nicer by computing likelihood for unseen instances with unique and known solutions (that is, without finetuning).\", \"cons\": [\"No Tree Search baseline.\", \"No results on programs with control flow/internal state.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
|
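The record above repeatedly describes the same core loop: an agent grows a program by expanding one grammar nonterminal at a time, and a finished candidate is scored by the fraction of specification examples it satisfies. The Python sketch below illustrates that loop on a toy grammar modeled after the d1/d2 rules quoted in the first review; the grammar contents, the uniform random choice standing in for the learned grammar-adaptive policy, and the reward helper are illustrative assumptions, not the paper's implementation.

```python
import random

# Toy grammar in the spirit of the d1/d2 example from the review:
# d1 -> (X or Y) | (d2 or d2);  d2 -> (X and Y) | (not X)
GRAMMAR = {
    "d1": [["X", "or", "Y"], ["d2", "or", "d2"]],
    "d2": [["X", "and", "Y"], ["not", "X"]],
}

def sample_program(symbol="d1", depth=0, max_depth=5):
    """Expand one nonterminal at a time (the partial-tree expansion step);
    a trained grammar-adaptive policy would replace random.choice."""
    if depth > max_depth:
        return ["X"]  # fallback so sampled programs stay finite
    tokens = []
    for tok in random.choice(GRAMMAR[symbol]):
        if tok in GRAMMAR:
            tokens += ["("] + sample_program(tok, depth + 1, max_depth) + [")"]
        else:
            tokens.append(tok)
    return tokens

def reward(prog_tokens, examples):
    """Shaped reward as described in the second review: the fraction of
    input/output examples the synthesized program satisfies."""
    expr = " ".join(prog_tokens)
    hits = sum(eval(expr, {}, dict(env)) == out for env, out in examples)
    return hits / len(examples)

spec = [({"X": True, "Y": False}, True), ({"X": False, "Y": False}, False)]
prog = sample_program()
print(" ".join(prog), "->", reward(prog, spec))
```

In the paper's setting, the random choice would be a distribution conditioned on the joint grammar/specification embedding, and the reward would feed a policy-gradient update; this sketch only shows the sampling and scoring skeleton those components plug into.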
H1eSS3CcKX | Stochastic Optimization of Sorting Networks via Continuous Relaxations | [
"Aditya Grover",
"Eric Wang",
"Aaron Zweig",
"Stefano Ermon"
] | Sorting input objects is an important step in many machine learning pipelines. However, the sorting operator is non-differentiable with respect to its inputs, which prohibits end-to-end gradient-based optimization. In this work, we propose NeuralSort, a general-purpose continuous relaxation of the output of the sorting operator from permutation matrices to the set of unimodal row-stochastic matrices, where every row sums to one and has a distinct argmax. This relaxation permits straight-through optimization of any computational graph involving a sorting operation. Further, we use this relaxation to enable gradient-based stochastic optimization over the combinatorially large space of permutations by deriving a reparameterized gradient estimator for the Plackett-Luce family of distributions over permutations. We demonstrate the usefulness of our framework on three tasks that require learning semantic orderings of high-dimensional objects, including a fully differentiable, parameterized extension of the k-nearest neighbors algorithm. | [
"continuous relaxations",
"sorting",
"permutation",
"stochastic computation graphs",
"Plackett-Luce"
] | https://openreview.net/pdf?id=H1eSS3CcKX | https://openreview.net/forum?id=H1eSS3CcKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hyx57EymlN",
"BkgtftDpJN",
"SJgx2mx2p7",
"SJefvGxnaX",
"SyeSE1x2aX",
"r1xmEny2TQ",
"SyxXUGsc3Q",
"rkgvVs-q27",
"Byeq6TEfn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544905762452,
1544546577416,
1542353832378,
1542353497643,
1542352684689,
1542351915470,
1541218890757,
1541180206582,
1540668866372
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1541/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1541/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1541/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1541/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1541/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1541/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes a general-purpose continuous relaxation of the output of the sorting operator. This enables end-to-end training to enable more efficient stochastic optimization over the combinatorially large space of permutations.\\n\\nIn the submitted versions, two of the reviewers had difficulty in understanding the writing. After the rebuttal and the revised version, one of the reviewers is satisfied. I personally went through the paper and found that it could be tricky to read certain parts of the paper. For example, I am personally very familiar with the Placket-Luce model but the writing in Section 2.1 does not do a good job in explaining the model (particularly Eq 1 is not very easy to read, same with Eq. 3 for the key identity used in the paper). \\n\\nI encourage authors to improve writing and make it a bit more intuitive to read.\\n\\nOverall, this is a good paper and I recommend to accept it.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper, but writing can be improved.\"}",
"{\"title\": \"thanks for updates; looking great\", \"comment\": \"reviewed rebuttal; still support strong accept\"}",
"{\"title\": \"Summary of the revised paper\", \"comment\": [\"We thank the reviewers for their helpful comments! In light of these comments, we have revised the paper. Here is a summary of changes:\", \"Sections 3, 4: Motivation and background for the content in these sections have been stated more explicitly. Figure 1 has been added to supplement Section 3.\", \"The experimental setup in Section 6.1 and illustration in Figure 4 (which was Figure 3 in the previous version) have been revised to lend more clarity.\", \"Appendix E has been added to connect the experiments more concretely with the theory. This appendix includes the precise objective functions for each experiment.\", \"Few additional experiment analysis results (Figure 8, Table 4, Table 5) have been added.\"]}",
"{\"title\": \"Response to reviewer questions and feedback\", \"comment\": \"Thanks for reviewing our paper and the helpful feedback! We have addressed your questions and comments below.\\n\\nQ1. Experimental setup for Section 6.1 and Figure 3.\\nA1. We can see the source of confusion now, sorry about that! We have edited the description in Section 6.1 to clarify this point and replaced what was previously Figure 3 with a more illustrative Figure 4 and a descriptive caption. The reviewer\\u2019s understanding of our last response is correct --- we have a sequence of n large-MNIST images (where each large-MNIST image is a 4 digit number) and the goal is to sort the input sequence. In Figure 4 for example, the task is to sort the input sequence of n=5 images given as [2960, 1270, 9803, 1810, 7346] to [1270, 1810, 2960, 7346, 9803].\\n\\nQ2. Section 2.2.\\nA2. In Section 2.2, we intend to provide background on stochastic computation graphs (SCG). SCGs are a widely used tool for visualizing and contrasting different approaches to stochastic optimization, especially in the context of stochastic optimization with the backpropagation algorithm since the forward and backward passes can be visualized via the topological sorting of operators in the SCG (e.g., Figures 1, 3). Due to the lack of space, we could not include a detailed overview of stochastic computation graphs and pointed the readers to the canonical reference of Schulmann et al., 2015. The key takeaway is stated in the last paragraph of Section 2.2 --- a sort operator is non-differentiable w.r.t. its inputs and including it in SCGs necessitates the need for relaxations. For a more detailed exposition to SCGs, we have included an illustrative example in Figure 6 that grounds the terminology introduced in Section 2.2. \\n\\nQ3. Concrete goal in section 3 and 4.\\nA3. At its core, this work seeks to include general-purpose deterministic nodes corresponding to sort operators (Section 3) and stochastic nodes corresponding to random variables defined over the symmetric group of permutations (Section 4) in computational pipelines (represented via a stochastic computation graph). Following up on the reviewer\\u2019s feedback, we have significantly expanded the motivating introductions for Section 3 and 4 to clearly state the goal beforehand and how we intend to achieve it.\\n\\nQ4. Connecting theory with experiments. Where is this true permutation matrix captured as an argument of f in (6)? Is the optimisation/gradients in (7) over s or over the CNN parameters?\\nA4. Following up on the reviewer\\u2019s feedback, we have made the following edits in the revised version:\\n- Revised Figures 4, 5 (which were Figure 3, 4 in the old version) to clearly indicate the scores \\u201cs\\u201d for each experiment.\\n- Included a new Appendix E which formally states the loss functions optimized by the Sortnet approaches and explains the semantics of each terms for all three experiments.\\n\\nRegarding the specific follow-up questions with respect to Equation 7 and 8 (which were previously Equation 6 and 7 in the first version of the paper):\\n- For the experiments in Section 6.1, the function f would include an additional argument corresponding to the true permutation matrix. We did not explicitly include the ground-truth permutation as an argument to the function f in Equation 7 to maintain the generality since such objectives also arise in unsupervised settings e.g., latent variable modeling where there is no ground-truth label. 
See Appendix E.1 for the precise loss function.\\n- The gradients in Equation 8 are w.r.t. the scores s that parameterize a distribution q. In the experiments, the scores s are given as the output of a CNN and the optimization is over the CNN parameters. Evaluating gradients w.r.t. the CNN parameters is straightforward via the chain rule/backpropagation.\\n\\nPlease let us know if there is any other detail that needs further clarification!\"}",
"{\"title\": \"Response to reviewer questions and feedback\", \"comment\": \"Thanks for reviewing our paper and the helpful feedback! We have addressed your questions and comments below.\\n\\nQ1. Clarity in Sections 3, 4. Connection with experiments.\\nA1. Following up on the reviewer\\u2019s feedback, we have made the following edits in the revised version:\\n- Edited and expanded the introductory paragraphs for Section 3 and Section 4 to ensure a smooth transition.\\n- Revised Figures 4, 5 (which were previously Figure 3, 4 in the in the first version of the paper) to clearly indicate the scores \\u201cs\\u201d for each experiment.\\n- Included a new Appendix E which formally states the loss functions optimized by the Sortnet approaches for all three experiments. \\n\\nFor the specific follow-up questions in the review, we first note that Equation (7) (which was previously Equation (6) in the first version of the paper) is the general style of expressions used in the relevant literature on stochastic optimization, see e.g., Section 3 in Jang et al., 2017. These expressions are succinct, but as the reviewer points out, they need additional clarification when extended to the experiments. We hope Appendix E will help clarify these formally. For completeness, we address the two questions specifically raised by the reviewer here:\\n\\nIn all our experiments, we are dealing with sequences of n objects x = [x1, x2, \\u2026, xn] and trying to sort these objects for an end goal. In Section 6.1, the goal is to output the sorted permutation for a sequence of n largeMNIST images; in 6.2, the goal is to output the median value in the sequence; in 6.3, the goal is to sort a sequence of training points as per their distances to a query point for kNN classification. We now explain the notation in the context of largeMNIST experiments in Section 6.1/6.2 which share the same experimental setup and dataset; the kNN experiments in Section 6.3 follow similarly.\\n\\n- s=[s1, s2, \\u2026, sn] corresponds to a vector of scores, one for each largeMNIST image in the input sequence. Each score si is the output of a CNN which takes as input an image xi. The CNNs across the different largeMNIST images x1, x2, ..., xn share parameters. Note that we directly specify the vector s (and skip x as well as the CNN parameters relating x to s) in Equation (7) for brevity. In Section 4, we derived gradients of the objective w.r.t. s, which can be backpropagated via chain rule to update the CNN parameters in a straightforward manner.\\n- q is the Plackett-Luce distribution over permutations z and parameterized by scores s. \\n- f is any function (that optionally depends on additional parameters \\\\theta) that acts over a permutation matrix P_z. In the experiments in Section 6.1, the function f is the element-wise cross-entropy loss between the true permutation matrix that sorts x and P_z. Again for the purpose of generality , we do not explicitly include the ground-truth permutation as an argument to the function f in Equation (7) since such objectives also arise in unsupervised settings, e.g., latent variable modeling where there is no ground-truth label. \\n- The parameters \\\\theta for specifying f as a function of P_z are optional and task-specific. In particular, the cross-entropy loss function f for experiments in Section 6.1 does not needs any additional parameters \\\\theta. 
For the experiments in Section 6.2, we cannot compute a loss directly with respect to the permutation matrix P_z since we need to regress a scalar value for the median. Instead, we feed the predicted median image in the input sequence (which can be obtained by sorting x as per P_z) to a neural network (with parameters \\theta) to obtain a real-valued, scalar prediction. We then compute f as the MSE regression loss between the true median value and the value predicted by the parameterized neural network.\\n- Lastly, L denotes the expected value of the objective function f w.r.t. the distribution q.\\n\\nPlease refer to Figures 4, 5 for the computational pipeline and Appendix E for the precise loss functions for each experiment. Let us know if there is any other detail that needs clarification!\\n\\nQ2. Confusing use of the phrase \\\"Sorting Networks\\\" in the title of the paper.\\nA2. Thanks for pointing it out! If permitted by the conference rules, we will consider replacing \\u2018networks\\u2019 with \\u2018operators\\u2019 in the title of the final version of the paper.\\n\\nQ3. Page 2 -- Section 2 PRELIMINARIES -- It seems that sort(s) must be [1,4,2,3].\\nA3. We believe the sort(s) expression in the paper is correct. This is because the largest element (=9) is at index 1, the second largest element (=5) is at index 3, the third largest element (=2) is at index 4, and the smallest element (=1) is at index 2. Hence, sort(s)=[1,3,4,2]^T as indicated in the paper.\"}",
"{\"title\": \"Response to reviewer questions and feedback\", \"comment\": \"Thanks for reviewing our paper and the helpful feedback! We have addressed your questions below.\\n \\nQ1. How much of the improvement is attributable to the lower dimension of the parameterization? (e.g. all Sinkhorn varients have N^2 params; this has N params) Is there any reduction in gradient variance due to using fewer gumbel samples?\\nA1. Precise quantification of the gains due to lower dimension of the parameterization alone is hard since the relaxation itself is fundamentally different from the Sinkhorn variants. In an attempt to get a handle on these aspects (n^2 vs. n parameters and doubly stochastic vs. unimodal matrices), we analyzed the signal-to-noise (SNR) ratio for the Stochastic Sortnet and Gumbel-Sinkhorn approaches with the same number of Gumbel samples (=5). Here, we define SNR as the ratio of the absolute value of the expected gradient estimates and the standard deviation. For the experiments in Section 6.1, the SNR ratio averaged across all the parameters is shown in Figure 8. We observe a much higher SNR for the proposed approach, in line with the overall gains we see on the underlying task.\\n\\nQ2. More details needed on the kNN loss (uniform vs inv distance wt? which one?); and the experiment overall: what k got used in the end?\\nA2. We used a uniformly weighted kNN loss for both the Sortnet approaches, while noting that it is straightforward to extend our framework to use an inverse distance weighting. Appendix E.3 includes the formal expressions for the loss functions optimized in our framework. Furthermore, we have included new results in Table 5 which show the raw performance of Deterministic and Stochastic Sortnet for all values of k considered. \\n\\nQ3. The temperature setting is basically a bias-variance tradeoff (see Fig 5). How non-discrete are the permutation-like matrices ultimately used in the experiments? \\nA3. That\\u2019s a great suggestion! One way to quantify the non-discreteness could be based on the element-wise mean squared difference between the inferred unimodal row stochastic matrix and its projection to a permutation matrix, for the test set of instances. We have included these results for the sorting experiment in Table 4.\\n\\nPlease let us know if there are any further questions!\"}",
"{\"title\": \"An improvement to relaxed sort operators; some even-harder experiments\", \"review\": \"This work builds on a sum(top k) identity to derive a pathwise differentiable sampler of 'unimodal row stochastic' matrices. The Plackett-Luce family has a tractable density (an improvement over previous works) and is (as developed here) efficient to sample.\\n\\n[OpenReview did not save my draft, so I now attempt to recover it from memory.]\", \"questions\": [\"How much of the improvement is attributable to the lower dimension of the parameterization? (e.g. all Sinkhorn varients have N^2 params; this has N params) Is there any reduction in gradient variance due to using fewer gumbel samples?\", \"More details needed on the kNN loss (uniform vs inv distance wt? which one?); and the experiment overall: what k got used in the end?\", \"The temperature setting is basically a bias-variance tradeoff (see Fig 5). How non-discrete are the permutation-like matrices ultimately used in the experiments? While the gradients are unbiased for the relaxed sort operator, they are still biased if our final model is a true sort. Would be nice to quantify this difference, or at least mention it.\"], \"quality\": \"Good quality; approach is well-founded and more efficient than extant solutions. Fairly detailed summaries of experiments in appendices (except kNN). Neat way to reduce the parameter count from N^2 to N.\\n\\nI have not thoroughly evaluated the proofs in appendix.\", \"clarity\": \"The approach is presented well, existing techniques are compared in both prose and as baselines. Appendix provides code for maximal clarity.\", \"originality\": \"First approach I've seen that reduces parameter count for permutation matrices like this. And with tractable density. Very neat and original approach.\", \"significance\": \"More scalable than existing approaches (e.g: only need N gumbel samples instead of N^2), yields better results.\\n\\nI look forward to seeing this integrated into future work, as envisioned (e.g. beam search)\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice results\", \"review\": \"After responses: I now understand the paper, and I believe it is a good contribution.\\n\\n================================================\\n\\nAt a high level, the paper considers how to sort a number of items without explicitly necessarily learning their actual meanings or values. Permutations are discrete combinatorial objects, so the paper proposes a method to perform the optimization via a continuous relaxation. \\n\\nThis is an important problem to sort items, arising in a variety of applications, particularly when the direct sorting can be more efficient than the two step approach of computing the values and then sorting.\\n\\nI like both the theoretical parts and the experimental results. In the context of ICLR, the specific theoretical modules comprise some cute results (Theorem 4; use of past works in Lemma 2 and Proposition 5). possibly of independent interest. The connections to the (Gumbel distribution <--> Plackett Luce) results are also nicely used. This Gumbel<-->PL result is well known in the social choice community but perhaps not so much in the ML community, and it is always nice to see more connections drawn between techniques in different communities. The empirical evaluations show quite good results.\\n\\nHowever, I had a hard time parsing the paper. The paper is written in a manner that may be accessible to readers who are familiar with this (or similar) line of research, but for someone like me who is not, I found it quite hard to understand the arguments (or lack of them) made in the paper connecting various modules. Here are some examples:\\n\\n- Section 6.1 states \\\"Each sequence contains n images, and each image corresponds to an integer label. Our goal is to learn to predict the permutation that sorts these labels\\\". One interpretation of this statement suggests that each row of Fig 3a is a sequence, that each sequence contains n=4 images (e.g., 4 images corresponding to each digit in 2960), and the goal is to sort [2960] to [0269]. However, according to the response of authors to my earlier comment, the goal is to sort [2960,1270,9803] to [1270,2960,9803]. \\n\\n- I did not understand Section 2.2.\\n\\n- I would appreciate a more detailed background on the concrete goal before going into the techniques of section 3 and 4.\\n\\n- I am having a hard time in connecting the experiments in Section 6 with the theory described in earlier sections. And this is so even after my clarifying questions to the authors and their responses. For instance, the authors explained that the experiments in Section 6.1 have \\\\theta as vacuous and that the function f represents the cross-entropy loss between permutation z and the true permutation matrix. Then where is this true permutation matrix captured as an argument of f in (6)? Is the optimisation/gradients in (7) over s or over the CNN parameters?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting theoretical results, but connection to the experimental results is not clear\", \"review\": \"In many machine learning applications, sorting is an important step such as ranking. However, the sorting operator is not differentiable with respect to its inputs. The main idea of the paper is to introduce a continuous relaxation of the sorting operator in order to construct an end-to-end gradient-based optimization. This relaxation is introduced as \\\\hat{P}_{sort(s)} (see Equation 4). The paper also introduces a stochastic extension of its method\\nusing Placket-Luce distributions and Monte Carlo. Finally, the introduced deterministic and stochastic methods are evaluated experimentally in 3 different applications: 1. sorting handwritten numbers, 2. Quantile regression, and 3. End-to-end differentiable k-Nearest Neighbors.\\n\\nThe introduction of the differentiable approximation of the sorting operator is interesting and seems novel. However, the paper is not well-written and it is hard to follow the paper especially form Section 4 and on. It is not clear how the theoretical results in Section 3 and 4 are used for the experiments in Section 6. For instance:\\n** In page 4, what is \\\"s\\\" in the machine learning application?\\n** In page 4, in Equation 6, what are theta, s, L and f exactly in our machine learning applications?\", \"remark\": \"** The phrase \\\"Sorting Networks\\\" in the title of the paper is confusing. This term typically refers to a network of comparators applied to a set of N wires (See e.g. [1])\\n** Page 2 -- Section 2 PRELIMINARIES -- It seems that sort(s) must be [1,4,2,3].\\n\\n[1] Ajtai M, Koml\\u00f3s J, Szemer\\u00e9di E. An 0 (n log n) sorting network. InProceedings of the fifteenth annual ACM symposium on Theory of computing 1983 Dec 1 (pp. 1-9). ACM\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
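The exchange in the record above keeps returning to two concrete points: the unimodal row-stochastic relaxation of Equation 4 and the sort(s) = [1,3,4,2] convention debated in the last response, which corresponds to s = [9, 1, 5, 2]. The NumPy sketch below checks both at once. It follows the relaxation softmax(((n + 1 - 2i) * s - A_s @ 1) / tau) as given in the published NeuralSort paper; treat the exact constants as an assumption to verify against the paper itself.

```python
import numpy as np

def neuralsort_relaxation(s, tau=1.0):
    """Relaxed permutation matrix for sorting s in decreasing order,
    following softmax(((n + 1 - 2i) * s - A_s @ 1) / tau); the exact
    constants are an assumption to check against the published paper."""
    s = np.asarray(s, dtype=float).ravel()
    n = s.size
    A = np.abs(s[:, None] - s[None, :])        # A_s[i, j] = |s_i - s_j|
    B = A.sum(axis=1)                          # (A_s @ 1)_j, row sums
    scaling = n + 1 - 2 * np.arange(1, n + 1)  # n + 1 - 2i for i = 1..n
    logits = (scaling[:, None] * s[None, :] - B[None, :]) / tau
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # unimodal row-stochastic

s = np.array([9.0, 1.0, 5.0, 2.0])
P_hat = neuralsort_relaxation(s, tau=0.1)
print(P_hat.argmax(axis=1) + 1)  # -> [1 3 4 2], the sort(s) from the thread
```

At low temperature, the row-wise argmax of the relaxed matrix recovers the hard argsort, which is what the straight-through optimization mentioned in the abstract relies on; raising tau smooths the rows, trading bias for gradient variance, as the first review notes.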
BylBr3C9K7 | Energy-Constrained Compression for Deep Neural Networks via Weighted Sparse Projection and Layer Input Masking | [
"Haichuan Yang",
"Yuhao Zhu",
"Ji Liu"
] | Deep Neural Networks (DNNs) are increasingly deployed in highly energy-constrained environments such as autonomous drones and wearable devices, while at the same time needing to operate in real time. Therefore, reducing the energy consumption has become a major design consideration in DNN training. This paper proposes the first end-to-end DNN training framework that provides quantitative energy consumption guarantees via weighted sparse projection and input masking. The key idea is to formulate the DNN training as an optimization problem in which the energy budget imposes a previously unconsidered optimization constraint. We integrate the quantitative DNN energy estimation into the DNN training process to assist the constrained optimization. We prove that an approximate algorithm can be used to efficiently solve the optimization problem. Compared to the best prior energy-saving techniques, our framework trains DNNs that provide higher accuracies under the same or lower energy budgets. | [
"model compression",
"inference energy saving",
"deep neural network pruning"
] | https://openreview.net/pdf?id=BylBr3C9K7 | https://openreview.net/forum?id=BylBr3C9K7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryxnoNTCJE",
"SkgP-w3aRQ",
"Sylap5jTAX",
"ryxef2id07",
"H1g_J-vO6m",
"rJlG0gwdpQ",
"rygIrxvOpm",
"ByxoQev_T7",
"BJgr8CIOpX",
"H1em0nLOTm",
"r1xncHxi2X",
"ByeuQTS92Q",
"ByeTwU9O3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544635555993,
1543517951342,
1543514820534,
1543187463886,
1542119648359,
1542119625584,
1542119486180,
1542119458645,
1542118988861,
1542118602981,
1541240212290,
1541197088086,
1541084772731
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1540/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1540/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1540/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1540/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1540/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1540/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1540/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1540/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"All of the reviewers agree that this is a well-written paper with the novel perspective of minimizing energy consumption in neural networks, as opposed to maximizing sparsity, which does not always correlate with energy cost. There are a number of promised clarifications and additional results that have emerged from the discussion that should be put into the final draft. Namely, describing the overhead of converting from sparse to dense representations, adding the Imagenet sparsity results, and adding the time taken to run the projection step.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper on minimizing energy cost in neural networks.\"}",
"{\"title\": \"Response to the rebuttal\", \"comment\": \"Thank you for the detailed response. I think the paper became more convincing and I will adapt my rating.\"}",
"{\"title\": \"thanks for the clarifications\", \"comment\": \"The authors provided reasonable clarifications, so I will bump up my score.\"}",
"{\"title\": \"final comment\", \"comment\": \"I would like to thanks the authors for the response that clarifies my questions. I would suggest adding several lines describing the overhead of packing and unpacking of sparse representation in the final revision of the paper. I agree with the authors that methods from Louizos et al., NIPS'17 and Neklyudov et al., NIPS'17 are quite orthogonal to the method considered in the paper. Nevertheless, these methods are strong baselines and improving them is a good indicator of the significance of the proposed method of pruning input channels.\"}",
"{\"title\": \"Responding to your comments (part 1)\", \"comment\": \"We very much appreciate your careful review. We clarify the questions point by point below and plan to sort out some confusions in our revision to improve the clarity.\\n\\n> \\u201cMy first concern is that this paper exceeds the recommended 8 page limit for reasons that are seemingly quite unnecessary. There are no large, essential figures/tables, and nearly the first 6 pages is just introduction and background material.\\u201d\\n\\nWe think you refer to the Section 3.\\nIn Section 3, we show how the energy of a DNN inference is analytical modelled. We want to include these details in the paper because they form the final energy constraint proposed in problem (18). In the revised version, we will take your suggestion to reduce the number of pages to 8 by condensing this section and moving the details of energy estimation into the Appendix.\\n\\n\\n> \\u201cLikewise the paper consumes a considerable amount of space presenting technical results related to knapsack problems and various epsilon-accurate solutions, but this theoretical content seems somewhat irrelevant and distracting since it is not directly related to the greedy approximation strategy actually used for practical deployment. Much of this material could have been moved to the supplementary so as to adhere to the 8 page soft limit.\\u201d\\n\\nThanks for your suggestion how to reduce the length to 8 pages. Please allow us to clarify the logic of our theorems first. All three algorithms are related to how to solve the key projection step in (22). Theorem 1 shows the projection problem in (22) is NP hard to find the exact optimal solution in general, since it is equivalent to a 0/1 knapsack problem. Theorem 2 shows the optimal computational complexity to find an epsilon *approximate* solution by utilizing the structure of the projection problem. Theorem 3 shows the proposed greedy algorithm (weighted projection algorithm) can achieve a reasonable precision efficiently. We feel that these theorems are useful in that they help understand the difficulty and the complexity of solving (22). We will consider moving Theorem 2 to the supplement in the first priority to shrink the length of this paper.\\n\\n\\n> \\u201cPrior work also uses a mask for controlling the sparsity of network inputs, such as \\\"Structured Bayesian Pruning via Log-Normal Multiplicative Noise,\\\" NIPS 2017 and Louizos et al., \\\"Bayesian Compression for Deep Learning,\\\" NIPS 2017. How do you compare with them?\\u201d\\n\\nWe agree that there is prior work that uses a mask to prune the network activations (i.e., inputs). But we want to emphasize two key differences of our work. First, these two papers you mentioned use the mask (structure sparsity) to remove unnecessary channels; whereas our work uses the mask to filter unimportant elements within each channel, motivated by the observation that many areas in the input image do not really contribute to the recognition task such as the corners of the input image in the digital number recognition. 
These two mask techniques are orthogonal, and can even be combined.\\n\\nSecond, our mask model is integrated with an energy model to let us train energy-constrained DNNs; whereas these two papers purely aim at reducing the network parameters to obtain speedups.\\n\\nWe use their released code to train energy-constrained DNNs on the MNIST dataset; the results are below:\\n+--------------------------------------+----------------------+----------+-----------------------------+\\n| Method | Accuracy Drop | Energy | Width of Each Layer |\\n+--------------------------------------+----------------------+----------+-----------------------------+\\n| [Louizos et al., NIPS'17] | 2.2% | 26% | 4-6-52-42 |\\n+--------------------------------------+----------------------+----------+-----------------------------+\\n| [Neklyudov et al., NIPS'17] | 1.5% | 22% | 3-10-23-28 |\\n+--------------------------------------+----------------------+----------+-----------------------------+\\nOur method has a 0.5% accuracy drop with 17% energy cost, better than the two approaches.\\n\\n\\n> \\u201cComparison against methods newer than MP and SSL.\\u201d\\n\\nIn the experiment, we also compare against state-of-the-art pruning methods NetAdapt [Yang et al., ECCV 2018] and EAP [Yang et al., CVPR 2017] and show favorable results (please refer to Table 1 and Table 2). SSL and MP are classic pruning techniques that represent a class of methods that use sparsity as the constraint (regularization). Indeed, EAP is a refined version of SSL and MP.\"}",
"{\"title\": \"Responding to your comments (part 2)\", \"comment\": \"> \\u201cAnother comment I have regarding the experiments is that hyperparameters and the use of knowledge distillation were potentially tuned for the proposed method and then simultaneously applied to the competing algorithms for the sake of head-to-head comparison. But to me, if these enhancements are to be included at all, tuning must be done carefully and independently for each algorithm. Was this actually done? Moreover it would have been nice to see results without the confounding influence of distillation to isolate sources of improvement, but no ablation studies were presented.\\u201d\\n\\nIn our early experiments, we did not use knowledge distillation in other methods and found that the performance is significantly worse than ours. Therefore, we apply the knowledge distillation in all the methods for fair comparison. Recent work (e.g., [Mishra et al., 2018] ) also support our observation that the knowledge distillation trick can significantly improve the test accuracy in other pruning methods. We verified that the performance was not very sensitive to the value of lambda as long as lambda is in a reasonable range. Therefore, we empirically choose lambda to be 0.5 universal to *all* datasets.\\n\\n[Mishra et al., 2018] Mishra, Asit, and Debbie Marr. \\\"Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy.\\\" In ICLR 2018.\\n\\n\\n> \\u201cFinally, regarding the content in Section 5, the paper carefully presents an explicit bound on energy that ultimately leads to a constraint that is NP-hard just to project on to, although approximate solutions exist that depend on some error tolerance. However, even this requires an algorithm that is dismissed as \\\"complicated.\\\" Instead a greedy alternative is derived in the Appendix which presumably serves as the final endorsed approach. But at this point it is no longer clear to me exactly what performance guarantees remain with respect to the energy bound. Theorem 3 presents a fairly inscrutable bound, and it is not at all transparent how to interpret this in any practical sense. Note that after Theorem 3, conditions are described whereby an optimal projection can be obtained, but these seem highly nuanced, and unlikely to apply in most cases.\\u201d\\n\\nThe conditions under Theorem 3 are sufficient conditions to obtain the exactly optimal projection, i.e. the error bound is 0. However, we usually do not require such rigorous result in practice. Because the amount of parameters is very large in DNNs, the remaining budget R(W\\u2019\\u2019) is usually very small compared to E_budget. Therefore, the projection error bound is small enough in most cases.\\nAnother practical aspect of Theorem 3 is quantifying the upper-bound of the projection error in (27). In practice, we can exactly calculate this error bound and even choose to use the more accurate (but slower) algorithm in Theorem 2 when this error bound is not acceptable.\\n\\n\\n> \\u201cAdditionally, it would appear that crude bounds on the energy could also be introduced by simply penalizing/constraining the sparsity on each layer, which leads to a much simpler projection step. For example, a simple affine function of the L0 norm would be much easier to optimize and could serve as a loose bound on the energy, given that the latter should be a non-decreasing function of the L0 norm. 
Any idea how such a bound compares to those presented given all the approximations and greedy steps that must be included?\\u201d\\n\\nTo use a method based on a per-layer sparsity constraint, one must identify the sparsity bound for each of the DNN layers in a way that the whole model satisfies the energy budget while minimizing the loss. Even if an affine function of a layer\\u2019s sparsity bound can be used to estimate the layer\\u2019s energy, we still need to optimize these sparsity variables collectively across all layers for the whole model. Thus, the effectiveness of the layer-wise approach rests upon whether we can find the optimal sparsity combination for all the layers.\\n\\nNetAdapt [Yang et al., ECCV 2018] and AMC [He et al., ECCV 2018] already showed that it is non-trivial to find the optimal layer-wise sparsity bounds. NetAdapt proposed a heuristic-driven search algorithm. In our experiment, we compared against NetAdapt and show that we can achieve higher accuracy with lower or the same energy consumption (please see Table 1 and Table 2).\"}",
"{\"title\": \"Responding to your comments (part 1)\", \"comment\": \"Thanks for your thoughtful comments. The posted questions are answered as follows.\\n\\n> \\u201cThe experiments in Sec. 6.2 suggest that the activation mask is mainly beneficial when the data is highly structured. How are the benefits (in terms of weight and activation sparsity) composed in the experiments on Imagenet? How does the weight sparsity of the the proposed method compare to the related methods in these experiments? Is weight sparsity in these cases a good proxy for energy consumption?\\u201d\\n\\nAs the reviewers pointed out, the activation mask applies to cases where the data is highly structured. It does not apply to data from ImageNet. We acknowledge at the beginning of Section 3.2 that \\u201cWe do not claim that applying input mask is a general technique; rather, we demonstrate its effectiveness when applicable.\\u201d\\n\\nIn this work sparsity is not the end goal. Rather, it is a byproduct of energy saving. In fact, we observe that weight spartisy is *not* a good proxy for energy consumption, as also confirmed by prior work EAP [Yang et al., CVPR 2017]. Our method achieves lower energy consumption despite having higher density. The sparsity result on ImageNet is shown as follows. We will add the results in the revision.\\n+-------------------------+--------------------------------+---------------------------------+------------------------+\\n| DNNs | AlexNet | SqueezeNet | MobileNetV2 |\\n+-------------------------+--------------------------------+---------------------------------+------------------------+\\n| Methods | MP | SSL | EAP | Ours | MP | SSL | EAP | Ours | MP | SSL | Ours|\\n+-------------------------+------+-------+------+--------+-------+-------+------+--------+-------+-------+-------+\\n| Weights Sparsity | 8% | 35% | 9% | 31% | 34% | 61%| 28%| 48% | 52% | 63%| 63%|\\n+-------------------------+------+-------+------+--------+-------+-------+------+------+----------+------+-------+\\n\\n\\n> \\u201cHow does the activation sparsity (decay) parameter (\\\\delta) q affect the accuracy-energy consumption tradeoff for the two data sets?\\u201d\\n\\nThe decay parameter $\\\\delta q$ is used to make the tradeoff between training time and accuracy. Smaller $\\\\delta q$ leads to better accuracy, however, we need to run more outer loops of Algorithm 1. As shown in Algorithm 1, the outer loop is time consuming since it requires training of both W and M. Although smaller $\\\\delta q$ could improve the accuracy of our method, we simply set $\\\\delta q = 0.1|M|$ in all the experiments.\"}",
"{\"title\": \"Responding to your comments (part 2)\", \"comment\": \"> \\u201cThe authors show that the weight projection problem can be solved efficiently. How does the guarantee translate into wall-clock time?\\u201d\\n\\nThe most time consuming part of our proposed projection method is sorting the \\u201cprofit density\\u201d in Algorithm 2. This sorting takes O(n logn) theoretical time complexity (n is the number of weights in DNN), and can be efficiently computed on GPUs using dedicated CUDA libraries. \\nWe measured the wall-clock time of our projection algorithm on a GPU server (CPU: Xeon E3 1231-v3, GPU: GTX 1080 Ti), and the result is (the time is averaged over 100 iterations):\\n+-------------------------------------+------------+------------------+--------------------+\\n| DNNs | AlexNet | SqueezeNet | MobileNetV2 |\\n+-------------------------------------+------------+------------------+--------------------+\\n| Projection Time (seconds) | 0.170 | 0.023 | 0.032 |\\n+-------------------------------------+------------+------------------+--------------------+\\nAs the data shows, the projection step can be solved very efficiently. We will include these results in the revision.\\n\\n\\n\\n> \\u201cFilter pruning methods [1,2] reduce both the size of the weight and activation tensors, while not requiring to solve a complicated projection problem or introducing activation masks. It would be good to compare to these methods, or at least comment on the gains to be expected under the proposed energy consumption model.\\u201d\\n\\nFilter pruning methods [1,2] require a sparsity ratio to be set for each layer, and these sparsity hyper-parameters will determine the energy cost of the DNN. Manually setting all these hyper-parameters in energy constrained compression is not trivial. NetAdapt [Yang et al., 2018] proposes a heuristic-driven approach to search such sparsity ratios and use filter pruning as proposed in [2] to train DNN models. In the paper, we directly compared against NetAdapt, and show that we can achieve higher accuracy with lower/same energy consumption. Please see Table 1 and Table 2.\\n\\n\\n> \\u201cKnowledge distillation has previously been observed to be quite helpful when constraining neural network weights to be quantized and/or sparse, see [3,4,5]. It might be worth mentioning this.\\u201d\\n\\nThank you for pointing out this point. We did notice several recently papers that use knowledge distillation for quantization and compression, and we will emphasize this with the suggested references in the revision.\"}",
"{\"title\": \"Responding to your comments\", \"comment\": \"Thanks for your comments on our paper.\\n\\n> \\u201c\\u2018Our energy modeling results are validated against the industry-strength DNN hardware simulator ScaleSim\\u2019. Could the authors please elaborate on this sentence?\\u201d\\n\\nScaleSim simulates the DNN hardware execution cycle by cycle, from which it derives the total execution time and energy consumption of executing a network on the hardware. In this paper, we model the energy consumption of an network analytically (Section 3, in particular Equation 16); we compare the energy consumption analytical derived by our approach with the energy consumption estimated from ScaleSim (which simulates the hardware executions), and found that the two matched.\\n\\n\\n> \\u201cOne of the main assumptions is the following. If the value of the data is zero, the hardware can skip accessing the data. As far as I know, this is a quite strong assumption, that is not supported by many architectures. How do the authors take into account overhead of using sparse data formats in such hardware in their estimations? Is it possible to simulate such behavior in ScaleSim? Moreover, in many modern systems DRAM can only be read in chunks. Therefore it can decrease number of DRAM accesses in (4).\\u201d\\n\\nIn many today\\u2019s DNN hardware the activations and weights are compressed in the dense form, and thus only non-zero values will be accessed. This is done in prior work [Chen et al., 2016; Parashar et al., 2017]. There is a negligible amount of overhead to \\u201cunpack\\u201d and \\u201cpack\\u201d compressed data, which we simply take away from the energy budget as a constant factor. This is also the same modeling assumption used by EAP [Yang et al., CVPR 2017].\\n\\nWe agree with the reviewer that DRAM is accessed in bursts, which we did account for in our modeling. In particular, the per-access energy eDRAM we used in the modeling is the amortized energy of each access across the entire bursts. That is, instead of decreasing the number of DRAM accesses, we decrease the per-access energy. This is a standard modeling assumption widely used in the hardware architecture community and industry [Han et al, ISCA 2016; Yang et al., CVPR 2017].\"}",
"{\"title\": \"Responding to your comments (part 3)\", \"comment\": \"> \\u201cAs an implementation heuristic, the proposed Algorithm 1 gradually decays the parameter q, which controls the sparsity of the mask M. But this will certainly alter the energy budget, and I wonder how important it is to employ a complex energy constraint if minimization requires this type of heuristic.\\u201d\\n\\nThe purpose of our proposed energy constraint is to exactly characterize the dependence between the sparsity of all parameters and the energy consumption, which provides us an (almost) exact energy model and a clear goal to guide us to pursue an energy efficient model. However, due to the nontrivial structure in the energy model, we have to involve some heuristics to solve it approximately. \\n\\n\\n> \\u201cI did not see where the quantity L(M,W) embedded in eq. (17) was formally defined, although I can guess what it is.\\u201d\\n\\nThanks for pointing this out. L is the original loss, e.g., cross-entropy loss for classification. We will clarify this in the revision.\\n\\n\\n> \\u201cIn general it is somewhat troublesome that, on top of a complex, non-convex deep network energy function, just the small subproblem required for projecting onto the energy constraint is NP-hard. Even if approximations are possible, I wonder if this extra complexity is always worth it relative so simple sparsity-based compression methods which can be efficiently implemented with exactly closed-form projections.\\u201d\\n\\nAlthough the energy constrained problem is complex, our main contribution is to simplify it and propose an efficient method to solve it approximately. We measure the wall-clock time of the projection step, and across AlexNet, Squeezenet, and MobileNetV2, the projection step can be solved extremely efficiently -- within 0.2 seconds to be exact. Please also see our response to the 3rd question from Reviewer 3.\\n\\nIn addition, at the technique-level, using a simple sparsity-based compression method to train energy-constrained DNNs would require setting the sparsity threshold for each layer to satisfy the energy constraint while minimizing the loss. Such a hyper-parameter tuning is not trivial. We compare against one such method (NetAdapt) and demonstrate higher accuracy with lower/same energy (Please see Table 1 and Table 2).\\n\\n\\n> \\u201cIn Table 1, the proposed method is highlighted as having the smallest accuracy drop on SqueezeNet. But this is not true, EAP is lower. Likewise on AlexNet, NetAdapt has an equally optimal energy.\\u201d\\n\\nIn Table 1, our evaluation methodology is to configure our method to have an energy that is *the same as or lower than the lowest energy of prior work*, and compare the accuracy drops. In the case of AlexNet, our approach has a lower accuracy drop compared to NetAdapt at the same energy consumption. In the case of SqueezeNet, we show that our approach has the lowest energy among all the methods with only 0.3% higher accuracy drop than EAP. In Figure 2, we perform a comprehensive study where we vary the energy consumption of our method. We show that our method can train a network that has lower energy and less accuracy drop (the rightmost solid blue square) compared to EAP.\\n\\nWe will clarify our writing in the revision.\"}",
"{\"title\": \"Interesting idea for energy-constrained compression, but some improvements still possible\", \"review\": \"This paper describes a procedure for training neural networks via an explicit constraint on the energy budget, as opposed to pruning the model size as commonly done with standard compression methods. Comparative results are shown on a few data sets where the proposed method outperforms multiple different approaches. Overall, the concept is interesting and certainly could prove valuable in resource-constrained environments. Still I retain some reservations as detailed below.\\n\\nMy first concern is that this paper exceeds the recommended 8 page limit for reasons that are seemingly quite unnecessary. There are no large, essential figures/tables, and nearly the first 6 pages is just introduction and background material. Likewise the paper consumes a considerable amount of space presenting technical results related to knapsack problems and various epsilon-accurate solutions, but this theoretical content seems somewhat irrelevant and distracting since it is not directly related to the greedy approximation strategy actually used for practical deployment. Much of this material could have been moved to the supplementary so as to adhere to the 8 page soft limit. Per the ICLR reviewer instructions, papers deemed unnecessarily long relative to this length should be judged more critically.\\n\\nAnother issue relates to the use of a mask for controlling the sparsity of network inputs. Although not acknowledged, similar techniques are already used to prune the activations of deep networks for compression. In particular, various forms of variational dropout essentially use multiplicative weights to remove the influence of activations and/or other network components similar to the mask M used is this work. Representative examples include Neklyudov et al., \\\"Structured Bayesian Pruning via Log-Normal Multiplicative Noise,\\\" NIPS 2017 and Louizos et al., \\\"Bayesian Compression for Deep Learning,\\\" NIPS 2017, but there are many other related alternatives using some form of trainable gate or mask, possibly stochastic, to affect pruning (the major ML and CV conferences over the past year have numerous related compression papers). So I don't consider this aspect of the paper to be new in any significant way.\\n\\nMoreover, for the empirical comparisons it would be better to compare against state-of-the-art compression methods as opposed to just the stated MP and SSL methods from 2015 and 2016 respectively. Despite claims to the contrary on page 9, I would not consider these to be state-of-the-art methods at this point.\\n\\nAnother comment I have regarding the experiments is that hyperparameters and the use of knowledge distillation were potentially tuned for the proposed method and then simultaneously applied to the competing algorithms for the sake of head-to-head comparison. But to me, if these enhancements are to be included at all, tuning must be done carefully and independently for each algorithm. Was this actually done? Moreover it would have been nice to see results without the confounding influence of distillation to isolate sources of improvement, but no ablation studies were presented.\\n\\nFinally, regarding the content in Section 5, the paper carefully presents an explicit bound on energy that ultimately leads to a constraint that is NP-hard just to project on to, although approximate solutions exist that depend on some error tolerance. 
However, even this requires an algorithm that is dismissed as \\\"complicated.\\\" Instead a greedy alternative is derived in the Appendix which presumably serves as the final endorsed approach. But at this point it is no longer clear to me exactly what performance guarantees remain with respect to the energy bound. Theorem 3 presents a fairly inscrutable bound, and it is not at all transparent how to interpret this in any practical sense. Note that after Theorem 3, conditions are described whereby an optimal projection can be obtained, but these seem highly nuanced, and unlikely to apply in most cases.\\n\\nAdditionally, it would appear that crude bounds on the energy could also be introduced by simply penalizing/constraining the sparsity on each layer, which leads to a much simpler projection step. For example, a simple affine function of the L0 norm would be much easier to optimize and could serve as a loose bound on the energy, given that the latter should be a non-decreasing function of the L0 norm. Any idea how such a bound compares to those presented given all the approximations and greedy steps that must be included?\", \"other_comments\": [\"As an implementation heuristic, the proposed Algorithm 1 gradually decays the parameter q, which controls the sparsity of the mask M. But this will certainly alter the energy budget, and I wonder how important it is to employ a complex energy constraint if minimization requires this type of heuristic.\", \"I did not see where the quantity L(M,W) embedded in eq. (17) was formally defined, although I can guess what it is.\", \"In general it is somewhat troublesome that, on top of a complex, non-convex deep network energy function, just the small subproblem required for projecting onto the energy constraint is NP-hard. Even if approximations are possible, I wonder if this extra complexity is always worth it relative to simple sparsity-based compression methods which can be efficiently implemented with exact closed-form projections.\", \"In Table 1, the proposed method is highlighted as having the smallest accuracy drop on SqueezeNet. But this is not true; EAP is lower. Likewise on AlexNet, NetAdapt has an equally optimal energy.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The good paper, there are several questions\", \"review\": \"The paper is dedicated to energy-based compression of deep neural networks. While most works on compression are dedicated to decreasing the number of parameters or decreasing the number of operations to speed-up or reducing of memory footprint, these approaches do not provide any guarantees on energy consumption. In this work the authors derived a loss for training NN with energy constraints and provided an optimization algorithm for it. The authors showed that the proposed method achieves higher accuracy with lower energy consumption given the same energy budget. The experimental results are quite interesting and include even highly optimized network MobileNetV2.\\n\\nSeveral questions and concerns.\\n\\u2018Our energy modeling results are validated against the industry-strength DNN hardware simulator ScaleSim\\u2019. Could the authors please elaborate on this sentence?\\n\\nOne of the main assumptions is the following. If the value of the data is zero, the hardware can skip accessing the data. As far as I know, this is a quite strong assumption, that is not supported by many architectures. How do the authors take into account overhead of using sparse data formats in such hardware in their estimations? Is it possible to simulate such behavior in ScaleSim? Moreover, in many modern systems DRAM can only be read in chunks. Therefore it can decrease number of DRAM accesses in (4).\", \"small_typos_and_other_issues\": \"Page 8. \\u2018There exists an algorithm that can find an an \\\\epsilon\\u2019\\nPage 8.\\u2019 But it is possible to fan approximate solution\\u2019\\nPage 4. It is better to put the sentence \\u2018where s convolutional stride\\u2019 after (2).\\nIn formulation of the Theorem 3, it is better to explicitly state that A contains rational numbers only since gcd is used.\\nOverall, the paper is written clearly and organized well, contains interesting experimental and theoretical results.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting paper\", \"review\": \"The paper proposes a method for neural network training under a hard energy constraint (i.e. the method guarantees the energy consumption to be upper bounded). Based on a systolic array hardware architecture the authors model the energy consumption of transferring the weights and activations into different levels of memory (DRAM, Cache, register file) during inference. The energy consumption is therefore determined by the number of nonzero elements in the weight and activation tensors. To minimize the network loss under an energy constraint, the authors develop a training framework including a novel greedy algorithm to compute the projection of the weight tensors to the energy constraint.\", \"pros\": \"The proposed method allows to accurately impose an energy constraint (in terms of the proposed model), in contrast to previous methods, and also yields a higher accuracy than these on some data sets. The proposed solution seems sound (although I did not check the proofs in detail, and I am not very familiar with hardware energy consumption subtleties).\", \"questions\": \"The experiments in Sec. 6.2 suggest that the activation mask is mainly beneficial when the data is highly structured. How are the benefits (in terms of weight and activation sparsity) composed in the experiments on Imagenet? How does the weight sparsity of the the proposed method compare to the related methods in these experiments? Is weight sparsity in these cases a good proxy for energy consumption?\\n\\nHow does the activation sparsity (decay) parameter (\\\\delta) q affect the accuracy-energy consumption tradeoff for the two data sets?\\n\\nThe authors show that the weight projection problem can be solved efficiently. How does the guarantee translate into wall-clock time?\\n\\nFilter pruning methods [1,2] reduce both the size of the weight and activation tensors, while not requiring to solve a complicated projection problem or introducing activation masks. It would be good to compare to these methods, or at least comment on the gains to be expected under the proposed energy consumption model.\\n\\nKnowledge distillation has previously been observed to be quite helpful when constraining neural network weights to be quantized and/or sparse, see [3,4,5]. It might be worth mentioning this.\", \"minor_comments\": \"- Sec. 3.4. 1st paragraph: subscript -> superscript\\n- Sec. 6.2 first paragraph: pattens -> patterns, aliened -> aligned\\n\\n[1] He, Y., Zhang, X., & Sun, J. (2017). Channel pruning for accelerating very deep neural networks. ICCV 2017.\\n[2] Li, H., Kadav, A., Durdanovic, I., Samet, H., & Graf, H. P. Pruning filters for efficient convnets. ICLR 2017.\\n[3] Mishra, A., & Marr, D. Apprentice: Using knowledge distillation techniques to improve low-precision network accuracy. ICLR 2018.\\n[4] Tschannen, M., Khanna, A., & Anandkumar, A. StrassenNets: Deep learning with a multiplication budget. ICML 2018.\\n[5] Zhuang, B., Shen, C., Tan, M., Liu, L., & Reid, I. Towards effective low-bitwidth convolutional neural networks. CVPR 2018.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
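The reviews above discuss projecting weight tensors onto an energy constraint. As a hedged illustration — not the paper's greedy energy-aware projection, whose details are not given here, but the simplest member of the same family — the sketch below shows the standard Euclidean projection onto an l0 (sparsity) ball, i.e., magnitude-based hard thresholding. The function name `project_topk` is our own.

```python
import torch

def project_topk(weights: torch.Tensor, k: int) -> torch.Tensor:
    # Euclidean projection onto {w : ||w||_0 <= k}: keep the k entries of
    # largest magnitude and zero out the rest (hard thresholding).
    flat = weights.flatten()
    if k >= flat.numel():
        return weights.clone()
    idx = flat.abs().topk(k).indices
    out = torch.zeros_like(flat)
    out[idx] = flat[idx]
    return out.view_as(weights)

w = torch.randn(4, 4)
w_sparse = project_topk(w, k=5)  # at most 5 nonzero entries survive
```

An energy-constrained projection of the kind the reviewers describe generalizes this by weighting entries according to the memory level they occupy, rather than treating all nonzeros equally.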
|
HyxSBh09t7 | Graph Generation via Scattering | [
"Dongmian Zou",
"Gilad Lerman"
] | Generative networks have made it possible to generate meaningful signals such as images and texts from simple noise. Recently, generative methods based on GAN and VAE were developed for graphs and graph signals. However, the mathematical properties of these methods are unclear, and training good generative models is difficult. This work proposes a graph generation model that uses a recent adaptation of Mallat's scattering transform to graphs. The proposed model is naturally composed of an encoder and a decoder. The encoder is a Gaussianized graph scattering transform, which is robust to signal and graph manipulation. The decoder is a simple fully connected network that is adapted to specific tasks, such as link prediction, signal generation on graphs and full graph and signal generation. The training of our proposed system is efficient since it is only applied to the decoder and the hardware requirement is moderate. Numerical results demonstrate state-of-the-art performance of the proposed system for both link prediction and graph and signal generation. These results are in contrast to experience with Euclidean data, where it is difficult to form a generative scattering network that performs as well as state-of-the-art methods. We believe that this is because of the discrete and simpler nature of graph applications, unlike the more complex and high-frequency nature of Euclidean data, in particular, of some natural images. | [
"graph generative neural network",
"link prediction",
"graph and signal generation",
"scattering network"
] | https://openreview.net/pdf?id=HyxSBh09t7 | https://openreview.net/forum?id=HyxSBh09t7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryglXPwZgV",
"r1lYN5pryN",
"r1x35juryN",
"HyeaY7H9CX",
"rkeLHQr5RQ",
"rJlWAWSqRX",
"H1gJGbrqC7",
"BkgwHNj93X",
"r1eDvfHq2X",
"ByeF3skShm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544808216250,
1544047153086,
1544027028279,
1543291781244,
1543291709742,
1543291336949,
1543291142844,
1541219390801,
1541194334894,
1540844464938
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1539/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1539/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1539/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1539/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1539/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1539/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1539/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1539/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1539/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1539/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"AR1 is concerned about the novelty and what are exact novel elements of the proposed approach. AR2 is worried about the novelty (combination of existing blocks) and lack of insights. AR3 is also concerned about the novelty, complexity and poor evaluations/lack of thorough comparisons with other baselines. After rebuttal, the reviewers remained unconvinced e.g. AR3 still would like to see why the proposed method would be any better than GAN-based approaches.\\n\\nWith regret, at this point, the AC cannot accept this paper but AC encourages the authors to take all reviews into consideration and improve their manuscript accordingly. Matters such as complexity (perhaps scattering networks aren't the most friendly here), clear insights and strong comparisons to generative approaches are needed.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Some merit.\"}",
"{\"title\": \"Response to feedback\", \"comment\": [\"The new uploaded version of the paper addressed your comments on the very few unclear sentences. If you have additional comments let us know.\", \"Thanks for pointing to the new github page with the code of MolGAN. It was not available before submission (it was only initiated, with partial information, 3 days before submission).\"]}",
"{\"title\": \"thanks for the comments\", \"comment\": \"Thanks a lot for an elaborate reply.\\nI still believe that the paper clarity could be improved, and, generally, I am not very convinced by the arguments regarding the novelty and the benefits wrt to existing generative models. \\nBTW, the code of MolGAN seems to be publicly available (I do not know if it was when the paper was submitted though) https://github.com/nicola-decao/MolGAN\"}",
"{\"title\": \"Response to Reviewer 3 (cont'd)\", \"comment\": \"Clarification of statements:\\n\\n* Explanation of \\u201dare complex as well as difficult to train and fine-tune.\\u201d\\nWe meant by \\u201ccomplex\\u201d that it is rather difficult to understand properties of GANs and VAEs. On the other hand, as we mentioned above, for our procedure there is some understanding of its robustness to signal and graph manipulation. As for \\u201cdifficult to train and fine-tune\\u201d, it is well known that it is difficult to train a GAN since it suffers from local minima and diminishing gradients. Furthermore, training VAE in the Euclidean domain to produce unblurred samples is also known to be difficult. Anyway, we slightly rewrote the text.\\n\\n* Explanation of reference to the Euclidean domain. \\nWe first discussed images and texts and thus talked about the Euclidean domain and later mentioned generalization to the graph domain. We slightly rewrote the text to make it even clearer. We remark though that a recurrent network assumes a sequence or time series, which has a 1D Euclidean structure. Similarly, a convolutional neural network assumes a domain where convolution makes sense. Of course there are generalized notions, as we mention, such as \\u201cgraph convolution\\u201d, which is not a mathematical convolution.\\n\\n* Explanation of the importance of the quality of the prescribed representation. \\nWe choose to fix a prescribed encoder and train a decoder only. In this way the loss function is purely associated with the decoder, which is easier to understand and is more tractable. However, since we do not train the encoder, it is very important to make sure that the encoding is meaningful. Properties that make our encoder meaningful are described in Zou & Lerman (2018). We rewrote this sentence. \\n\\n* On whether training two parts is a bad thing. \\nFor VAE, the model can be seen as a single neural net, but its loss function corresponds to two components: \\nthe encoder (represented in the KL part) and the the decoder (represented in the cross-entropy part). The main issue is how to weigh the two parts of the energy function. One way to look at the loss function is to regard it as an approximation to the Evidence Lower Bound (ELBO) and use equal weights, but it is based on a crude approximation. For GAN, training two parts is more difficult since the gradient has to be taken iteratively for the generator and the discriminator. Of course, training one component does not necessarily outperforms VAE or GAN (see e.g., the results in Angles & Mallat (2018)), but it is an interesting paradigm, whose properties might be better understood.\\n\\n* On correctness of \\u201dGAN-type graph networks use a discriminator in order to compete with the generator and make its training more powerful.\\u201d \\nMathematically, GANs minimize KL divergences between the data distribution and the generation distribution. However, intuitively, the discriminator is used for making the task of generation hard, thus the generator has to be sufficiently good in order to fool the discriminator.\\n\\n* On comparison of the QM9 dataset: we mentioned this issue above.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We believe that, as indicated by reviewer 2, the paper is clear and well-written. Later on, we address all the small issues you raised. Nonetheless, we think it is unfair to claim that there are various (or possibly many) imprecisions and significant amount of statements which are hard to understand. We agree that there were few unclear sentences.\\n\\nWe discussed issues with novelty above. As for the particular comparison with Angles & Mallat (2018). We indeed follow the basic framework of this paper, but we use it with the recent graph scattering transform instead of the original one, we suggest it for different graph tasks with different types of decoders and most importantly, we are able to obtain competitive results for the discrete graph applications, unlike the results of Angles and Mallat for image data, which are not competitive with state-of-the-art generative neural networks.\", \"in_comparison_to_gans_and_vaes_the_proposed_method_has_the_following_advantages\": \"1. There is no need to train the encoder. 2. There is an established mathematical understanding of the robustness of the encoder to signal and graph manipulation (see Zou & Lerman (2018)). 3. The numerical results of this method are more competitive.\\n\\nThe scattering transform is not part of the training and thus requires to be executed only once. In the numerical section we train 1000 epochs and thus the total time (25.75s for Cora and 17.76s for Citeseer) is smaller than that for VGAE (80s for Cora and 108.2s for Citeseer). Even if we just train 200 epochs (as in the VGAE paper), the total time is still smaller than that for VGAE. In particular, application on our machine of VGAE to the PubMed dataset exhausted all computing resources. Therefore, there is a benefit for not training the encoder.\\n\\nWe indeed claim that our method does not require training of the encoder. As for parameters chosen for the scattering transform, they are very generic and cannot be considered as requiring training. We use the Shannon wavelets, which are the simplest wavelets and have no special parameters, we choose J = 3 and 3 layers, similarly to Zou & Lerman (2018) and we do not expect any improvement with higher J or additional layers. Also, choices of reduced dimensions are mentioned in the text. Dimension reductions were performed in order to save time and reduce redundancies. We did not notice sensitivity with different dimensions chosen. There are no hidden parameters that need to be carefully tuned. \\n\\nOur numerical part emphasizes the graph generation and not the signal generation. The signal generation is presented as a simple sanity check and there are no quantitative estimates for it. Also, one needs to recall that image generation models produce better images than graph-based generation models. There was no space to include images generated by other methods and such a comparison may not be meaningful. We comment though that we did similar experiments with GAN and VAE and we now report them in the appendix. Since we are not aware of any previous work addressing this task, we constructed the networks by replacing specific parts in a standard GAN / VAE with graph networks. For GAN, the discriminator is a graph neural net following Kipf & Welling (2017) and the generator is fully-connected; for VAE, the encoder is a graph neural net following Kipf & Welling (2017). 
It seems that GAN is worse and VAE is comparable, but it is hard to quantify the differences.\\n\\nBoth MolGAN and GraphVAE do not have their codes available online. We thus had to use their reported results. Therefore, we cannot have the same training set as GraphVAE (their training set was not specified). Furthermore, GraphVAE used a validation set, where other works do not need it. Also, GraphVAE fixes the number of atoms to be nine for each molecule and thus does not use training data with less than 9 atoms. On the other hand, MolGAN trains over the whole QM9 dataset. It also requires padding to deal with fewer atoms than 9. Our particular choices make sense to us and there is nothing we could do about exact comparison with their codes as they did not provide them. We made it very clear in the original manuscript. We do not see any way to exactly compare with the original setting of GraphVAE, but we can compare with the exact setting of MolGAN. Even though this setting is not natural to us, we added such a comparison in the revised version. In this setting, validity by scattering is slightly lower than MolGAN but still good, on the other hand, the novelty and uniqueness by scattering are higher.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We believe that we give clear credits to previous works and it is thus surprising that \\u201cit is difficult to discern what parts of this paper were new work.\\u201d As stated in the paper the graph scattering transform is due to Zou and Lerman and it relies on the graph wavelets of Hammond et al. We claimed that \\u201cthe adjustment of the structure of the decoder to the three types of tasks does not require a lot of effort\\u201d. The specific decoder we used for link prediction is a very natural choice (just like using a natural fully connected network for convolutional neural networks). This decoder is also not a main issue in Kipf & Welling\\u2019s work (but the idea of combining VAE and graph convolution for graph tasks makes it interesting). Despite the fact that we follow previous works, we believe they are several interesting points to people who care about graph generation. We detailed these points above when addressing the novelty concern of reviewer 2.\", \"specific_responses\": [\"In the definition of S[p]f (page 5) a \\u201cpath\\u201d p is a sequence of scales (indeed, it was denoted in the text by p = (j_1, \\u00b7 \\u00b7 \\u00b7 , j_m) on Page 5). Each scale corresponds to a wavelet transform at a certain layer (we further clarify it in the new manuscript). Figure 1 in Zou & Lerman (2018) clarifies the transform.\", \"Explanation of the whitening operation A and possible elimination of information encoded in \\\\bar{X}: In generation tasks, it is common to generate a sample from Gaussian noise and send it to the decoder. Our whitening procedure maps the latent variable \\\\bar{X} to Gaussian noise. It also reduces the dimension of the signal. We find such dimension\", \"reduction useful and even necessary. Indeed, the dimension of the output of the scattering transform is very high, since this output corresponds to different paths and thus has a lot of redundancy.\", \"On the choice of loss function at the top of page 6: Thanks for noticing a typo in our paper. It should indeed be a sum of log-likelihoods and this is how we implemented the code. We guess that by \\u201cthis loss doesn\\u2019t seem to account for including edges where there are none\\u201d you mean that the sum term does not include the case where W(i, j) = 0. Note that this is exactly the correct form of a cross-entropy loss. Indeed, here W(i, j) \\\\neq 0 corresponds to probability (of connecting two vertices) 1 and W(i, j) = 0 corresponds to probability 0. A cross-entropy loss contains a term with respect to W(i, j) = 0, which is equal to 0, since the true probability is 0.\", \"On significance of dimension reduction: The dimension is reduced to 256, not from 784, but from the dimension of the output of the scattering transform, which is, 784 \\u00d7 13 = 10, 192 (note that there are 13 paths). The comment \\u201cI wonder how their approach compares to e.g., a low-pass filter or simple compression algorithm\\u201d is unclear to us.\", \"Please let us know what kind of comparison you would like us to pursue and what should be interesting about it.\", \"Clarification of 4 types of atoms: It is sufficient to encode C, N, O, F and the types of bonds that connect them. It is not necessary to encode H because it will be uniquely determined by the other atoms and bonds. For instance, if we have two C\\u2019s connected by a double bond, then it has to be CH2 = CH2 (here the = sign denotes a double bond and not an equality).\"]}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"\", \"on_lack_of_novelty\": \"Even though the components of the graph scattering generative network are derived from previous works, it raises several interesting points: 1. It describes a universal way of using graph scattering for different graph generation tasks. 2. Unlike GAN and VAE based methods, the encoding phase does not require training. 3. Unlike the generative scattering transform of Angles and Mallat (2018), which does not perform as well as state-of-the-art methods for imaging tasks, the proposed one is very competitive for discrete graph data and it should be noted. To demonstrate the problem with your criticism that you expressed as \\u201csimple combination of existing encoders and decoders\\u201d, one may apply it to the very interesting work of Angles and Mallat in ICLR 2018 and misjudge its contribution.\", \"on_lack_of_insights\": \"The reason why this generation method is useful can be explained by its robustness to signal and graph manipulation (see Proposition 5.1 and Theorems 5.2 and 5.3 in Zou and Lerman (2018)). Nevertheless, in our opinion, practical performance, which is emphasized here, is more important to verify (we mention above practical deficiencies of the Euclidean generative network of Angles and Mallat (2018)).\", \"on_qm9_performance_and_lack_of_reference\": \"Even though we find JT-VAE, the method in the ICML paper you mentioned, interesting and we now refer to it, it is not directly relevant to our experiments. Also, your claim \\u201ccould already achieve 100% valid\\u201d is not precise. JT-VAE achieves 93.5% validity without validity check in the decoding phase and 100% only after a full validity check (please check the conference version and not the arXiv one). More importantly, the result is on a different dataset (ZINC), not QM9. For QM9, the tree decomposition of JT-VAE is not expected to improve results since molecules in QM9 are composed of at most nine atoms. Moreover, JT-VAE reinforces molecule validity (see e.g., Step 3 in Algorithm 1 of the ICML paper). This validity reinforcement can be applied to other graph-based methods and to be fair to other methods, comparison with JT-VAE should include applying it to them too.\"}",
"{\"title\": \"Simple combination of existing works\", \"review\": \"The paper used the graph scattering network as the encoder, and MLP as the decoder to generate links/graph signals/graphs.\", \"pros\": \"1.\\tClearly written. Easy to follow.\\n2.\\tNo need to train the encoder\\n3.\\tGood results on link prediction tasks\", \"cons\": \"1.\\tLack of novelty. It is a simple combination of existing encoders and decoders. For example, compared to VGAE, the only difference in the link prediction task is using a different encoder. Even if the performance is very good, it can only demonstrate the effectiveness of others\\u2019 encoder work and this paper\\u2019s correct selection of a good encoder. \\n2.\\tLack of insights. As a combination of existing works, if the paper can deeply explain the why this encoder is effective for the generation, it is also beneficial. But we also do not see this part. In particular, in the graph generation task, the more important component may be the decoder to regulate the validness of the generated graphs (e.g. \\u201cConstrained Generation of Semantically Valid Graphs via Regularizing Variational Autoencoders. In NIPS 2018\\u201d which used the similar decoder but adding strong regularizations in VAE). \\n3.\\t Results on QM9 not good enough and lack of references. Some recent works (e.g. \\u201cJunction Tree Variational Autoencoder for Molecular Graph Generation, ICML 2018\\u201d) could already achieve 100% valid.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting topic, but the paper does not contain enough novel content.\", \"review\": \"## Summary ##\\n\\nThe authors apply the wavelet scattering transform to construct an autoencoder for graphs. They apply this architecture to reconstructing citation graphs, images, and generating molecules.\\n\\n## Assessment ##\\n\\nIt was difficult to discern what parts of this paper were new work. The graph scattering transform seems to have appeared first in Hammond et al. or Zhou and Lerman. The proposed decoder in 3.2.1 is attributed to Kipf and Welling. The molecule generation portion was interesting, but I don't think there was enough novel content in this paper to justify acceptance to ICLR. I could be convinced otherwise if the authors' contribution is clarified in rebuttal.\\n\\n## Questions and Concerns ##\\n\\n* I found the definition of $S[p]f$ (page 5) a little confusing. In particular, what constitutes a 'path' $p$ in this setting?\\n* Can you motivate the whitening operation $A$ that is applied to the encoding? It seems like this is eliminating a lot of the information encoded in $\\\\bar{X}$.\\n* I'm confused by the choice of loss function at the top of page 6. Since $D(z) = \\\\sigma(...)$, it seems like $D(i, j)$ is meant to represent the probability of a link between $i$ and $j$. In that case, the loss is a sum of negative probabilities, which is unusual. Was this meant to be a sum of log probabilities? Also, this loss doesn't seem to account for including edges where there are none. Can you explain why this is the case?\\n* In section 4.2, the encoded dimension is 256 IIUC. Considering that the data was restricted to the \\\"boots\\\" class, the reduction from 784-->256 dimensions does not seem significant. The authors concede that some high-frequency information is lost, so I wonder how their approach compares to e.g. a low-pass filter or simple compression algorithm.\\n* Section 4.3 states that the molecules considered are constructed from atoms C, H, O, N and F. Later, there are multiple references to only 4 atom types, one-hot vectors in $R^4$ etc. Clarify please!\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting paper, may lack novelty and clarity, seems to have problems in experimental evaluation.\", \"review\": \"Summary:\\nThe paper presents a generative model for graphs which is a VAE-like architecture where the encoder is a scattering transform with fixed parameters (rather than a trainable neural net)\", \"pros\": [\"The problem of graph generative models is very important, and a lot of the existing methods are not very scalable.\", \"Using spectral methods in place of standard \\\"neural net operations\\\" makes a lot of sense.\", \"Numerical results for the \\\"link prediction\\\" task seem to be significantly better than those of baselines.\"], \"cons\": \"- The paper contains various imprecisions (see the non-exhaustive list below), and significant amount of statements which are hard to understand.\\n- I am not sure if the work can be considered particularly novel: in particular, it is not really emphasised what is the difference with [Angles & Mallat '2018].\\n- The motivation for the work is not entirely clear: it is true that GANS and VAEs have their issues, but in my view it is not really explained / argued why the proposed method would solve them.\\n- I find the argument about the efficiency not very convincing, especially after looking at the members (bottom of p. 7): the scattering transform alone takes several orders of magnitude longer than the baseline. Authors also mention that their method does not require training \\nof the encoder, but I do not see any comparisons with respect to number of parameters.\\n- The experimental evaluation for \\\"signal generation\\\" and \\\"graph generation\\\" is not very convincing. For the former there is no real comparison to existing models. And for the latter, the experimental setup seems a bit strange: it appears that the models were trained on different subsets of the dataset, making the comparison not very meaningful. Also, I would expect to see the same methods to be compared to a cross all the tasks (unless it is impossible for some reason).\\n\\nVarious typos / imprecisions / unclear statements:\\np.1, \\\"are complex as well as difficult to train and fine-tune.\\\": not at all clear what this means.\\np.1, \\\"Their development is based on fruitful methods of deep learning in the Euclidean domain, such as convolutional and recurrent neural networks.\\\": Recurrent and convolution neural network are not necessarily restricted to Euclidean domains. \\np.1, \\\"Using a prescribed graph representation, it is possible to avoid training\\nthe two components at the same time, but the quality of the prescribed representation is important\\nin order to generate promising results.\\\": not clear what this sentence means.\\np.2, \\\"Unlike GAN or VAE, the model in this paper does not require training two components either iteratively or at the same time.\\\": I do not see why that would necessarily be a bad thing, especially in the case of VAE where traditional training in practice corresponds to training a single neural net.\\np.3, \\\"GAN-type graph networks use a discriminator in order to compete with the generator and make its training more powerful.\\\": I am not sure this statement is strictly correct.\\np.9: \\\"We remark that the number of molecules in the training sets are not identical to that in ...\\\": does this mean that the models are effectively trained on different data? 
In that case, the comparison is not very meaningful.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
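The loss-function exchange in the record above (reviewer question and author response) can be made concrete. Assuming, as the discussion suggests, that the decoder outputs a link probability D(i,j) = sigma(z_i^T z_j) and W is the binary adjacency matrix — notation taken from the exchange, not verified against the paper itself — the corrected sum-of-log-likelihoods loss would be the standard binary cross-entropy:

```latex
\mathcal{L} \;=\; -\sum_{i,j}\Big[\, W(i,j)\,\log D(i,j)
      \;+\; \big(1 - W(i,j)\big)\,\log\big(1 - D(i,j)\big) \Big]
```

When W(i,j) = 1 only the first term survives, and when W(i,j) = 0 only the second does, so edges and non-edges are both accounted for — one consistent reading of the point the authors make in their response to the reviewer's question about missing edges.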
|
SklrrhRqFX | Learning Physics Priors for Deep Reinforcement Learning | [
"Yilun Du",
"Karthik Narasimhan"
] | While model-based deep reinforcement learning (RL) holds great promise for sample efficiency and generalization, learning an accurate dynamics model is challenging and often requires substantial interactions with the environment. Further, a wide variety of domains have dynamics that share common foundations like the laws of physics, which are rarely exploited by these algorithms. Humans often acquire such physics priors that allow us to easily adapt to the dynamics of any environment. In this work, we propose an approach to learn such physics priors and incorporate them into an RL agent. Our method involves pre-training a frame predictor on raw videos and then using it to initialize the dynamics prediction model on a target task. Our prediction model, SpatialNet, is designed to implicitly capture localized physical phenomena and interactions. We show the value of incorporating this prior through empirical experiments on two different domains – a newly created PhysWorld and games from the Atari benchmark, outperforming competitive approaches and demonstrating effective transfer learning. | [
"Model-Based Reinforcement Learning",
"Intuitive Physics"
] | https://openreview.net/pdf?id=SklrrhRqFX | https://openreview.net/forum?id=SklrrhRqFX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkgumKTZl4",
"Bkeq10KhJ4",
"rJlJpsLiJE",
"S1llsjn9JN",
"HJetKohcJN",
"r1xm853cyE",
"S1ecIeKSkE",
"Skgq5LTxR7",
"SkgxeUagCX",
"S1gsaHTlR7",
"ryl_iHpl07",
"SJex19xJam",
"H1euYXIq2m",
"H1ljcqfW37"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544833311786,
1544490466286,
1544412087341,
1544371095661,
1544371072608,
1544370763058,
1544028241688,
1542669970115,
1542669799882,
1542669762731,
1542669728398,
1541503448268,
1541198720452,
1540594322548
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1538/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1538/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1538/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1538/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1538/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1538/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1538/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1538/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper suggests a new way to learn a physics prior, in an action-free way from raw frames. The idea is to \\\"learn the common rules of physics\\\" in some sense (from purely visual observations) and use that as pre-training. The authors made a number of experiments in response to the reviewer concerns, but the submission still fell short of their expectations. In the post-rebuttal discussion, the reviewers mentioned that it's not clear how SpatialNet is different from a ConvLSTM, mentioned the writing quality and the fact that the \\\"physics prior\\\" is really quite close to what others call video prediction in other baselines.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for your response. The extra experiments definitely make this paper much convincing. However, learning *physics* priors is not clarified in the text/experiments. The ablation study is not convincing to me to show *physics priors* is different or superior to \\\"imagination augmented\\\"(video prediction) methods. I think the general direction of this paper is good, but the paper needs major revision. Therefore, I am unable to recommend this paper for acceptance.\"}",
"{\"title\": \"Feedback on Rebuttal\", \"comment\": \"I thank the authors for providing a detailed rebuttal and updating the paper to reflect it. The proposed method outperforms the baselines of ISP and JISP on the physics tasks.\\n\\nMy main pro for the paper is that is exhaustively testing the utility of physics prior in learning a goal directed policy. Additional experiments show the proposed method outperforms the closely related I2A baseline.\", \"my_main_concerns_are\": \"(a) From figure 11, it seems that IPA and PPO are comparable. While it is true, that IPA outperforms PPO in many more games, but in many cases the performance gain within error bars. \\n\\n(b) The difference between ConvLSTM and Spatial Net is still unclear to me. In one of the comments, authors mention: \\\"SpatialNet also has a input copy mechanism that add the current state to the output of the spatialnet encoding\\\". This is trivial to implement in ConvLSTM -- and will give more insights into why Spatial Net is outperforming ConvLSTM. Right now, results are in favor of Spatial net, but the exact differences from ConvLSTM that lead to performance difference is unclear. \\n\\nDespite a very good rebuttal, I am still concerned by (a)/(b) and can therefore cannot recommend the paper to be accepted. I highly encourage authors to clarify these points and resubmit to a future conference. Utilizing physics prior to improve policy learning has great potential.\"}",
"{\"title\": \"Request for feedback\", \"comment\": \"Dear reviewer,\\nThank you so much for your original comments. We spent a large amount of work adjusting the clarifications initially requested. We would appreciate it if you could take a look at the revised version and let us know your thoughts.\"}",
"{\"title\": \"Request for feedback\", \"comment\": \"Dear reviewer,\\nThank you so much for your original comments. We spent a large amount of work adjusting the clarifications initially requested. We would appreciate it if you could take a look at the revised version and let us know your thoughts.\"}",
"{\"title\": \"Clarifications\", \"comment\": \"Thank you for getting back to us! We explicitly state in the abstract that we propose a method for learning physics priors from raw videos and demonstrate its applicability to deep RL. We have provided detailed comparisons to the most related pieces of work in both the Related Work (Section 2), where we mention the key differences between our work and the prior work, as well as in the experiments sections (Sections 4 and 5), where we provide empirical comparison to these methods. These include the papers you mentioned earlier - ConvLSTM (Xingjian et al.), I2A architecture (Weber et al.) and RCNet (Oh et al.), in addition to other baselines.\\n\\nRegarding your comment \\u201cRather, I suggest focusing on the difference with existing prior works\\u201d - we would appreciate if you could provide us with any specific papers you had in mind. If you feel the title is too generic, we would also appreciate any suggestions on specific changes.\"}",
"{\"title\": \"Thank you for additional control experiments.\", \"comment\": \"Thank you authors for adding appropriated baselines and comparisons as requested.\\nI increased rating, but I'm still concerned with the focus of the paper.\\nIn terms of the writing, it seems the paper's focus is about studying \\\"physics priors for reinforcement learning\\\" in general.\\nHowever, the title \\\"physics priors for reinforcement learning\\\" seems too general to differentiate the paper from few existing works related to this topic.\\nRather, I suggest focusing on the difference with existing prior works and put more emphasis on novel contributions would make the paper more clear and easy to understand.\\nBecause this requires major revision in the paper, I would keep my rating as rejection for this submission.\"}",
"{\"title\": \"Added Requested Baselines, Atari Results, Writing Clarifications\", \"comment\": \"We thank all the reviewers for the helpful feedback. We have addressed all the comments below as replies to individual reviews. We have also made the following modifications to the revised version of the paper:\\n\\na) We have included four additional baselines as suggested by the reviewers including ConvLSTM, imagination augmented agents (I2A), and two versions of our IPA model: (1) ISP: initialization of a model with weights learned with SpatialNet on the PhysWorld environment -- where we use the convolutional encoding of SpatialNet as input into the policy network, and (2) JISP: jointly optimizing future frame prediction + environment reward (Section 5.1, Table 2).\\n\\nb) We emphasize that our experiments are on a stochastic version of Atari, which is more challenging than the benchmark used by previous work [Machado et al., 2017]. We have added results for the entire suite of Atari games (along with standard deviations across runs with different random seeds) in the appendix.\\n\\nc) We have also added clarifications to the writing suggested by the reviewers, including ego-dynamics (Section 5), descriptions and details of the new baselines (Section 5.1), as well as analysis on their performance compared to our method.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for the comments. We provide individual responses below.\\n\\n(a) In our IPA model, the policy net and frame prediction network do not share any parameters. SpatialNet is fine-tuned to more accurately predict future frames on the new environment while the policy net is optimized for control performance. Policy gradients are not back-propagated to SpatialNet. We have added this clarification in the paper (Section 3.2). Both PPO and IPA agents see identical number of frames, we train the frame prediction network on the frames used to train the policy. We have updated the description in Section 3.2 to make this clearer.\\n\\n(b) All the tested approaches see exactly the same number of frames on the control environment. SpatialNet sees extra frames from the PhysVideos data, but these are offline, from a different domain and contain objects of different shapes, colors, dynamics from the target control environments (PhysWorld and Atari). The PhysVideo frames are only used to train SpatialNet for dynamics prediction, and not for any policy learning.\\n\\n(c) Thank you for the suggestions. We have added comparisons with the two baselines: (1) ISP: initialization of a model with weights learned with SpatialNet on the PhysWorld environment -- where we use the convolutional encoding of SpatialNet (z_t) as input into the policy network, and (2) JISP: jointly optimizing future frame prediction + environment reward (see Section 5.1, Table 2). We find that initializing with the weights from SpatialNet performs about the same as normal PPO, likely due to much of the initially learned priors being corrupted with reward updates. As for the second baseline, we find that joint training does provide a benefit in performance over PPO, but not as large as IPA (except on PhysForage).\\n\\n(d) We first emphasize that our experiments are on a stochastic version of Atari, which is more challenging than the benchmark used by previous work [Machado et al., 2017]. Further, we did perform experiments on all Atari games but didn\\u2019t present them all due to lack of space. We have added results for the entire suite of Atari games in the appendix in this revised version. We included the Atari results since they are a standard benchmark to compare with previous approaches -- not all games require an understanding of physical dynamics, which explains the cases where our method does not improve upon PPO. We specifically created PhysWorld and performed empirical studies to test our approach on environments that rely more on understanding basic physics like velocity, collision laws, etc.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for the helpful feedback. We have added relevant comparisons to both the imagination augmented agents architecture (I2A) (Table 3) and the ConvLSTM model (Table 1) as baselines. Below, we provide a comparison of our method with each baseline along with empirical results.\", \"convlstm\": \"SpatialNet differs from ConvLSTM in two main ways that allow it to maintain dynamics information more accurately: 1) the grid states are updated through convolutions instead of LSTM updates (which blur dynamics over time), and (2) SpatialNet also has a input copy mechanism that add the current state to the output of the spatialnet encoding -- this allows the encoding to focus better on the dynamics.\\n\\nWe trained a ConvLSTM following the specifications in (Xingjian et al., 2015) and also performed some hyperparameter tuning. We find that the ConvLSTM architecture allows for similar 1 step future frame prediction as SpatialNet (our model) -- see Table 1. However, we find that ConvLSTM is unable to maintain dynamics information over longer horizons and achieves significantly worse multi step future frame prediction (Table 1 and Figure 3). ConvLSTM also does not generalize well to new datasets with smaller and faster objects (Figure 8). SpatialNet, on the other hand, has a much simpler mechanism for capturing state transitions and is able to effectively model physics and generalize better.\", \"i2a\": \"I2A encodes a global context summary of future frames which is fed into a policy while IPA stacks future frames, allowing convolutional filters to encode local dynamics of different objects.\\n\\nWe trained the I2A model following the specifications in [Weber et al., 2017] and also performed some hyperparameter tuning, where SpatialNet is used as a future frame predictor.\\nIn our experiments, we find that I2A performs significantly worse than IPA (our approach) and performs on par with PPO on the PhysWorld environments. By feeding stacked future frames in IPA, we allow convolutions to locally extract information about each individual object to predict its dynamics in the future. In contrast, I2A\\u2019s structure only allows global encoding of the future states of objects that makes it difficult for policy to infer the future dynamics of objects and their interactions.\", \"references\": \"[Weber et al., 2017] Imagination-Augmented Agents for Deep Reinforcement Learning\"}",
"{\"title\": \"Author Response\", \"comment\": \"We thank Reviewer 1 for the helpful feedback. We provide answers to individual questions below.\\n\\n(1) We note that the environments in PhysWorld contain objects of different color and shapes than the objects in PhysVideos, the video dataset used for pre-training SpatialNet. As a result, the pretrained model\\u2019s notion of shape or color does not have any transferability to the new task, only its knowledge of physical dynamics does.\\n\\n(2) When comparing average performance across all the Atari benchmark games, PPO is the state of the art approach, competitive with ACER and better than A2C [Schulman et al., 2017]. \\n\\n(3) We have included standard deviation values for rewards in in PhysWorld and Atari environments (Table 2,3,6) . Following prior work [Schulman et al., 2017], we use 3 seeds for all our experiments. We also ran extra experiments for the Atari games (5 seeds) and observed similar mean and standard deviation performance (Tables 3 and 6)\\n\\n(4) We have added in results for the entire suite of Atari games in Appendix A.3 (Table 6). Across all the 49 games, IPA outperforms PPO in 31 games. Not all Atari games require an understanding of physical dynamics, which explains why IPA does not improve upon PPO in those games. We specifically use PhysWorld for this purpose -- to test our approach on environments that rely more on understanding basics physics like velocity, collision laws, etc.\\n\\n(5) Thank you for this suggestion -- we have added a discussion about ego-dynamics in the paper. Since our approach is to learn physics priors that transfer well to new environments, we don\\u2019t learn ego-dynamics, which require the action space of the agent to be input to the model -- this is usually task-specific. The dynamics of the world minus the ego-dynamics is more general and transfers well to new environments. See our comparison for transfer with a \\u201cmodel+policy transfer\\u201d baseline in Table 4.\\n\\n(6) We agree that achieving performance gains on many Atari games are limited by factors other physics such as exploration or reflexive action, which we note maybe the reason we do not achieve universal improvement across all Atari games. However, we believe certain Atari games, such as Asteroids, do benefit from predicting the dynamics of moving rocks, etc., and we do observe substantial gains in such environments. We specifically created PhysWorld and performed empirical studies to test our approach on environments that rely more on understanding various aspects of basic physics like velocity, collision laws, etc.\\n\\n(7) Our transfer learning experiments (Table 4) test the generalization of a policy from a single source environment to a single target environment. In this scenario, techniques like MAML are not directly applicable since they require meta-learning over multiple different environments to find good initialization points for the policy parameters. We also note that methods like PPO do perform better than approaches like MAML on tasks like the Sonic Benchmark [Nichol et al., 2018].\", \"references\": \"[Machado et al., 2017] Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents \\n[Schulman et al., 2017] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.\\n[Nichol et al., 2018] Nichol, A., Pfau, V., Hesse, C., Klimov, O., & Schulman, J. (2018). 
Gotta Learn Fast: A New Benchmark for Generalization in RL. arXiv preprint arXiv:1804.03720.\"}",
"{\"title\": \"Interesting Idea, Unclear Writing\", \"review\": \"A method for learning physics priors is proposed for faster learning and better transfer learning. The key idea in learning physics priors using spatial net, which is similar to a convolutional LSTM model for making predictions. Authors propose to improve the sample efficiency of Deep RL algorithms, by augmenting PPO\\u2019s state input with 3 future frames predicted by the physics prior model.\\n\\nAuthors show that using Spatial-Net leads to better prediction of the future as compared to previous methods on simple simulated physics environment and can be incorporated to improve performance on ATARI games. \\n\\n(a) I am a bit unclear on how Spatial-Net is trained along with the policy in the IPA architecture. In section 5.1 it is mentioned that, \\u201cWe train both SpatialNet and the policy simultaneously and use Proximal Policy optimization (PPO) as our model free algorithm\\u201d, however earlier in Section 3 it is mentioned that first the agent is pre-trained with prediction and then the pre-trained model is used with the RL algorithm. Can the authors clarify the training procedure? Is it the case that the Spatial-Net is first pre-trained with some data and then fine-tuned along with the environment rewards? Do the policy-net and the frame prediction net share any parameters? \\n\\n(b) Is the comparison in Table 2/Figure 5 fair in terms of number of frames seen by the agent? Let a PPO agent see N frames? How many frames does the IPA agent say (both for training spatial Net + Policy). \\n\\n(c) How about baselines, where instead of augmenting PPO with any additional frames, the Policy is initialized with weights learned by Spatial Net? Other baseline is to jointly optimize for future frame prediction + environment reward (in this case atleast some parameters between the spatial net and the policy net will be shared), but without augmenting the input state with future predicted frames? \\n\\nThe Spatial net architecture is similar to convolutional LSTM \\u2014 and I therefore don\\u2019t think that is a significantly novel technical contribution. The application of spatial net to augment frames in the state is although novel in my best knowledge. The above questions will help me understand the experiments better. Right now the method is slightly unclear to me and the results on ATARI (figure 11) are a bit underwhelming. Also, why did the authors chose the specific ATARI games that they reported results on \\u2014 why not other games too?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Comparison with closely related method is necessary\", \"review\": \"Summary\\nThis paper propose to learning a dynamics model with future prediction in video and using it for reinforcement learning.\\nThe dynamics models is a variants of convolution LSTM and it is trained mean squared error in the future frame.\\nThe way of using dynamics model for reinforcement learning is similar to Weber et al., 2017, where K step prediction of the dynamics model is uses as an augmented input of the policy.\\n\\nStrength\\nTraining dynamic model to understand physic and using it for reinforcement learning is an interesting problem that worth exploring. This paper tackles this problem and demonstrated experimental setting based on physics games. \\n\\nWeakness\\nThe part for understanding dynamics model is very close to existing convolutional LSTM model (Xingjian et al., 2015), which is a popular baseline in video modelling community and how pretrained dynamics model is used for reinforcement learning is similar to Weber et al., 2017, but this paper does not provide comparison to any of these two baseline. \\nSince the difference with these existing method is subtle, clear comparison with these method and difference in characteristic is essential to show the novelty of the paper. \\n\\nOverall comment\\nThis paper address the interesting problem of understanding dynamics for solving reinforcement learning, but the suggested method is not novel and comparison with existing close methods are not performed.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting Direction\", \"review\": \"Quality: The paper proposed a new method to learn some physics prior in an environment along with a new SpatialNetwork Architecture. Instead of learning a specific dynamics model, they propose to learn a dynamics model that is action-free, purely learning the extrinsic dynamics. They formulate this problem as a video prediction problem. A series of experiments are conducted on PhysWorld (a new physics based simulator) and a subset of Atari games.\", \"clarity\": \"The writing is good.\", \"originality\": \"This work is original as most of the model-based RL works are focusing on learning one environment instead of common rules of physics.\", \"significance_of_this_work\": \"This work propose an interesting direction to pursue.\", \"cons\": \"1. In Figure 4, the authors show that a pretrained model can learn faster than random initialization. However, it is hard to ablate the factor that causes this effect. Does the dynamics predictor learn the physics priors or is it just because it learn the visual prior of the shape of the objects, etc? \\n2. The baseline for atari games is quite limited. First of all, 3 out of 5 atari games in the original PPO paper show that ACER performs better than PPO. (asteroid, breakout, DemonAttack). I think it is better to make some improvement upon state-of-the-art methods.\\n3. All the experiments are shown with only 3 random seeds, without error bar in the main paper. Although the reward plots are shown in Figure 11. \\n4. 5 out of 10 atari games are similar to PPO (according to Figure 11). It's hard to be conclusive when half of the experiments are positive and the rest are not. \\n5. Lack of discussion about ego-dynamics. There are physics priors for both the environment and the controller. Usually the controller/agent requires an action to predict its dynamics. Then why should we omit the ego-dynamics and only model the outer world. \\n6. Physics prior usually happen in physical environment. The proposed method works well in the physworld environments. But is there some task that are more realistic than atari games that can leverage the power of physics priors more? It's good that this method works in some atari games. But isn't learning the dynamics of atari games a bit off the topic? \\n7. The transfer learning experiments should contain a baseline -- maml/reptile. Since you are learning physics prior, it is fair to add meta-learning baselines for comparison.\\n\\nI think the direction is interesting and the effort is made well. But the experiments are less convincing than the abstract/introduction.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
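A recurring technical point in the record above is SpatialNet's "input copy" (residual) mechanism, which adds the current state to the predictor's output so the network only has to model the change between frames. The following is a minimal, hypothetical sketch of that residual idea — not the authors' SpatialNet architecture, whose full details are not reproduced in the reviews.

```python
import torch
import torch.nn as nn

class ResidualFramePredictor(nn.Module):
    """Toy frame predictor with an input-copy (residual) connection:
    the conv stack outputs a delta that is added back to the input frame."""

    def __init__(self, channels: int = 3, hidden: int = 32):
        super().__init__()
        self.dynamics = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, channels, kernel_size=3, padding=1),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # Static content is carried over by the skip connection for free,
        # so the conv stack only has to model object motion (the dynamics).
        return frame + self.dynamics(frame)

model = ResidualFramePredictor()
x = torch.randn(1, 3, 64, 64)  # dummy 64x64 RGB observation
y = model(x)                   # predicted next frame, same shape as x
```

This is the property the authors credit for SpatialNet's advantage over a plain ConvLSTM, and the reviewer's point is that the same skip connection could be bolted onto a ConvLSTM as an ablation.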
|
rygrBhC5tQ | Composing Complex Skills by Learning Transition Policies | [
"Youngwoon Lee*",
"Shao-Hua Sun*",
"Sriram Somasundaram",
"Edward S. Hu",
"Joseph J. Lim"
] | Humans acquire complex skills by exploiting previously learned skills and making transitions between them. To empower machines with this ability, we propose a method that can learn transition policies which effectively connect primitive skills to perform sequential tasks without handcrafted rewards. To efficiently train our transition policies, we introduce proximity predictors which induce rewards gauging proximity to suitable initial states for the next skill. The proposed method is evaluated on a set of complex continuous control tasks in bipedal locomotion and robotic arm manipulation which traditional policy gradient methods struggle at. We demonstrate that transition policies enable us to effectively compose complex skills with existing primitive skills. The proposed induced rewards computed using the proximity predictor further improve training efficiency by providing more dense information than the sparse rewards from the environments. We make our environments, primitive skills, and code public for further research at https://youngwoon.github.io/transition . | [
"reinforcement learning",
"hierarchical reinforcement learning",
"continuous control",
"modular framework"
] | https://openreview.net/pdf?id=rygrBhC5tQ | https://openreview.net/forum?id=rygrBhC5tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryl27G2exV",
"S1gjDqMD0X",
"rygq35VZ0X",
"SyxGmZcUTm",
"Bkgfie58TX",
"r1lb_e98Tm",
"BkghwsHAnQ",
"Ske4BidpnQ",
"SJltXWIi3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544761892376,
1543084643090,
1542699697815,
1542000921988,
1542000793908,
1542000745363,
1541458787827,
1541405499999,
1541263649456
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1537/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1537/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1537/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1537/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1537/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1537/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1537/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1537/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1537/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths: The paper tackles a novel, well-motivated problem related to options & HRL.\\nThe problem is that of learning transition policies, and the paper proposes\\na novel and simple solution to that problem, using learned proximity predictors and transition\\npolicies that can leverage those. Solid evaluations are done on simulated locomotion and\\nmanipulation tasks. The paper is well written.\", \"weaknesses\": \"Limitations were not originally discussed in any depth.\\nThere is related work related to sub-goal generation in HRL.\", \"ac\": \"I suggest a poster presentation; it could also be considered for oral presentation based\\non the very positive reception by reviewers.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Well motivated problem; good solution\"}",
"{\"title\": \"Score raised fro 7 to 8\", \"comment\": \"The authors have responded thoroughly to the review comments. Also, looking at the simulations more closely they appear quite effective.\"}",
"{\"title\": \"reviews & responses are appreciated; remaining consideration of author responses?\", \"comment\": \"The reviews and author responses are appreciated.\\nIf there are any further comments from the reviewers with regard to the authors responses, or changes in score,\\nnow would be the time to put these forward.\\n\\nthanks again for your insights.\\n-- area chair\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the feedback and address the concerns in detail below.\\n\\n> Reviewer 1 (R1): \\u201cIn the metapolicy, what ensures consistency, \\u2026 ?\\u201d\\n\\nOur meta-policy executes a primitive policy and waits for a termination signal from the primitive policy before choosing the subsequent one. In other words, a termination signal (success/failure of the primitive policy) comes from the primitive policy, i.e. the walker falls down or the arm picks up a box. This call-and-return style [1-3] of execution ensures the same policy is utilized in consecutive steps until its completion. Hierarchical reinforcement methods have employed this call-and-return style when sub-policies are learned for well-defined sub-tasks that do not require a context switch during their execution.\\n\\n> R1: \\u201c... the weaknesses and the limits of the method?\\u201d\\n\\nWe discuss a few assumptions that we made and good follow-up directions below. We will also add the discussion to the revised version.\\n\\nOur model-free transition policies rely on random exploration. Specifically, we made an assumption that successful transition trajectories between two consecutive policies should be achievable by random exploration (i.e. an initiation set of a primitive policy should be reachable from the ending states of the previous policies). As soon as a transition policy succeeds once, the proximity predictor will learn what good states are and subsequently the transition policy will succeed more frequently. To alleviate the exploration problem with sparse rewards, our transition policy training can incorporate exploration methods that utilize count-based exploration bonuses [4-6], curiosity-driven intrinsic rewards [7-10], etc.\\n\\nOur current framework is designed to focus on acquiring transition policies that can connect a given set of primitive policies. We believe that additionally enabling an agent to adaptively augment its primitive set [11-12] based on a new environment or task is a promising future direction.\\n\\nWe assume our primitive policies return a signal that indicates whether the execution should be terminated or not, similar to [1-3, 13]. Without access to this termination signal, the transition policy would learn from very sparse and delayed reward. \\n\\n\\n[1] Oh et al. \\u201cZero-shot task generalization with multi-task deep reinforcement learning\\u201d, ICML 2017\\n[2] Andreas et al. \\u201cModular multitask reinforcement learning with policy sketches\\u201d, ICML 2017\\n[3] Kulkarni et al. \\u201cHierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation\\u201d, NIPS 2016\\n[4] Strehl Littman \\u201cAn analysis of model-based interval estimation for markov decision processes\\u201d, Journal of Computer and System Sciences (JCSS) 2008\\n[5] Bellemare et al \\u201cUnifying Count-Based Exploration and Intrinsic Motivation\\u201d, NIPS 2016 \\n[6] Martin et al. \\u201cCount-Based Exploration in Feature Space for Reinforcement Learning\\u201d, IJCAI 2017\\n[7] Schmidhuber \\u201cA possibility for implementing curiosity and boredom in model-building neural controllers\\u201d, From animals to animats: Proceedings of the first international conference on simulation of adaptive behavior, 1991\\n[8] Pathak et al. 
\\u201cCuriosity-driven Exploration by Self-supervised Prediction\\u201d, ICML 2017\\n[9] Achiam and Sastry \\u201cSurprise-Based Intrinsic Motivation for Deep Reinforcement Learning\\u201d, NIPS Workshop 2016\\n[10] Stadie et al. \\u201cIncentivizing exploration in reinforcement learning with deep predictive models\\u201d, NIPS Workshop 2015\\n[11] Hausman et al. \\u201cLearning an Embedding Space for Transferable Robot Skills\\u201d, ICLR 2018\\n[12] Gudimella et al. \\u201cDeep reinforcement learning for dexterous manipulation with concept networks\\u201d, arXiv 2017\\n[13] Le et al. \\u201cHierarchical Imitation and Reinforcement Learning\\u201d, ICML 2018\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for the feedback. We are glad that the reviewer found the idea novel and useful for enabling the smooth composition of skills and that the reviewer recognized the importance of utilizing previously learned skills to compose complex skills.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the feedback and address the concerns in detail below.\\n\\n> Reviewer 3 (R3): \\u201c... the choice of exponential (\\u201cdiscounted\\u201d) proximity function. Wouldn\\u2019t a linear function of \\u201cstep\\u201d be more natural here?\\u201d\\n\\nThe proximity predictor is used to reward the ending state of a transition trajectory in how close it is to the initiation set of the next primitive as well as actions that increase proximity. As R3 suggested, both linear and exponential functions are valid choices for a proximity function. \\n\\nWe have experimentally compared the linear and exponential proximity functions. Our model is able to learn well with both functions and they perform similarly. We added the result to our website (Ablation study on Proximity functions: https://sites.google.com/view/transitions-iclr2019#h.p_qGO2W2Dk2q8G ) and will it add to the supplementary.\\n\\nOriginally, we opted for the exponential proximity function with the intuition that the faster initial decay near the initiation set would help the policy discriminate successful states from failing states near the initiation set. Also, in our experiments, as we use 0.95 as a decaying factor, the proximity is still reasonably large (e.g., 0.35 for 20 time-steps and 0.07 for 50 time-steps).\"}",
"{\"title\": \"Useful learning scheme for transitioning between options in continuous domains.\", \"review\": \"The paper proposes a scheme for transitioning to favorable starting states for executing given options in continuous domains. Two learning processes are carried out simultaneously: one learns a proximity function to favorable states from previous trajectories and executions of the option, and the other learns the transition policies based on dense reward provided by the proximity function.\\n\\t\\nBoth parts of the learning algorithms are pretty straightforward, but their combination turns out to be quite elegant. The experiments suggest that the scheme works, and in particular does not get stuck in local minima. \\n\\nThe experiments involve fairly realistic robotic applications with complex options, which renders credibility to the results. \\n\\nOverall this is a nice contribution to the options literature. The scheme itself is quite simple and straightforward, but still useful. \\n\\nOne point that I would like to see elaborated is the choice of exponential (\\\"discounted\\\") proximity function. Wouldn't a linear function of \\\"step\\\" be \\n more natural here? The exponent loses sensitivity as the number of steps away increases, which may lead to sparser rewards.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An elegant method with comprehensive evaluations\", \"review\": \"The paper presents a method for learning policies for transitioning from one task to another with the goal of completing complex tasks. In the heart of the method is state proximity estimator, which measures the distance between states in the originator and destination tasks. This estimator is used in the reward for the transition policy. The method is evaluated on number of MojoCo tasks, including locomotion and manipulation.\", \"strengths\": [\"Well motivated and relevant topic. One of the big downsides in the current state of the art is lack of understanding how to learn complex tasks. This papers tackles that problem.\", \"The paper is well written and the presentation is clear.\", \"The method is simple, yet original. Overall, an elegant approach that appears to be working well.\", \"Comprehensive evaluations over several tasks and several baselines.\"], \"questions\": [\"In the metapolicy, what ensures consistency, i.e. it selects the same policy in the consecutive steps?\", \"Can the authors comment on the weaknesses and the limits of the method?\"], \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Potentially very useful idea\", \"review\": \"** Summary **\\nThe authors propose a new training scheme with a learned auxiliary reward function to optimise transition policies, i.e. policies that connect the ending state of a previous macro action/option with good initiation states of the following macro action/option.\\n\\n** Quality & Clarity **\\nThe paper is well written and features an extensive set of experiments.\\n\\n** Originality **\\nI am not aware of similar work and believe the idea is novel.\\n\\n** Significance **\\nSeveral recent papers have proposed to approach the topic of learning hierarchical policies not by training the hierarchy end-to-end, but by first learning useful individual behavioural patterns (e.g. skills) which then later can be used and sequentially chained together by higher-level policies. I believe the here presented work can be quite helpful to do so as the individual skills are not optimised for smooth composition and are therefore likely to fail when naively used sequentially.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
ryxSrhC9KX | Revealing interpretable object representations from human behavior | [
"Charles Y. Zheng",
"Francisco Pereira",
"Chris I. Baker",
"Martin N. Hebart"
] | To study how mental object representations are related to behavior, we estimated sparse, non-negative representations of objects using human behavioral judgments on images representative of 1,854 object categories. These representations predicted a latent similarity structure between objects, which captured most of the explainable variance in human behavioral judgments. Individual dimensions in the low-dimensional embedding were found to be highly reproducible and interpretable as conveying degrees of taxonomic membership, functionality, and perceptual attributes. We further demonstrated the predictive power of the embeddings for explaining other forms of human behavior, including categorization, typicality judgments, and feature ratings, suggesting that the dimensions reflect human conceptual representations of objects beyond the specific task. | [
"category representation",
"sparse coding",
"representation learning",
"interpretable representations"
] | https://openreview.net/pdf?id=ryxSrhC9KX | https://openreview.net/forum?id=ryxSrhC9KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"H1ebD4WQgV",
"rkeMpyTZAm",
"HkgpFdn-07",
"SJlB7_nWRQ",
"B1gYJd3-Rm",
"BylNJ83Z0Q",
"HJe5KS2-AX",
"S1x6zNA92m",
"BJe3wuvchQ",
"SJxwLXOY27"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544914008872,
1542733753840,
1542731908968,
1542731805400,
1542731744611,
1542731228081,
1542731137681,
1541231637378,
1541204068207,
1541141327108
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1536/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1536/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1536/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1536/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1536/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers viewed the work favorably, with only one reviewer providing a score slightly below acceptance. The authors thoroughly addressed the reviewer's original concerns, and they adjusted their score upwards afterwards. The low-rating reviewer remains skeptical of the significance of the work, but the other two reviewers make firm cases for the appeal of the work to the ICLR audience. In follow-up discussion after the author's responses were submitted and discussed, the low-rating reviewer did not make a clear case for rejecting the paper, and further, the higher-rating reviewers' arguments for the impact of the paper were convincing. Therefore, I recommend accepting this paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Reviewer consensus is accept\"}",
"{\"title\": \"Changes to Manuscript in Response to Reviewer Comments\", \"comment\": \"We would like to thank all reviewers for their time and their thoughtful input. We did our best to address all of their comments in the individual responses below, which contain details not added to the paper itself due to space constraints. Based on the comments, we made the following changes to the paper:\\n\\n\\u2022 We more carefully describe the experiment / clarify the quality of the data in section 2.1. \\n\\u2022 We clarify the justification for the sparsity and non-negativity properties of our model in section 2.2.\\n\\u2022 We added references suggested by the reviewers.\\n\\u2022 To make room for these additions, we shortened the introduction (\\u201ctomato example\\u201d), and compressed the text in a few places where it could be done without losing clarity.\\n\\u2022 We fixed the formatting of some references and fixed the template from 2018 to 2019.\"}",
"{\"title\": \"Response to Reviewer 1 - Part 1\", \"comment\": \"We thank the reviewer for their assessment of our work. To address their criticisms, below and in the updated text we more carefully describe the experiment. In addition, we better justify the heuristics by being more explicit about how they are based on previous studies and empirical evidence, and we ran an additional analysis based on the reviewer\\u2019s comment on our modeling choices. Note that while the scale of our experimental data - as pointed out by the reviewer - strongly contributes to the significance of our work, we are not aware of comparable work that has revealed interpretable dimensions underlying human behavior from similarity ratings. It is, in fact, the *combination* of scale and the use of our sparse non-negative model that makes this work unique. As mentioned in the manuscript, all data will be released at the end of the study. We hope that our responses will provide the reviewer with the information sufficient to raise their rating above the acceptance threshold.\\n \\n\\n> What are the precise instructions, how are the object/images presented (it is well known that relative positions, asymmetry, etc can play an important role), are there any temporal/learning effects (how clear is the task to the workers?).\\n\\nThe reviewer raises a number of points that we did not address at length in the paper due to space constraints and which, in our opinion, were not critical to reproducing the results. However, we fully agree that it is important to add further justification, which we will do in this review and in the manuscript, as space permits. Please note that we also tried to better highlight some of the details (e.g. the release of the data) which were already present in the manuscript.\", \"the_precise_instructions_to_the_workers_were\": \"\\u201cIn each round, you will see three pictures each showing an object or \\\"thing\\\". Two of them will be more similar to each other. Your job is to select the *odd-one out* by clicking on it. Sometimes the decision is very difficult. Base your decision _only_ on the most prominent object or \\\"thing\\\" in an image.\\u201c\\nNote that these instructions were intentionally left rather open, so as to allow workers to decide according to whatever criteria were most salient to them for carrying out the task.\\n\\nRegarding the issues mentioned by the reviewer (effects of temporal learning, effects of position, symmetry, etc.), we empirically tested the validity of the method in a separate study before acquiring the large-scale dataset. This study is extensive in itself and will thus be published separately. In it, we acquired similarity ratings for a set of 48 objects and a separate set of 92 objects using the triplet odd-one-out task. We then compared two other common similarity tasks (pairwise similarity and object arrangement) to the triplet odd-one-out task and related them to both synset embeddings and deep convolutional neural networks as well as human brain data (functional MRI and magnetoencephalography). The triplet odd-one-out task was highly correlated with the other two similarity tasks, overall, and performed equally well or better than those tasks in predicting embeddings and human brain data. 
This demonstrates that, all issues raised by the reviewer aside, the triplet odd-one-out task is as good or better than two common state-of-the-art alternatives for human similarity judgments.\\n\\nNote that any preference of position or sequence effects would only affect the variance of the estimates, not the bias. Most workers only carried out very few trials, weakening the possible contribution of learning effects to bias. If there was any strong bias present in the data, we believe that the model could not have performed as close to optimal as it did. However, we agree with the reviewer that, in the future, it would be interesting and important to investigate possible learning effects and, specifically, individual differences in how humans use those dimensions. This is, however, beyond the scope of the present work.\"}",
"{\"title\": \"Response to Reviewer 1 - Part 2\", \"comment\": \"> The modeling work is basic and contains a number of steps that have unknown influence on the final outcome. For example model dimension: Is you claim that \\\"D=49\\\" is a law of human nature?\\n\\nWe do not attempt to recover the \\\"true\\\" dimensionality of the embedding, but rather a \\u201cuseful\\u201d embedding to explain the data and provide interpretability. We can conclude from our results that there exist at least 49 useful embedding dimensions; it is possible that, after collecting more data, we might find more dimensions. As detailed in 2.2. under \\u201cParameter fitting\\u201d, we fit a model with a large initial number of dimensions (in this case, 90), but the L1 shrinkage (with cross-validated lambda) naturally causes many of dimensions to have a maximum weight close to zero. We set a threshold to eliminate dimensions below a certain average weight, which results in an embedding with a much smaller number of dimensions. In our current dataset, this procedure resulted in picking an L1 penalty equal to 0.0080 (a penalty of 0.0270 or higher would result in all dimensions being shrunk to 0). We deleted all dimensions with average weight less than 0.02, resulting in the 49 dimensions presented.\\n\\n\\n> The presentation of the inference process is clear. Not so clear what the uncertainties are\\n\\nFor any triplet presented to a participant, there are two possible sources of randomness. One is variation in the decision-making process of the participant. The choice of the participant may be affected by biases such as ordering of the objects, and by uncontrolled factors such as their physiological state. A second source of variation is differences in the decision-making process between participants. Participants with different personalities and worldviews may evaluate the similarity of objects in different ways. Our model accounts for both sources of uncertainty by representing the choice for a given triplet for a random participant as a draw from a multinomial distribution, with probabilities given by equation 2, page 3.\\n\\n\\n> But the data quality is unclear.\\n\\nSee above comments clarifying the quality of the data. We have also added a description to the text specifying more precisely what the exclusion criteria were and how many triplets were excluded.\\n\\n**Updated / new text**\\n\\u201cThe dataset in this paper contains all of the data acquired to date, comprising judgments on 1,450,119 randomly selected triplets, roughly 0.13% out of all possible triplets using the 1,854 concepts. These were the triplets remaining after excluding AMT participants that showed evidence of inappropriate subject behavior, namely if responses were unusually fast, exhibited systematic response bias, or were empty (137,281 triplets or 8.65%).\\u201d\"}",
"{\"title\": \"Response to Reviewer 1 - Part 3\", \"comment\": \"> The modeling approach involves a number of untested heuristics (non-negative, exponentiation etc). \\n\\nWe agree with the reviewer and provide more justification for our modeling choices below, as well in the updated manuscript. Note that, in the manuscript we already mention an analysis finding that embeddings based on the dot product and the Euclidean distance - both very common measures of psychological similarity / distance - are very similar in performance and interpretability.\\n\\nThe use of Luce's choice rule with exponential weights is a very common approach for arbitrating between probabilistic choices, not only in supervised machine learning, but also in game theory and reinforcement learning (e.g. Sutton & Barto, 2001), and there is strong empirical support for its use in humans (e.g. Daw et al., 2006 - Nature). In the context of behavioral similarity, there is strong evidence that the probability of choice is exponentially related to psychological distance, as reviewed in Xu et al. (2011).\\n\\nThe sparsity constraint is a reasonable assumption given that, without sparsity, all concepts would carry all dimensions, which would be in contrast to what is found empirically for feature norms of real-world objects that turn out to be sparse (see McRae et al. 2005). Note that if a non-sparse model was best for predicting the data, cross-validation would have revealed a lambda very close to 0 (our lambda was 0.0080, which was much larger than the grid spacing 0.0001, and as mentioned above, a penalty of 0.0270 or higher would result in all dimensions being shrunk to 0).\\n\\nSince our aim was to obtain interpretable embeddings, we used the constraint of non-negativity, which in the word embedding literature has been found to improve interpretability (Murphy et al. 2012). In response to the reviewer comment, we ran a similar analysis without non-negativity and sparsity constraints which, as expected, led to comparable performance (0.6453) and 18 dimensions which turned out to be much less interpretable. If \\u201ctrue\\u201d dimensions turn out to be signed, we expect SPoSE dimensions to either be shifted to the positive range or split into positive and negative parts. Importantly, this transformation does not affect their interpretability.\\n\\n**Updated / new text**\\n\\u201cWe assume that each feature/dimension x_if in the vector x_i is real and non-negative, so as to make it interpretable as the *degree* to which the aspect of meaning it represents is present and influences subject behavior (Murphy et al. 2012).. Further, in accordance with empirical findings, we expect features/dimensions to be sparse (McRae et al. 2005), which motivated us add a sparsity parameter to our model.\\u201d\\n\\n**References**\\nRichard S. Sutton and Andrew G. Barto. Reinforcement learning: An introduction. MIT press, 2018.\\n\\nNathaniel D. Daw, John P. O'Doherty, Peter Dayan, Ben Seymour, and Raymond J. Dolan. \\\"Cortical substrates for exploratory decisions in humans.\\\" Nature 441(7095):876-879, 2006.\\n\\n\\n> I did not understand if it is planned to release the data.\\n\\nPlease note that under 2.1. (The Odd-One-Out Dataset), it reads \\\"We plan to collect additional triplets and release all data by the end of the study.\\\"\\n\\n\\n> References have many issues\\n\\nThank you for bringing this to our attention. We have standardized the formatting of our references.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their positive evaluation of our study. We agree that the prediction from CSLB features is particularly interesting, and we are currently working on improving this further by interpolating to other objects in a semi-supervised manner (similar to what was proposed by the reviewer). We also strongly agree that testing additional embeddings would be very interesting! For the present work, we focused on synset embeddings because they represent a closer match to the meaning of each individual object than word embeddings would and provide a one-to-one match for the meanings. For example, our list contains four different meanings for the object named by the word \\u201cbaton\\u201d, referring to (1) an item in relay races, (2) in twirling, (3) a weapon used by police, and (4) an item used by a musical conductor. Due to the novelty of this line of research, to our knowledge there are no other synset embeddings available than the ones we used, and we included both a 50d dense and a 300d dense version. In addition, we would have liked to include sparse positive synset embeddings as a reference, however those are currently not available; for that reason, we included NNSE word embeddings instead. In the future, we would like to add sparse positive synset embeddings and test their interpretability relative to our similarity embedding. We hope this will underline the unique contribution of a behavior-based similarity embedding presented here.\\n\\nIn addition, we would like to thank the reviewer for their idea on how to extend the embedding. Indeed, we are currently working on predicting similarities for other concepts and images from pretrained synset vectors and activations in deep convolutional neural networks. However, this effort is still in its early stages and beyond the scope of the present work.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their positive evaluation of our work. In response to their suggestion under Point 4, we have added the suggested references to sections 2.3 (Related Work) and 4 (Discussion and Conclusion)\", \"new_text\": \"\\u201cYet another possible extension is to consider different types of similarity judgments (Veit et al. 2017), e.g. resulting from asking subjects to group objects based on a specific attribute (size, color, etc.).\\u201d\"}",
"{\"title\": \"Interesting well done paper\", \"review\": \"Following the suggested rubric:\\n1. Briefly establish your personal expertise in the field of the paper.\\n2. Concisely summarize the contributions of the paper.\\n3. Evaluate the quality and composition of the work.\\n4. Place the work in context of prior work, and evaluate this work's novelty.\\n5. Provide critique of each theorem or experiment that is relevant to your judgment of the paper's novelty and quality.\\n6. Provide a summary judgment if the work is significant and of interest to the community.\\n\\n1. I work at the intersection of machine learning and biological vision\\nand have worked on modeling word representations.\\n\\n2. This paper develops a new representation system for object\\nrepresentations from training on data collected from odd-one-out human\\njudgements of images. The vector representation for objects is\\ndesigned to be sparse and low dimensional (and ends up being about\\n49D). Similarity is measured by dot products in the space and\\nprobabilities of which pair of items will be paired are modeled as the\\nexponential of the similarity.\\n\\n\\n3,5 The resulting embedding\\tdoes a good job\\tof predicting human similarity\\njudgements and seems to cover similar features to those named by\\nhumans. They also explain typicality judgements and cluster semantic\\ncategories well. The creation of the upper limit based on noise between \\nand within subjects was a nice addition.\\n\\n\\n4. Some relevant related work is discussed and this seems like a novel\\nand interesting contribution. The authors might also want to compare\\nto similar work that looked at similarities among triplets (Similarity\\nComparisons for Interactive Fine-Grained Categorization\", \"http\": \"//ttic.uchicago.edu/~smaji/papers/similarity-cvpr14.pdf;\", \"conditional_similarity_networks_https\": \"//arxiv.org/abs/1603.07810 ).\\n\\n\\n6. While this paper is not especially surprising or ground breaking, the\\nnumber and quality of the comparisons make it a worthwhile\\ncontribution and the resulting embeddings are worth further exploration\\nand could be very useful for future research.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper\", \"review\": \"This is an interesting paper with a new approach to learn a sparse, positive (and hence interpretable) semantic space that maximizes human similarity judgements, by training to specifically maximize the prediction of human similarity judgements. The authors have collected the dataset themselves and have rating of sets of 3 objects from 1854 unique objects. They end up with a space (SPoSE) with relatively low dimensionality with respect to usual word embeddings (49 dimension) but perhaps not surprising when considering the small size of the words to embed. The authors run a set of experiment to show the usefulness of SPoSE. The most interesting one is the prediction of its dimensions by the CSLB features, which reveals a nice clustering in the different SPoSE dimensions. Perhaps the results would be a little more convincing if additional common word embeddings were also tested.\\n\\nDue to the different objects used in the different datasets, some of the experiments have a smaller set of words. A good extension of this work would be to combine a text-derived embedding or the synsets to interpolate the SPoSE dimensions for missing words in the original set. Or perhaps the object similarity ratings could be used in a semi-supervised setting to inform the learning of a co-occurence word embedding. This will allow the model to better describe a larger set of words. Another possible extension is to test this larger set of words on a non-behavioral NLP task to show possible improvements that the behavioral data and the interpretable space give.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Behavioral experiment on human representations - Manuscript has improved -revison\", \"review\": \"This is a paper that communicates a large scale experiment on human object/semantic representations and a model of such representations. The experiment could have been more carefully controlled (and described in the paper) and the modeling work is inconclusive.\\n\\nQuality, \\nThe experiment design is conventional, based on rating pair-wise similarity among triplets. Compared to earlier experiments, this data has more objects and more triplets. Additional control experiments on smaller subsets have been carried out to further address hypotheses. The description of the experiment could have been more careful: What are the precise instructions, how are the object/images presented (it is well known that relative positions, asymmetry, etc can play an important role), are there any temporal/learning effects (how clear is the task to the workers?).\\nThe modeling work is basic and contains a number of steps that have unknown influence on the final outcome. For example model dimension: Is you claim that \\\"D=49\\\" is a law of human nature? Model predictive performance seems excellent, that is interesting! But we do not know how robust this is to the many heuristics\\n\\nClarity, \\nThe presentation of the inference process is clear. Not so clear what the uncertainties are\\n\\nOriginality \\nLimited. Mainly related to scale. But the data quality is unclear. The modeling approach involves a number of untested heuristics (non-negative, exponentiation etc). \\n\\nSignificance \\nMostly related to the data. I did not understand if it is planned to release the data.\\n\\nPros and cons \\n\\n+Large scale experiment\\n+simple model, seem to have good accuracy\\n\\n-experiment needs more careful description\\n-too many heuristics in model and inference, unclear how general the conclusions are\", \"other_comments\": \"References have many issues\\n\\nThe authors have done a good job in the revision and have clarified points that were unclear in the first version. \\nI have remaining reservations on significance, but move rating up a notch to reflect the extensive improvements and the authors' confirmation that they will release the data.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkGNrnC9FQ | Manifold Alignment via Feature Correspondence | [
"Jay S. Stanley III",
"Guy Wolf",
"Smita Krishnaswamy"
] | We propose a novel framework for combining datasets via alignment of their associated intrinsic dimensions. Our approach assumes that the two datasets are sampled from a common latent space, i.e., they measure equivalent systems. Thus, we expect there to exist a natural (albeit unknown) alignment of the data manifolds associated with the intrinsic geometry of these datasets, which are perturbed by measurement artifacts in the sampling process. Importantly, we do not assume any individual correspondence (partial or complete) between data points. Instead, we rely on our assumption that a subset of data features have correspondence across datasets. We leverage this assumption to estimate relations between intrinsic manifold dimensions, which are given by diffusion map coordinates over each of the datasets. We compute a correlation matrix between diffusion coordinates of the datasets by considering graph (or manifold) Fourier coefficients of corresponding data features. We then orthogonalize this correlation matrix to form an isometric transformation between the diffusion maps of the datasets. Finally, we apply this transformation to the diffusion coordinates and construct a unified diffusion geometry of the datasets together. We show that this approach successfully corrects misalignment artifacts, and allows for integrated data. | [
"graph signal processing",
"graph alignment",
"manifold alignment",
"spectral graph wavelet transform",
"diffusion geometry",
"harmonic analysis"
] | https://openreview.net/pdf?id=SkGNrnC9FQ | https://openreview.net/forum?id=SkGNrnC9FQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HJleTe3Nl4",
"rJxIvXajhm",
"r1gFHAhF2Q",
"HyxNF2Jr2Q"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545023672080,
1541292894300,
1541160512524,
1540844668179
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1535/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1535/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1535/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1535/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The diffusion maps framework is used to embed a given collection of datasets into diffusion coordinates that capture intrinsic geometry. Then a correspondence map is constructed between datasets by finding rotations that align these coordinates. The approach is interesting. The reviewers, however, found the empirical analysis somewhat simplistic with inadequate comparisons to other correspondence construction methods in the literature.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"empirical analysis somewhat simplistic with inadequate comparisons to other correspondence construction methods\"}",
"{\"title\": \"An Interesting and Straightforward Proposal on Variation Alignment but the Argument and Evidence is Not Strong Enough.\", \"review\": \"The authors pointed out that the measurements in biology and natural science suffer from batch effects such as the variations between batches of data measured at different times or by different sensors. In order to analyze different batches of data, an alignment or a calibration is frequently needed. The authors propose to use that though there is variation among different batches, these batches all share an underlying intrinsic manifold structure, which may admit a set of alignable coordinates.\\n\\nTechnically, the authors propose to choose the diffusion kernel method, which is one of the spectral methods, to extract the harmonic like eigenfunctions defined on the manifold, for each of the batches of data. Using these harmonic-like coordinates, the authors assume there exists an isometric rotation in the between each pair of batches such that their coordinates can be aligned under this orthogonal rotation.\", \"comments\": \"Overall I think the problems pointed out do exist and this is an interesting proposal to use the manifold structure to align the data. But there are some weak points in this proposal:\\n1. It's well known that spectral methods are frequently sensitive to perturbations of the datasets. At the beginning of section 2.2 the authors propose to use a normalization to construct the kernel, however, I don't quite understand how this would solve the instability to perturbations. \\n\\n2. In my opinion, the equation (1) is the most interesting construction in this paper. This motivation for this tensors construction is not strong enough and I would suggest put more detail into this construction. My understanding is the window functions g introduced here serve for an invariance purpose such that when the frequency slightly shift (or rotate), the correlation computed should be stable. But the tradeoff of choosing a proper window should be discussed carefully, potentially with different dataset since different dataset may have a different sensitivity to perturbations across different batches. \\n\\n3. In section 3.1, the first motivating example is quite confusing. The authors demonstrated the alignment of two rotated MNIST digits, 3. For each digit, the underlying manifold is S1. S1 is diffeomorphic to its rotation. So I'm not so sure what's the underlying manifold geometry used to align them. My understanding is that this alignment doesn't come from the S1 manifold but comes from some additional structure in the image signal. Fig1(b) is also a little confusing that I couldn't figure out what's drawn there.\\n\\nOverall, to use the underlying manifold structure to align data batches is an interesting and straightforward proposal, but I hope the authors can address these question carefully and make the argument stronger.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Could benefit from a revision including more comprehensive experiments.\", \"review\": \"This paper tackles the problem of noisy measurements collected from same samples that may result in different experimental data collection scenarios, especially in biology. Given S different data batches of the same samples, this paper aims to perform manifold alignment between each batch using feature correspondences.\\nThe motivation is that even though data points from each experiments may differ due to noise coming from different factors, they should at least exhibit some correlations in the feature space. Such correlation is exploited by embedding each batch of data in a space represented by diffusion coordinates computed using an anisotropic kernel. Inter-batch correlations are computed between diffusion coordinates of each batch, which are exploited to construct an isometric transformation between each pair of diffusion coordinates of batch datasets. The later transformation is used to construct an aligned graph Laplacian where each batch have similar representations. \\n\\nThis paper tackles an important problem using a novel approach where instead of aligning each pair of data-points it is attempting to align geometries of batch specific manifolds. The authors show through a toy experiment on MNIST that the proposed algorithm indeed is able to align manifolds accurately. Moreover it is also able to perform manifold denoising and achieves superior classification results compared to two existing approaches. Finally, the proposed algorithm is applied to a practical biological case and shows that it is indeed able to align data from two different immune profiles.\\n\\nAlthough the paper tackles efficiently an important problem, I am concerned about the experimental section and think it would be improved by taking the following points into account: \\n\\n\\u2022\\tThe proposed approach is compared only with two other algorithms. For example, one could compare the denoising ability to [Hein and Maier 2006: manifold denoising] or [Cui et al: Generalized unsupervised Manifold Alignment] for manifold alignment. Furthermore, it would be informative to see how the proposed approach compares to recent domain adaptation approaches as they attempt to map data from different domains into a shared \\nrepresentation which is rather similar to what the proposed algorithm is doing.\\n\\u2022\\tExperiments are all performed using rather simple datasets. It would be interesting to see how the algorithms would perform on slightly more complicated images such as Cifar 10 for example. \\n\\u2022\\tIt is not clear what are the next steps to perform after obtaining Eq. 2\\n\\u2022\\tIt is not clear what are the number of filters in Figure 2 a).\\n\\u2022\\tFigure 1 needs to be clarified further: What are DM1, DM2, DM3 mean? What are the columns and rows in Figure1-b (bottom)?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Manifold Alignment via Feature correspondence - Review\", \"review\": \"The paper proposes an alignment of two manifolds that is performed in a low-dimensional parameter space corresponding to a low-pass \\\"filtering\\\" of the Graph Fourier transform obtained from the underlying data graphs for the two manifolds. The numerical results show the quality of the alignment for some toy image datasets and a biological dataset.\\n\\nThe derivation of the technical details of the approach is not clear - see the comments below on Pages 5,6 and 9 in particular. The paper is not clear enough for acceptance at this point.\", \"detailed_comments\": \"\", \"page_2\": \"Grammar error \\\"that is invariant batch effects\\\". When denoising is discussed, can you explain whether this is denoising or simply regularization? When is the selected subspace a good approximation for the \\\"signal subspace\\\"?\", \"page_3\": \"Should X^(S) be X^(s)? When W and W(s) are defined, do they also rely on a neighborhood graph? It appears that in the definition of psi_j the eigenvectors phi_j should be obtained from W, not P (which is how they are defined earlier in the page).\", \"page_4\": \"There is an abuse of notation on f, used both as a linear function on X(s) and an element of X(s).\", \"page_5\": \"Typos \\\"exlpained\\\", \\\"along the along the\\\". It is not clear what applying a window to eigenvalues means, or what the notation g_xi(lambda) means. The construction of the filters described here needs to be more explicit. h_xi is undefined. How is H in (1) defined when i = 1?\", \"page_6\": \"M should be M(s1,s2). Typesetting error in Lambdabar(s). Which matrix is referred to in \\\"the laplacian eigenvalues of each view\\\"? What is the source and target of the embedding E? How is the embedding applied to data x(s1), x(s2)?\", \"page_7\": \"Figure 1a appears to have an error in the orientation of one of the blue \\\"3\\\"s. The text on the arrow between the manifold embeddings does not agree with the notation in the paper. In Figure 1b, it is not clear which image is the original point and which images are the neighbors, or why some images are smaller than others. Results for the other algorithms are missing (why no comparison?). Typo \\\"Wang&Mahadevan\\\". Can you be more specific as to why that algorithm was \\\"unable to recover k-neighborhoods\\\" in certain cases?\", \"page_8\": \"Why no comparison with Wang & Mahadevan in Figure 2?\", \"page_9\": \"There is little description as to how manifold learning is applied in the biological data example. What is the ambient dimensionality and the dimension of the manifolds? How are the \\\"abundances\\\" extracted from the data?\\n\\\"Which we explore in 4\\\" -> \\\"Which we explore in Fig. 4\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
HylVB3AqYm | ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware | [
"Han Cai",
"Ligeng Zhu",
"Song Han"
] | Neural architecture search (NAS) has a great impact by automatically designing effective neural network architectures. However, the prohibitive computational demand of conventional NAS algorithms (e.g., 10^4 GPU hours) makes it difficult to directly search the architectures on large-scale tasks (e.g. ImageNet). Differentiable NAS can reduce the cost of GPU hours via a continuous representation of network architecture but suffers from the high GPU memory consumption issue (which grows linearly w.r.t. candidate set size). As a result, they need to utilize proxy tasks, such as training on a smaller dataset, or learning with only a few blocks, or training just for a few epochs. These architectures optimized on proxy tasks are not guaranteed to be optimal on the target task. In this paper, we present ProxylessNAS that can directly learn the architectures for large-scale target tasks and target hardware platforms. We address the high memory consumption issue of differentiable NAS and reduce the computational cost (GPU hours and GPU memory) to the same level as regular training while still allowing a large candidate set. Experiments on CIFAR-10 and ImageNet demonstrate the effectiveness of directness and specialization. On CIFAR-10, our model achieves 2.08% test error with only 5.7M parameters, better than the previous state-of-the-art architecture AmoebaNet-B, while using 6× fewer parameters. On ImageNet, our model achieves 3.1% better top-1 accuracy than MobileNetV2, while being 1.2× faster with measured GPU latency. We also apply ProxylessNAS to specialize neural architectures for hardware with direct hardware metrics (e.g. latency) and provide insights for efficient CNN architecture design. | [
"Neural Architecture Search",
"Efficient Neural Networks"
] | https://openreview.net/pdf?id=HylVB3AqYm | https://openreview.net/forum?id=HylVB3AqYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"5sXAs8IfH2",
"ByxWo8xmgH",
"rygMExlwAN",
"SJgnTVbQK4",
"Bke8s59qEE",
"r1ehBY95e4",
"rJe_QI3wlV",
"Byxc9SDlgE",
"rkx-525OJ4",
"Hke4jK9OkE",
"rJe1QITpR7",
"HyxWxsxaCQ",
"HJelAtlcRm",
"rJl4Qn5KAQ",
"S1x0oTiHR7",
"B1lXIuW-CQ",
"SkeDbIW-CX",
"H1x6eyDe0X",
"H1eNgOayAX",
"HyxqxNT5aQ",
"BygWA7OK6X",
"rkle4Pwda7",
"S1xBSKTN6m",
"SJlO7KpVp7",
"BkxbEO0XpX",
"rJxuErCmTm",
"S1lC-BtXpX",
"HkxRDmY7am",
"HkxEMyl33m",
"rJl-2uUshQ",
"BklS-ur9h7",
"rJeFYtGK27"
],
"note_type": [
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"comment",
"comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_comment"
],
"note_created": [
1580691143760,
1561687704891,
1559851050151,
1554351299923,
1549605533797,
1545410883738,
1545221664303,
1544742290423,
1544232072568,
1544231323525,
1543521814785,
1543469801096,
1543272903870,
1543248924077,
1542991269916,
1542686795089,
1542686206855,
1542643445325,
1542604779767,
1542276081939,
1542190025201,
1542121256288,
1541884220880,
1541884192270,
1541822505053,
1541821744163,
1541801222125,
1541800805533,
1541304076025,
1541265576991,
1541195772732,
1541118337406
],
"note_signatures": [
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"(anonymous)"
],
[
"~Miao_Zhang1"
],
[
"(anonymous)"
],
[
"~Robin_Tibor_Schirrmeister1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1534/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1534/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1534/AnonReviewer3"
],
[
"~Robin_Tibor_Schirrmeister1"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"~Robin_Tibor_Schirrmeister1"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"~Robin_Tibor_Schirrmeister1"
],
[
"~Robin_Tibor_Schirrmeister1"
],
[
"~Robin_Tibor_Schirrmeister1"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1534/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1534/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1534/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1534/Authors"
]
],
"structured_content_str": [
"{\"comment\": \"Hi,\\nCan you please release the architecture search code for the CIFAR10 dataset also?\\n\\nThanks\", \"title\": \"Architecture search code for CIFAR10 dataset\"}",
"{\"comment\": \"When we train the architecture parameters, is in every layer we sample two paths to update or only a single layer\\u2019s two path\\u2019s parameters will be update?\\n\\n If only one layer\\u2019s parameters update, how to choose paths of the rest layers? The highest-weight one or a random one?\", \"title\": \"How to choose paths while training in ProxylessNAS\\uff1f\"}",
"{\"comment\": \"It would be really helpful if you could release the full code for this project. Since you define a new search space that's good for one-shot methods, it could become a new benchmark if it's easy to use your code to do further experiments in this space.\", \"title\": \"+1, please release code\"}",
"{\"comment\": \"Dear authors,\\nI am working NAS also, and I am very interested in this paper. However, I found that you just release pretrained models and evaluation code, is it possible to release the main search code in the future?\\n\\nSincerely\", \"title\": \"is it possible to release search code?\"}",
"{\"comment\": \"Dear Authors,\\n\\nAs the paper has now been accepted, I kindly request you to release the training code for the paper. Thank you.\", \"title\": \"Code for training\"}",
"{\"comment\": \"Thanks for the answers! Congratulations on acceptance!\", \"title\": \"Thanks!\"}",
"{\"comment\": \"Dear the authors,\\n\\nI want to echo with the reviewers/public readers that releasing your detailed training pipeline is quite crucial given the good performances reported in the paper. Furthermore, only evaluation code/model ckpts is definitely not enough since people have various unreasonable ways to obtain a good ckpt only on the test set (I'm not meaning you are doing this and sorry for possible offense here in advance).\\n\\nBest\", \"title\": \"Only releasing training code is meaningful\"}",
"{\"metareview\": \"This paper integrates a bunch of existing approaches for neural architecture search, including OneShot/DARTS, BinaryConnect, REINFORCE, etc. Although the novelty of the paper may be limited, empirical performance seems impressive. The source code is not available. I think this is a borderline paper but maybe good enough for acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good empirical results. Novelty is limited.\"}",
"{\"title\": \"Re: Further questions\", \"comment\": \"Thanks for your questions. Please see responses below.\\n\\n>>> \\u201cWhat was the search time on CIFAR-10 in GPU hours? For Proxyless-R and Proxyless-G?\\u201d\\n\\nThe search time depends on the size of the backbone architectures (e.g., number of blocks). For example, when searching with 54 blocks, it takes around 4 days on a single GPU for both Proxyless-R and Proxyless-G. When searching with fewer blocks (e.g. 8 blocks), it takes less than 1 day. \\n\\n>>> \\u201cIs Batch Normalization in training or evaluation mode when optimizing architecture parameters?\\u201d\\n\\nThe batch normalization is in the training mode.\\n\\n>>> \\u201cFor REINFORCE, what do you use as optimization metric on validation set for architecture parameters on CIFAR-10? Normal loss, like cross entropy or actually misclassification rate?\\u201d\\n\\nWe use the misclassification rate. Normal loss, like cross entropy, may also be a feasible optimization metric\\n\\n>>> \\u201cFor REINFORCE, do you use any kind of baselining? Do you use multiple architecture samples per update?\\u201d\\n\\nThe baseline is the moving average of previous mean metrics with a decay of 0.99. And we update every 8 samples.\"}",
"{\"title\": \"Re: replacement=True or False?\", \"comment\": \"Apologize for the mistake. The correct one is setting \\\"replacement=False\\\". Beta2 is set to be the default value in Pytorch (i.e., 0.999). As for network parameters, we use SGD optimizer with Nesterov momentum 0.9 and cosine learning rate schedule.\"}",
"{\"title\": \"Thanks for your helpful feedback.\", \"comment\": \"Thank you for your helpful feedback. We have revised our paper according to your suggestion.\\n\\n>>> \\u201cin the new mobile phone results you have presented there is a network that actually has better latency with slightly worse accuracy, which makes it hard to compare\\u201d\\n\\n2.6% top-1 accuracy improvement on ImageNet is significant. To achieve the same accuracy, MobileNetV2 needs 2x latency (143ms v.s. 78ms). Please see Figure 4.\\n\\n>>> \\u201cIt would be nice to actually have a table showing the strengths/weaknesses along these axes for all of these methods\\u201d\\n\\nThanks for your suggestion. We will add the table to our paper. \\n\\nModel\\t Top-1\\t Top-5\\tLatency\\tHardware-Aware\\t No-Proxy\\tNo-Repeating\\tTime\\tMemory\\nMobilenetV1\\t 70.6\\t 89.5\\t 113ms\\t -\\t -\\t No\\t -\\t -\\nMobilenetV2\\t 72.0\\t 91.0\\t 75ms\\t -\\t -\\t No\\t - -\\nNASNet-A\\t 74.0\\t 91.3\\t 183ms\\t No\\t No No 10^4 \\t 10^1\\nAmoebaNet-A\\t 74.5\\t 92.0\\t 190ms\\t No\\t No\\t No\\t 10^4 10^1\\nDarts\\t 73.1\\t 91.0\\t -\\t No\\t No\\t No\\t 10^2\\t 10^2\\nMnasNet\\t 74.0\\t 91.8\\t 79ms\\t Yes\\t No\\t No\\t 10^4 \\t 10^1\\nProxylessNAS (mobile) 74.6\\t 92.2\\t 78ms\\t Yes\\t Yes\\t Yes 10^2 \\t 10^1\\n\\n>>> \\u201cprecisely define what is novel about the method\\u201d and \\u201cemphasize exactly the empirical contribution\\u201d\", \"we_summarize_our_contributions_as_follows\": \"> Methodologically,\\na) We provided a new path-level pruning perspective for NAS.\\n\\nb) We proposed a gradient-based approach (Section 3.3.1) to handle non-differentiable hardware objectives (e.g. latency), making them differentiable by introducing regularization loss.\\n\\nc) We proposed a path-level binarization approach to address the high memory consumption issue of differentiable NAS. Notably, different from BinaryConnect that binarizes each weight, our path-level binarization approach binarizes the entire path.\\n\\n> Empirically,\\na) We significantly reduced the cost of memory/compute for the training of large over-parameterized networks and thereby scaled to large-scale datasets (ImageNet) without proxy and repeating blocks.\\n\\nb) We studied specialized neural network architectures for different hardware architectures and showed its advantage, raising people\\u2019s awareness of specializing neural network architectures for hardware.\\n\\nc) We achieved strong empirical results on both CIFAR-10 and ImageNet. On different hardware platforms (GPU, CPU and mobile phone), our models not only significantly outperform previous state-of-the-arts, but also peer submissions.\\n\\nWe sincerely thank your feedback and hopefully have cleared your concerns.\"}",
"{\"title\": \"Thanks for your further feedback. We have revised the paper accordingly.\", \"comment\": \"Thank you for your reply and detailed suggestion. We have uploaded a revision of our paper and removed the number of search space size.\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response. I particularly appreciate the release of the source code, while I did not have time to dig into it, it definitely increases the trust from the reader.\\n\\nRegarding the limited experiments, consider it a criticism towards the sub-field in general, not to this paper in particular. It just seems a bit counter to the narrative of automatically selecting architectures if only a very limited amount of architectures are found.\\n\\nI do appreciate how this paper is searching a slightly more varied architecture search compared to some previous methods, but I do not think the search space absolute size (10^547) says much in this regard, it would be easy to artificially come up with large search spaces with little variety as well as small search spaces with a lot of variety. My personal opinion is that it would be better to omit the number, mist giving the impression that it has more meaning than it has, but consider it a very minor point :)\"}",
"{\"title\": \"Thank you for the response\", \"comment\": \"Thanks for the detailed response. Please see comments below. \\n\\n> a) Our proxy-less NAS is the first NAS algorithm that directly learns architectures on the large-scale dataset (e.g. ImageNet) without any proxy. \\n\\nI agree but this is not a method/algorithmic contribution but an empirical one. The way you achieve this is by combining existing methods (which I listed in the original review), which allows the reduction of memory usage/computation compared to One-Shot/DART. I should emphasize that there is nothing particularly wrong with combining methods (especially across areas/fields) but just makes the empirical contribution and thoroughness of the analysis more important. However, the method/algorithmic contributions should be made clear in a precise manner, rather than making large general statements. \\n\\n> b) Our proxy-less NAS is the first NAS algorithm that breaks the convention of repeating blocks in neural architecture design. \\n\\n I am not sure this is the case. Neuroevolution methods (which you should cite more heavily) do not necessarily require this, e.g. [1]. However, I agree that within the regime of training over-parameterized networks or methods scalable. Again, please state your advantages explicitly; you seem to mention one axis/dimension at a time (e.g. scalability, no proxy, no repeating cell structure) yet your advantages are really at the combination of these. It would be nice to actually have a table showing the strengths/weaknesses along these axes for all of these methods, which would make it more clear.\\n\\n[1] Large-Scale Evolution of Image Classifiers, Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alex Kurakin, https://arxiv.org/abs/1703.01041\\n\\n> The new interesting design patterns, found by our method, can provide new insights for efficient neural architecture design.\\n\\nI agree with this and mentioned it in the review.\\n\\n> c) Our method builds upon methods from two communities (one-shot architecture search from NAS community and Pruning/BinaryConnect from model compression community). \\n\\nAgain, I agree but this means that it *is* a combination of methods (which contradicts your rebuttal title). \\n\\n> With latency constraints, our optimized models also achieved state-of-the-art results (3.1% higher top-1 accuracy while being 1.2x faster on GPU and 2.6% higher top-1 accuracy with similar latency on mobile phone, compared to MobileNetV2). \\n> Besides, we directly optimize the latency, rather than an inaccurate proxy (i.e. FLOPs). \\n\\nI agree it's interesting to optimize for these non-differentiable objectives. However, it seems to me that given that you are optimizing directly for them, the actual gains are not that large. For example, in the new mobile phone results you have presented there is a network that actually has better latency with slightly worse accuracy, which makes it hard to compare:\\n\\nMobileNet V2\\t\\t72.0\\t\\t91.0\\t\\t75ms\\nProxyless NAS (ours)\\t74.6\\t\\t92.2\\t\\t78ms\\n\\nIn all, it would be great for the authors to precisely define what is novel about the method (if it is not a combination of existing methods, as you claim in the rebuttal title). 
If it is a combination of methods (which again should not necessarily be seen as a bad thing), then it would be great to emphasize exactly the empirical contribution (the largest of which seems to be the reduction of memory/compute for training of large over-parameterized networks, scaled to ImageNet-sized datasets). The optimization of a non-differentiable objective can also be a smaller contribution, but is common to RL-based methods. Again, I think this paper presents some nice results, but it is important to be precise and not make more general claims than warranted.\"}",
"{\"comment\": \"Thanks for answering the questions so far, I also have some further questions.\\n\\n1. What was the search time on CIFAR-10 in GPU hours? For Proxyless-R and Proxyless-G?\\n2. Is Batch Normalization in training or evaluation mode when optimizing architecture parameters?\\n3. For REINFORCE, what do you use as optimization metric on validation set for architecture parameters on CIFAR-10? Normal loss, like cross entropy or actually misclassification rate?\\n4. For REINFORCE, do you use any kind of baselining? Do you use multiple architecture samples per update? For example, right now I sample 10 architectures for each validation data batch and also subtract the mean metric/reward/loss before I compute the gradients.\", \"title\": \"Further questions, also regarding REINFORCE\"}",
"{\"title\": \"We have added the results for Proxyless-G on ImageNet. And we also include a new differentiable approach to handle non-differentiable objectives (i.e. latency).\", \"comment\": \"We have added the results for Proxyless-G on ImageNet to the paper (please see Table 6 in Appendix D). We find that without taking latency as a direct objective, Proxyless-G has no incentive to choose computation-cheap operations. Consequently, it designs a very slow network that has 158ms latency on mobile phone. After rescaling the network using depth multiplier [1, 2], the latency of the network reduces to 83ms. However, this model can only achieve 71.8% top-1 accuracy on ImageNet which is 2.8% lower than Proxyless-R. Therefore, as discussed in our previous responses, it is essential to take latency which is non-differentiable as a direct optimization objective. And REINFORCE-based approach provides a solution to this problem.\\n\\nBeside REINFORCE, we have recently designed a differentiable approach to handle the non-differentiable objectives (please see Appendix D). Specifically, we propose the latency regularization loss based on our proposed latency prediction model (please see Appendix C). The key to the latency regularization loss is an observation that the expected latency of a mixed operation is actually differentiable w.r.t. architecture parameters. Therefore, by incorporating the expected latency into the loss function as a regularization term, we are able to directly optimize the trade-off between accuracy and latency. Further details are provided in Appendix D. \\n\\n[1] Sandler, Mark, et al. \\\"MobileNetV2: Inverted Residuals and Linear Bottlenecks.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\\n[2] Tan, Mingxing, et al. \\\"Mnasnet: Platform-aware neural architecture search for mobile.\\\" arXiv preprint arXiv:1807.11626 (2018).\"}",
"{\"title\": \"Paper revision: new methods and new experiment results\", \"comment\": \"Hi all,\", \"we_have_uploaded_a_revision_of_our_paper_with_the_following_new_methods_and_stronger_experiment_results\": \"a) \\u201cEconomical alternative to mobile farm\\u201d. In Appendix C, we introduce an accurate latency prediction model and remove the need of building an expensive mobile farm infrastructure [1] when learning specialized neural network architectures for mobile phone. We add new experiment results on the mobile setting, where our model achieves state-of-the-art top-1 accuracy on ImageNet under mobile latency constraints. \\n\\nb) \\u201cMake latency differentiable\\u201d. In Appendix D, we present a *differentiable* approach to handle the non-differentiable objectives (i.e. latency in our case). Specifically, we propose the latency regularization loss based on our proposed latency prediction model. By incorporating the predicted latency of the network into the loss function as a regularization term, we are able to directly optimize the trade-off between accuracy and latency. We also add new experiments on ImageNet to justify the effectiveness of the proposed latency regularization loss. \\n\\n[1] Tan, Mingxing, et al. \\\"Mnasnet: Platform-aware neural architecture search for mobile.\\\" arXiv preprint arXiv:1807.11626 (2018).\"}",
"{\"comment\": \"Thanks for your answers! I assume you man replacement=False, right?\\nbeta1 is set zero, and what value do you use for beta2?\\nAnd for the network parameters, what is your optimizer and hyperparameters , including learning rate schedule (for CIFAR-10)?\", \"title\": \"replacement=True or False?\"}",
"{\"title\": \"Responses to implementation questions\", \"comment\": \"Hi Robin,\\n\\nThanks for your interest in our work and your detailed questions. \\n\\n>>> Response to \\\"Rescaling architecture parameters\\\" \\nYour understanding of the gradient-based updates is correct. \\nAs for sampling two paths according to the multinomial distribution, we use \\\"torch.multinomial()\\\". And by setting \\\"replacement=False\\\", the same path will not be chosen twice. \\n\\n>>> Response to \\\"Adam optimizer for architecture parameters\\\" \\nWe also consider it would be problematic to use the adaptive gradient averages for this case where most of the paths are not chosen. So we set beta1 to be 0 in the Adam optimizer. Sampling multiple times before making an Adam update step is a nice idea. We will try it later. Thanks for your suggestion.\"}",
"{\"comment\": \"So, on further thought I assume you might have meant rescaling probabilities of sampled operations by a factor such that probabilities of unsampled operations stay the same. And update the corresponding alphas for the sampled operations such that this matches.\", \"i_have_tried_to_do_this_here\": \"\", \"https\": \"//gist.github.com/robintibor/83064d708cdcb311e4b453a28b8dfdca\\n\\nDoes this look correct to you?\", \"title\": \"Rescaling code\"}",
"{\"comment\": \"Let me expand a little bit on the question and just write my understanding and open questions regarding the Gradient-Based Updates from section 3.1.\\n\\nSo, given a_i's as architecture weights, I am implementing it as follows:\\n1. Compute p_i's from a_i's using softmax\\n2. Use computed p_i's as sampling probabilities for the multinomial distribution to select two operations. [Possibly resample, if same operation chosen twice?]\\n3. Recompute p_i's of the chosen a_i's by only pushing the two chosen a_is through softmax? Let's call them pnew_i's\\n4. Use pnew_i's as input to binarize function, which will select one operation as active and one as inactive\\n5. Compute outputs for both chosen operations, let's call them o_1, o_2, with o_1 the active operation according to the binarize function computed before\\n6. Compute overall output as g_1(=1)*o_1 + g_2(=0)*o_2 (g_1, g_2 from binarize)\\n7. Compute gradient on chosen a_i's as (gradient of loss wrt g_i) * (gradient of pnew_i wrt a_i) [or using full softmax, i.e. (gradient of loss wrt g_i) * (gradient of p_i wrt a_i)?]\\n8. Make update step on a_i's with optimizer\\n9. Multiply updated and chosen a_is by a factor that keeps probabilities p_is of unchosen operations identical to before [or see update below]\\n\\nWhat is correct, what is not?\\n\\nAlso, you use Adam for the architecture parameters, do you think it can be a problem for the adaptive gradient averages that in a single update, most operations are not chosen? Or do you sample multiple times before you make an Adam update step?\", \"title\": \"Further questions\"}",
"{\"comment\": \"Thanks for the fascinating research work.\", \"i_am_trying_to_reimplement_your_method_and_have_a_question_regarding\": \"\\\"Finally, as path weights are computed by applying softmax to the architecture parameters, we need to rescale the value of these two updated architecture parameters by multiplying a ratio to keep the path weights of unsampled paths unchanged.\\\"\\n\\nI am not sure how to do this correctly, can you provide the formula for this ratio or code? I am a bit stuck there, how to compute the ratio :)\", \"another_question_regarding\": \"\\\"Following this idea, within an update step of the architecture parameters, we first sample two paths according to the multinomial distribution (p1,\\u00b7\\u00b7\\u00b7,pN) and mask all the other paths as if they do not exist.\\\"\\n\\nCould this sampling result in the same path being chosen twice? And do you handle that in some way?\", \"title\": \"Implementation question regarding rescaling of architecture parameters\"}",
"{\"title\": \"We made Apple-to-Apple comparison. Our advantage on memory saving is clear.\", \"comment\": \">>> Response to \\u201ccomparison with One Shot and DARTS\\u201d: \\nApologize for the unclear explanation for this experiment. We will revise this part to make it more clear. \\n\\nAll of three methods are evaluated under the same condition except DARTS [3]. Same as the original paper, DARTS *has to* use a smaller scale setting for learning architectures due to the high memory consumption. So for DARTS, the first cell structure setting is chosen to fit the network into a single GPU to learn cell structure. Then we evaluated the learned cell structure on two larger settings by repeatedly stacking it, same as the original DARTS paper [3]. \\n\\nFor our method, since we solved the high memory consumption issue via binarized path, our method can directly learn architectures under both small-scale and large-scale settings with *limited* GPU memory. As it is one of the key advantages of our method over previous NAS methods, we consider it reasonable to keep such differences. \\n\\n>>> Response to \\u201cadd results for Proxyless-G on ImageNet\\u201d: \\nThanks for suggesting this new experiment. We have launched this experiment and will add the results to the paper.\\n\\nHowever, it is important to take latency as a *direct* objective when learning specialized neural network architectures for a platform. Otherwise, NAS would fail to make a good trade-off between accuracy and latency. For example, NASNet-A [1] and AmoebaNet-A [2] has shown compelling accuracy results compared to MobileNetV2 1.4 with similar number of parameters and FLOPs. But they are optimized without the awareness of the latency, their measured latencies on mobile phone are much worse than MobileNetV2 1.4 (see below). Therefore, we employ REINFORCE to directly optimize the non-differentiable objective (i.e. latency).\\n\\nModel\\t\\t\\t\\tParams\\t FLOPS\\t Top-1\\tMobile latency\\nMobileNet V2 1.4\\t\\t6.9M\\t\\t585M\\t\\t74.7\\t\\t143ms\\nNASNet-A\\t\\t\\t5.3M\\t\\t564M\\t\\t74.0\\t\\t183ms\\nAmeobaNet-A\\t\\t5.1M\\t\\t555M\\t\\t74.5\\t\\t190ms\\n\\n[1] Zoph B, Vasudevan V, Shlens J, Le QV. Learning transferable architectures for scalable image recognition. CVPR 2018.\\n[2] Real E, Aggarwal A, Huang Y, Le QV. Regularized evolution for image classifier architecture search. arXiv preprint arXiv:1802.01548. 2018.\\n[3] Liu H, Simonyan K, Yang Y. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. 2018.\\n[4] Bender G, Kindermans PJ, Zoph B, Vasudevan V, Le Q. Understanding and simplifying one-shot architecture search. ICML 2018.\"}",
"{\"title\": \"proxy-less NAS is an important contribution that breaks many conventions and stereotypes of neural architecture design. It's not a combination of existing methods.\", \"comment\": \"We sincerely thank you for your comprehensive comments and constructive advices.\\n\\n>>> Response to \\u201ccombination of existing methods\\u201d: \\nThanks for your kind advice on organizing the paper to make our contributions more clear. Here, we would like to emphasize our contributions:\\n\\na) Our proxy-less NAS is the first NAS algorithm that directly learns architectures on the large-scale dataset (e.g. ImageNet) without any proxy. We also solved an important problem improving the computation efficiency of NAS as we reduced the computational cost (GPU hours and GPU memory) of NAS to the same level as normal training. Moreover, the GPU memory requirement of our method keeps at O(1) complexity rather than grows linearly with the number of candidate operations O(N) [3, 4]. Therefore, our method can easily support a large candidate set while DARTS and One-Shot cannot. \\t\\n\\nb) Our proxy-less NAS is the first NAS algorithm that breaks the convention of repeating blocks in neural architecture design. From Alexnet and VGG to ResNet and MobileNet, manually designed CNNs used to repeat blocks within the same stage. Previous NAS works keep the tradition as otherwise the searching cost will be unaffordable. Our work breaks the constraints, and we found this is actually a stereotype that needs to be corrected. \\n\\nThe new interesting design patterns, found by our method, can provide new insights for efficient neural architecture design. For example, people used to stack multiple 3x3 convs to replace a single large kernel conv, as this uses fewer parameters while keeping a similar receptive field. But we found this pattern may not be proper for designing efficient (low latency) networks: Two 3x3 depthwise separable convs actually run slower than a single 5x5 depthwise separable conv. Our GPU model, shown in Figure 4, incorporates large kernel convs and aggressively pools at early stages to shrink network depth. Then the model chooses computation-expensive operations at low-resolution stages. It also tends to choose computation-expensive operations in the first block within each stage where the feature map is downsampled. As a consequence, our GPU model can outperform previous SOTA efficient architectures in accuracy performances (e.g. 3.1% higher top-1 than MobileNetV2), while running faster than them (e.g. 1.2x faster than MobileNetV2). Such patterns cannot be found by previous NAS, as they optimize on proxy task and force blocks to share structures.\\n\\nc) Our method builds upon methods from two communities (one-shot architecture search from NAS community and Pruning/BinaryConnect from model compression community). It is the first time to incorporate ideas from the model compression community to the NAS community and we also provide a new path-level pruning perspective for one-shot architecture search. Moreover, we provide a unified framework for both gradient-based updates and REINFORCE-based updates. \\n\\nd) Our proxy-less NAS achieved very strong empirical results on two most representative benchmarks (i.e. CIFAR and ImageNet). On CIFAR-10, our optimized model reached 2.08% error rate with only 5.7M parameters, outperforming previous state-of-the-art architecture (AmeobaNet-B with 34.9M parameters). 
On ImageNet, we searched specialized neural network architectures for three different platforms (GPU, CPU and mobile phone). With latency constraints, our optimized models also achieved state-of-the-art results (3.1% higher top-1 accuracy while being 1.2x faster on GPU, and 2.6% higher top-1 accuracy with similar latency on mobile phone, compared to MobileNetV2). \\n\\nBesides, we directly optimize the latency, rather than an inaccurate proxy (i.e. FLOPs). It\\u2019s an important point that low FLOPs do not necessarily translate to low latency. All our speedup numbers are reported with real measured latency. We believe both our efficient search methodology and the resulting efficient models will have a big industry impact.\"}",
"{\"title\": \"Models on all platforms have been open sourced. Reproducible experiment verified on 3 different platforms.\", \"comment\": \"We sincerely thanks for the detailed feedback. Our pre-trained models and the evaluation code are provided in the following anonymous link for verifying our results: https://goo.gl/QU3GhA. We have also made a video to visualize the architecture search process: https://goo.gl/VAzGJs. We would like to release the entire codebase upon publication.\\n\\n>>> Response to \\u201cperformances are too good to be true\\u201d: \\nWe consider the comment as a compliment rather than a drawback. There are several reasons for our good results:\\na) Our proxy-less NAS *directly* learns on the *target* task while previous NAS methods *indirectly* learn on *proxy* tasks. For example, on CIFAR-10, DARTS [1] conducted architecture search experiments with 8 blocks due to their high memory consumption and then transferred the learned block structure to a much larger network with 20 blocks. This indirect optimization scheme would lead to suboptimal results while our proxy-less NAS does not suffer from this problem. \\n\\nb) We broke the convention in neural architecture design by *not* repeating the same building block structure. Our method explores a much larger architecture space compared to previous NAS methods (10^547 vs 10^18). Furthermore, our method has much larger block diversity and is able to learn preferences at different positions in the architecture.\\n \\nFor example, our optimized neural network architectures for GPU, CPU and mobile phone prefer to choose more computation-expensive operations (e.g. 7x7 MBConv6) for the last few stages where the resolution of feature map is low. They also prefer to choose more computation-expensive operations in the first block within each stage where the feature map is downsampled. We consider the ability to learn such patterns which are absent in previous NAS papers also helps to improve our results.\\n\\n>>> Response to \\u201cDPP-Net and NAO citations\\u201d: \\nApologize for the typo and missing a relevant paper in our reference part. We have fixed typo and added a reference to \\u201cNeural Architecture Optimization\\u201d. Thanks for pointing out our mistakes.\\n\\n[1] Liu H, Simonyan K, Yang Y. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. 2018.\"}",
"{\"title\": \"Proxyless NAS enables efficient and direct search on different tasks and hardware platforms\", \"comment\": \"We sincerely thank you for the detailed comments on our paper. We have revised the paper and fixed the typos accordingly.\\n\\n>>> Response to \\u201climited amount of tested settings\\u201d: \\nAs our proxy-less NAS has reduced the cost to the same level of normal training (100x more efficient on ImageNet), it is of great interest for us to apply proxy-less NAS to more settings and datasets. However, for this work, considering the resource constraints and time limits, we have strong reasons to believe that our experiment settings are sufficient:\\n\\na) Our experiments are conducted on two most representative benchmarks (CIFAR and ImageNet). It is in line with previous NAS papers and also makes it possible to compare our method with previous NAS methods. We also experimented with 3 different hardware platforms and observed consistent latency improvement over previous work. \\n\\nb) Moreover, on the challenging ImageNet classification task, we have conducted architecture search experiments under three different settings (GPU, CPU and Mobile) while previous NAS papers mainly transfer learned architectures from CIFAR-10 to ImageNet without conducting architecture search experiments on ImageNet [1, 2]. \\n\\n>>> Response to \\u201cno source code available\\u201d: \\nReviewer 2 also has similar requests, based on the concern on our strong empirical results. Our pre-trained models and the evaluation code are provided in the following anonymous link: https://goo.gl/QU3GhA. Besides, we have also uploaded the video visualizing the architecture search process: https://goo.gl/VAzGJs. We plan to open source our project upon publication.\\n\\n>>> Response to \\u201cthe size of the search space is not a very meaningful metric\\u201d: \\nThis might be a misunderstanding. We do not intend to use the size of our search space as a metric for comparison; instead, it is an important reason why our accuracy is much better than previous NAS methods. Previous NAS methods forced different blocks to share the same structure and only explored a limited architecture space (e.g. 10^18 in [2] and 10^10 in [3]). Our method, breaking the constraints, allows all of the blocks to be specified and has much larger search space (i.e. 10^547).\\n\\n[1] Zoph B, Vasudevan V, Shlens J, Le QV. Learning transferable architectures for scalable image recognition. CVPR 2018.\\n[2] Liu H, Simonyan K, Yang Y. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. 2018.\\n[3] Bender G, Kindermans PJ, Zoph B, Vasudevan V, Le Q. Understanding and simplifying one-shot architecture search. ICML 2018.\"}",
"{\"title\": \"We have uploaded the evaluation code and pretrained models\", \"comment\": \"Thanks for your interest in our work. The evaluation code and pretrained models are accessible at https://goo.gl/QU3GhA. We also made a video to visualize the architecture search process at https://goo.gl/VAzGJs . You are welcome to validate the performance. The entire codebase will be released upon publication.\\n\\nOur implementation is repeatable and reproducible. We used the same code base to search CPU/GPU/Mobile models. On all three platforms the performance consistently outperformed previous work, thanks to our Proxyless NAS enables searching over a large design space efficiently.\"}",
"{\"comment\": \"Dear authors, can you release your source code for readers to validate your experiment?\", \"title\": \"can you release code?\"}",
"{\"title\": \"Interesting combination of existing methods and good performance\", \"review\": [\"This paper addresses the problem of architecture search, and specifically seeks to do this without having to train on \\\"proxy\\\" tasks where the problem is simplified through more limited optimization, architectural complexity, or dataset size. The paper puts together a set of existing complementary methods towards this end, specifically 1) Training \\\"cumbersome\\\" networks as in One Shot and DARTS, 2) Path binarization to address memory requirements (optimized using ideas in BinaryConnect), and 3) optimizing a non-differentiable architecture using REINFORCE. The end result is that this method is able to find efficient architectures that achieve state of art performance with fewer parameters, can be optimized for non-differentiable objectives such as latency, and can do so with smaller amounts of GPU memory and computation.\", \"Strengths\", \"The paper is in general well-written and provides a clear description of the methods.\", \"Different choices made are well-justified in terms of the challenge they seek to address (e.g. non-differentiable objectives, etc.)\", \"The results achieve state of art while being able to trade off other objectives such as latency\", \"There are some interesting findings such as the need for specialized blocks rather than repeating blocks, comparison of architectures for CPUs vs. GPUs, etc.\", \"Weaknesses\", \"In the end, the method is really a combination of existing methods (One Shot/DART, BinaryConnect, use of RL/REINFORCE, etc.). One novel aspect seems to be factorizing the choice out of N candidates by making it a binary selection. In general, it would be good for the paper to make clear which aspects were already done by other approaches (or if it's a modification what exactly was modified/added in comparison) and highlight the novel elements.\", \"The comparison with One Shot and DARTS seems strange, as there are limitations place on those methods (e.g. cell structure settings) that the authors state they chose \\\"to save time\\\". While that consideration has some validity, the authors should explicitly state why they think these differences don't unfairly bias the experiments towards the proposed approach.\", \"It's not clear that the REINFORCE aspect is adding much; it achieves slightly higher parameters when compared against Proxyless-G, and while I understand the motivation to optimize a non-differentiable function in this case the latency example (on ImageNet) is never compared to Proxyless-G. It could be that optimized the normal differentiable objective achieves similar latency with the smaller number of parameters. Please show results for Proxyless-G in Table 4.\", \"There were several typos throughout the paper (\\\"great impact BY automatically designing\\\", \\\"Fo example\\\", \\\"is build upon\\\", etc.)\", \"In summary, the paper presents work on an interesting topic. The set of methods seem to be largely pulled from work that already exists, but is able to achieve good results in a manner that uses less GPU memory and compute, while supporting non-differentiable objectives. Some of the methodological issues mentioned above should be addressed though in order to strengthen the argument that all parts of the the method (especially REINFORCE) are necessary.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Solid work with convincing results\", \"review\": \"It seems the authors propose an efficient method to search platform-aware network architecture aiming at high recognition accuracy and low latency. Their results on CIFAR-10 and ImageNet are surprisingly good. But it is still hard to believe that the author can achieve 2.08% error rate with only 5.7M parameter on CIFAR10 and 74.5% top-1 accuracy on ImageNet with less GPU hours/memories than prior arts.\\n\\nGiven my concerns above, the author must release their code and detail pipelines since NAS papers are difficult to be reproduced.\", \"there_is_a_small_typo_in_reference_part\": \"Jing-Dong Dong's work should be DPP-Net instead of PPP-Net (https://eccv2018.org/openaccess/content_ECCV_2018/papers/Jin-Dong_Dong_DPP-Net_Device-aware_Progressive_ECCV_2018_paper.pdf)\\nand I think this paper \\\"Neural Architecture Optimization\\\" shoud be cited.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting idea for efficient NAS that gives state-of-the-art results (on limited datasets)\", \"review\": [\"The algorithm described in this paper is part of the one-shot family of architecture search algorithms. In practice this means training an over-parameterized architecture, of which the architectures being searched for are sub-graphs. Once this bigger network is trained it is pruned into the desired sub-graph. The algorithm is similar to DARTS in that it it has weights that determine how important the various possible nodes are, but the interpretation here is stochastic, in that the weight indicates the probability of the component being active. Two methods to train those weights are being suggested, using REINFORCE and using BinaryConnect, both having different trade offs.\", \"(minor) *cumbersome* network seems the wrong term, maybe over-parameterized network?\", \"(minor) I do not think that the size of the search space a very meaningful metric\"], \"pros\": [\"Good exposition\", \"Interesting and fairly elegant idea\", \"Good experimental results\", \"Cons\", \"tested on a limited amount of settings, for something that claims that helps to automate the creation of architecture. I think this is the main shortcoming, although shared by many NAS papers\", \"No source code available\"], \"some_typos\": [\"Fo example, when proxy strategy -> Fo*r* example\", \"normal training in following ways. -> in *the* following ways\", \"we can then derive optimized compact architecture.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"New experiment results on mobile phone\", \"comment\": \"Hi all,\\n\\nOur efficient algorithm allows us to specialize neural network architectures for different devices easily. Recently, we extended our proxyless NAS to the mobile setting and achieved SOTA result with mobile latency constraint (< 80ms latency on Pixel 1 phone) as well. The following is our current results on ImageNet (Device: Pixel 1. Batch size: 1. Framework: TF-Lite):\\n\\nModel\\t\\t\\t\\tTop-1\\tTop-5\\tMobile latency\\nMobileNet V1\\t\\t70.6\\t\\t89.5\\t\\t113ms\\nMobileNet V2\\t\\t72.0\\t\\t91.0\\t\\t75ms\\nNASNet-A\\t\\t\\t74.0\\t\\t91.3\\t\\t183ms\\nAmeobaNet-A\\t\\t74.5\\t\\t92.0\\t\\t190ms\\nMnasNet\\t\\t\\t74.0\\t\\t91.8\\t\\t76ms\\nMnasNet (our impl.)\\t74.0\\t\\t91.8\\t\\t79ms\\nProxyless NAS (ours)\\t74.6\\t\\t92.2\\t\\t78ms\", \"the_detailed_architectures_of_our_searched_models_and_their_learning_process_are_provided_in_the_following_anonymous_link\": \"\", \"https\": \"//drive.google.com/open?id=1nut1owvACc9yz1ZPqcbqoJLS2XrVPp1Q\"}"
]
} |
|
r1x4BnCqKX | A Generative Model For Electron Paths | [
"John Bradshaw",
"Matt J. Kusner",
"Brooks Paige",
"Marwin H. S. Segler",
"José Miguel Hernández-Lobato"
] | Chemical reactions can be described as the stepwise redistribution of electrons in molecules. As such, reactions are often depicted using "arrow-pushing" diagrams which show this movement as a sequence of arrows. We propose an electron path prediction model (ELECTRO) to learn these sequences directly from raw reaction data. Instead of predicting product molecules directly from reactant molecules in one shot, learning a model of electron movement has the benefits of (a) being easy for chemists to interpret, (b) incorporating constraints of chemistry, such as balanced atom counts before and after the reaction, and (c) naturally encoding the sparsity of chemical reactions, which usually involve changes in only a small number of atoms in the reactants. We design a method to extract approximate reaction paths from any dataset of atom-mapped reaction SMILES strings. Our model achieves excellent performance on an important subset of the USPTO reaction dataset, comparing favorably to the strongest baselines. Furthermore, we show that our model recovers a basic knowledge of chemistry without being explicitly trained to do so. | [
"Molecules",
"Reaction Prediction",
"Graph Neural Networks",
"Deep Generative Models"
] | https://openreview.net/pdf?id=r1x4BnCqKX | https://openreview.net/forum?id=r1x4BnCqKX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BkeWWT7WxV",
"r1xA3anNRQ",
"BJgVD33V0X",
"BJl6GhhNRX",
"BkgUnF3V0Q",
"rygRtthVCQ",
"HJeXgB2VCX",
"rklaBviq3m",
"BkxM4Zo9nQ",
"Byxatx75nQ",
"H1ljey75h7",
"SJlofDCyn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544793336604,
1542929845721,
1542929499632,
1542929429415,
1542928814238,
1542928774146,
1542927594947,
1541220165292,
1541218601803,
1541185669182,
1541185267064,
1540511507019
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1533/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1533/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1533/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1533/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1533/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1533/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1533/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1533/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1533/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1533/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1533/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1533/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a graph neural network that represents the movements of electrons during chemical reactions, trained from a dataset to predict reactions outcomes.\\n\\nThe paper is clearly written, the comparisons are sensical. There are some concerns by reviewer 3 about the experimental results: in particular the lack of a simpler baseline, and the experimental variance. I think the some of the important concerns from reviewer 3 were addressed in the rebuttal, and I hope the authors will update the manuscript accordingly.\\n\\nOverall, this is fitting for publication at ICLR 2019.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting application of graph neural networks\"}",
"{\"title\": \"We have run K-fold CV with ELECTRO-LITE and provide further details on a comment on AnonReviewer3's initial review\", \"comment\": \"We have responded to this comment on our answer to your review and provide a detailed response there. We hope this addresses your concerns. In summary, we show on ELECTRO-LITE using K-fold cross validation that the variation within a method is on the order of tenths of a percent, whereas the difference between the baseline methods we analyse is much greater than this.\"}",
"{\"title\": \"Thank you for your review, in answer to your questions (part 2)\", \"comment\": \"(previous comment continued to answer the remaining questions)\\n\\n## Questions\\n\\n### Questions 1. Is splitting up the electron movement model into bond \\u201cremoval\\u201d and \\u201caddition\\u201d steps just a matter of parameterization or is that physically how the movements work? \\n\\nIt\\u2019s physically how the movements work, the LEF class of reactions consists of movements which effectively remove and then add bonds. However, we note that representing the molecules as graphs is an abstraction, although powerful, and there are subtleties not contained in our model, such as conformational information.\\n\\n### Questions 2. It appears that Jin et al reports Top 6/8/10 whereas this work reports Top 1/3/5 accuracy on the USPTO dataset. It would be nice if there was overlap :-). Do your Top 6/8/10 results with the WLDN model agree with the Jin et al paper?\\n\\nIt looks like we both report top 1/3/5 accuracy but that perhaps confusion is arising from comparing different tables?\", \"one_can_think_of_the_jin_et_al_model_consisting_of_two_parts\": \"(1) The reaction centre predictor, which generates pairs of atoms for which bonds may change (2) The candidate ranker, which evaluates enumerated configuration changes between the pairs. This second stage comes up with the final product.\\n\\nIn table 1a Jin et al report the coverage of the true reaction bonds when including more reaction pairs. This they do after filtering for the top 6/8/10 candidates, and this is perhaps what you are referring to? We do not have the same pipeline of filter, enumerate, rank and so we cannot (and it does not make sense for us) to run this experiment. \\n\\nIn table 1b they report top 1/3/5 accuracy for the reaction prediction task (given that stage 1 is fixed to give 6 pairs). And hence we use the same accuracies in this work.\", \"hope_this_clears_up_the_confusion\": \").\\n\\n## Nits\\nThanks for picking up these typos and other areas for small improvement. We shall fix/describe these things!\"}",
"{\"title\": \"Thank you for your review, in answer to your questions (part 1)\", \"comment\": \"Thank you for your thoughtful and encouraging review! We go through your comments and questions below.\\n\\n## Marginalisation and Symmetries\\nThank you for your suggestion to marginalise out over the different paths when optimizing solely for reaction prediction. We think this is an interesting suggestion and we would like to explore this in future work. A challenge remains on how best to perform the marginalisation as the action space can be very large, so we would probably have to sample.\\n\\nAlthough we do not account for symmetries when extracting a supervision signal, as we are using graph neural networks to parameterize our functions, the probabilities we predict for the two (or more) symmetrical actions should be equal. Constructing the automorphism group of a graph is computationally at least as hard as the graph isomorphism problem so we have avoided doing this during training so far, however we agree it would make for interesting future work.\\n\\n\\n## Improvements\\nThank you for the suggested improvements. We will add a discussion on these points to our paper, as we agree that it will improve the paper. We also briefly discuss these points below.\\n\\n### Improvements 1. Motivate machine learning approach to learn arrow pushing models. Limitations to arrow-pushing models\\n\\nArrow pushing models (and more generally methods that model molecules as graphs) abstract away the details about the electronic structure, and conformational information, ie information about how the molecule shape changes in 3D. This information is crucial in some cases. That said the arrow-pushing abstraction is extremely powerful. It allows chemists to make very quick, but accurate predictions without doing any quantum simulations, just using pencil and paper, and to understand relations between reaction classes, which is often not possible using quantum mechanics alone.\\n\\nWe believe that using ML to learn arrow pushing models is a sensible and beneficial approach. Currently this task is done by expert chemists. We are building off a simplification and abstraction that chemists have shown to be useful and powerful. Using ML to learn reactions in this way makes our model interpretable and easy to query.\\n\\n### Improvements 2. LEF reactions\\n> Why only \\u2018heterolytic\\u2019 LEF reactions? Are there other types of LEF reactions?\\nThere are also \\u2018homolytic\\u2019 LEF reactions which involve a single electron moving (instead of a pair). We will clarify this in the manuscript! However, we will leave their treatment for future work.\\n\\n> Challenges of extending the model on the modelling front versus the data collection front?\\nYes we believe that the model could be extended in future work to deal with a greater class of reactions. For instance some reactions can be broken down into multiple electron paths, where several pairs of electrons get shifted at the same time, which could be modelled by simply running ELECTRO multiple times. However, yes extracting paths to train on (if they overlap) remains an outstanding challenge, perhaps requiring some human supervision or quantum mechanical calculations for creating any training set. 
Also, having access to more fine-grained datasets, which not only feature the reactants and products of reactions, but also identifiable (stable) intermediates, would likely allow better predictions.\\nHaving said that, the LEF reactions that ELECTRO currently can handle are very common (they make up over 70% of the reactions in the USPTO dataset) and we hope that the heuristics and trends the model learns on this set will also be of use when making predictions for other reaction types.\"}",
"{\"title\": \"Thank you for your review, in answer to your questions (part 2)\", \"comment\": \"(comment continued)\\n\\n## End-to-end models and how we can differentiate through our model end-to-end subject to chemical constraints\\nBy end-to-end we mean that our full model can be trained from input to output purely using gradient-based techniques. In our approach this manifests itself by training on each action of the model simultaneously. At train time this is possible by conditioning on the correct previous actions from previous time steps when predicting the actions at latter time steps. \\n\\nThis contrasts with previous approaches to this problem (e.g. Jin et al., 2017 or Kayala & Baldi, 2011, 2012) which break down then problem into several steps and separate models. These models have to be trained in stages. These approaches often can be broken down into three stages (i) ML based filtering of \\u2018reaction sites\\u2019 (ii) manual enumeration of all possible changes that can occur at these \\u2018reaction sites\\u2019, (iii) ML based ranking of these enumerated options. As these approaches split the problem down into separate processes, they cannot leverage the power of state-of-the-art gradient-based techniques to solve the full problem. This means their solutions are likely suboptimal local minima when composed. On the other hand, our model adjusts all parameters simultaneously to solve the goal of reaction prediction. As shown in our experimental results, this end-to-end approach allows us to improve upon prior work, with the added benefit of approaching the problem in a chemist-interopratible way.\\n\\nThe chemical constraints are encoded as masked out operations in our model. These disallowed operations are shown as red crosses in Figure 2 and are represented by the $\\\\beta$ terms in eqns 2 through 4. Note that we never have to differentiate through a masked out operation during training, as by definition these do not happen, thus our model can still be end-to-end.\\n\\n\\n## Mechanism Prediction Baseline\\nSorry for the confusion here. The previous mechanism prediction work has used a private, expert-curated training set. These datasets include expert information about electron sources and sinks as well as reaction conditions such as temperature and anion/action solvation potential (Kayala and Baldi, 2011; Section 2). This has meant these datasets are often small (Fooshee et al, 2018 (Section 2.3) has a dataset size of around 11 000). \\n\\nYou are correct, we could use our approximate reaction mechanism extraction method to label sources and sinks. However, we would still not be able to provide the full reaction conditions data, as this data does not currently exist in the patent dataset (Lowe, 2012, Section 4.11.8).\\n\\nMoreover, a separate issue is that these previous mechanism methods also need expert-curated features. This includes molecular orbital data and steric information among others. These features are hard to encode, indeed in the earlier work and until Fooshee et al (2018) the chemical model used for their reaction predictor could not handle the elements Sc, Ti, Zn, As, or Se. As well as requiring these features, extra expert-encoded constraints are required, such as the number of bonds particular elements can form.\\n\\nIt is due to these requirements that we have described these methods as needing \\u2018expert-curated\\u2019\\u2019 training sets. 
However, we shall make this clearer in our paper.\\n\\nThis so far has described why we cannot run their methods on our dataset. Conversely, we cannot run our method on their dataset, as their data is currently private (they were also unable to release it via email). \\n\\nWe think the reaction predictor proposed in these previous works is an interesting model and are disappointed that we are currently unable to compare against it on any benchmark task. We hope to open source our code in the future so that comparisons to ELECTRO can be made by others. \\n\\n\\n## \\u2018Random guessing\\u2019 accuracies on our dataset\\nIt is easiest to compare against a random baseline on the reaction mechanism task, where the exact steps, including how many there are, are known. We consider a random guessing model, which assigns equal weight to each atom. However, we keep the masking we use with ELECTRO, so that the random model is restricted to chemically plausible options.\\nOn the 29,360-reaction test set, the random baseline would get the correct answer for a reaction with a mean probability of 5x10^-5. This is exceedingly low and is due to the very large number of actions the model can take on the initial select and add steps.\"}",
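As a generic illustration of the masking described above (the red crosses / beta terms), one common way to keep such hard constraints compatible with gradient-based training is to give disallowed actions -inf logits before the softmax; this is a minimal sketch of the idea, not the paper's code.

```python
import torch

def masked_action_probs(logits, allowed):
    """Action distribution with chemically disallowed actions removed.

    logits: unnormalized per-action scores from the graph network.
    allowed: bool tensor, False where chemistry rules out the action.
    Masked entries get -inf, so they receive zero probability and
    contribute no gradient - the rest of the model trains end-to-end.
    """
    masked = logits.masked_fill(~allowed, float('-inf'))
    return torch.softmax(masked, dim=-1)

# example: 5 candidate actions, of which the 2nd and 5th are implausible
probs = masked_action_probs(torch.randn(5),
                            torch.tensor([True, False, True, True, False]))
```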
"{\"title\": \"Thank you for your review, in answer to your questions (part 1)\", \"comment\": \"Thank you for taking the time to review our paper. We go through your concerns in more details below. As a brief summary we:\\nShow on ELECTRO-LITE using K-fold cross validation that the variation within a method is on the order of tenths of a percent, whereas the difference between the baseline methods we analyse is much greater than this.\\nExplain how we can encode chemical restraints whilst being able to compute the gradient of the parameters of our end-to-end model\\nProvide the results for a random baseline on the mechanism prediction task. The top-1 accuracy of such a baseline would be less than one percent, due to the large number of possible actions at each step.\\n\\n\\n## K-fold cross validation\\nWhile we agree that K-fold cross validation would be ideal, practically, as mentioned by the other reviewer\\u2019s comment, it would be difficult; it would require considerable computational resources. This is particularly true for the seq2seq model, which is compute hungry while training (to be fair to other methods we would need to cross validate these methods too, as this has not been done by these works).\", \"we_also_wish_to_make_it_clear_that\": \"We used the relevant LEF subsets of the same pre-defined train, validation and test sets used by Jin et al (2017) and Schwaller et al (2018). \\nThis methodology, following the common task framework (Donoho D (2015), Section 6), of developing and testing on pre-defined splits of a dataset is used throughout ML from machine translation to image classification, such as when evaluating models using the ML benchmarks: ImageNet, CIFAR-10 or even MNIST.\\nAll development of the algorithm was done using only the training and the validation sets, with the test set only being used for the final evaluation to get the numbers reported in this paper.\\n\\nNote that we are using a large test set of 29360 items.Treating the probability of success on each test set item as a Bernoulli variable, our top-1 reported mean product prediction accuracies for ELECTRO and ELECTRO-LITE have standard deviations less than 0.25% (0.24% for ELECTRO-LITE and 0.20% for ELECTRO).\\n\\nFurthermore, we also tried to rule out that significant variance could occur due to training/test set differences by performing 3-fold cross validation with the ELECTRO-LITE model. In order to do this we first merged the current training, validation and test sets. For the mechanism prediction task (Table 2) we report the results of these runs in the Table below:\\n\\n Accuracies (%) \\n top-1 | top-3 | top-5 \\n------------------------------------------------ \\nFold 1 | 69.8 | 87.2 | 91.6 \\nFold 2 | 70.0 | 87.2 | 91.5\\nFold 3 | 69.5 | 87.0 | 91.3\\n\\nNote that these figures are not directly comparable to the previous work as the training/test set sizes have changed. In particular, the training set has got smaller and the test set larger. However, this suggests the variation is on the order of tenths of a percent, whereas the difference between the different methods we analyse is much greater than this.\\n\\n\\nDonoho D (2015) 50 years of Data Science. URL http://courses. csail. mit. edu/18 337: 2015.\\n\\n(comment continued separately due to space)\"}",
"{\"title\": \"Thank you for your review, in answer to your questions\", \"comment\": \"Thank you for your thoughtful and encouraging review! We go through your questions below.\\n\\n## 1. 1st order Markovian\\nAlthough the quantum-mechanical reaction mechanism is Markov, it\\u2019s true that this does not necessarily hold when considering models with a graph state, as the graph representation does not fully capture all the details of the electronic structure in some corner cases (see e.g. the textbook Gasteiger - Chemoinformatics). However, we believe the graph abstraction of molecules and their reactions to be a powerful and useful representation, due to its employment in previous machine learning approaches (eg Jin et al, 2017) and its widespread use by chemists. Therefore, we believe using a Markovian model on the molecular graph is a sensible assumption and one that is validated by our strong results. In practice we do not notice ELECTRO undoing its own work or stalling. \\n\\nWe agree that an exciting future direction is to extend the model to cover a greater class of reactions. For this, exploring the non-Markovian structure you describe in a chemically-reasonable model would definitely be a sensible thing to do.\\n\\n\\n## 2. Difference to previous mechanism prediction work\\nSorry for the confusion here. The previous mechanism prediction work has used a private, expert-curated training set. These datasets include expert-curated information about electron sources and sinks as well as reaction conditions such as temperature and anion/action solvation potential (Kayala and Baldi, 2011; Section 2). This has meant these datasets are often small (Fooshee et al, 2018 (Section 2.3) has a dataset size of around 11 000). \\n\\nYou are correct, we could use our approximate reaction mechanism extraction method to label sources and sinks. However, we would still not be able to provide the full reaction conditions data, as this data does not currently exist in the patent dataset (Lowe, 2012, Section 4.11.8).\\n\\nMoreover, a separate issue is that these previous mechanism methods also need expert-defined features. These features include molecular orbital data and steric information among others. These features are hard to encode, indeed in the earlier work and until Fooshee et al (2018) the chemical model used for their reaction predictor could not handle the elements Sc, Ti, Zn, As, or Se. As well as requiring these features, extra expert-encoded constraints are required, such as the number of bonds particular elements can form.\\n\\nIt is due to these requirements that we have described these methods as needing \\u2018expert-curated\\u2019\\u2019 training sets. However, we shall make this clearer in our paper.\\n\\n\\n## 3. Why reagents are not passed in at later steps.\\nIt is indeed largely for computational reasons that we separate reactants and reagents, and pass reagents to just the starting network. We found that choosing the first entry in the electron path is often the most challenging decision, and that action steps after this have access to the previous atom as context, making it an easier task.\\nQualitatively, when running Electro-lite on the separate validation set we would see that the model often had the most errors on the first step, and that after picking this first step the next stages would often be correctly predicted.\\nWe have tested a version of Electro where reagent information is fed in as context at each step. 
\nOn the mechanism prediction task (Table 2) this gets a slightly improved top-1 accuracy of 78.4% (77.8% before) but a similar top-5 accuracy of 94.6% (94.7% before). On the reaction product prediction task (Table 3) we get 87.5%, 94.4% and 96.0% top-1, 3 and 5 accuracies (87.0%, 94.5% and 95.9% before). The tradeoff is that this model is somewhat more complicated and requires a greater number of parameters.\"}",
"{\"title\": \"practical restrictions?\", \"comment\": \"Your point is well taken and I personally also think this tradition can be problematic, but probably due to the practical computational cost, evaluations on a single shot training/test split would be standard and often considered as acceptable (for example, consider ImageNet cases) when we use quite complicated neural networks trained with large datasets. We can't get SDs/CIs from one-shot evaluations...\\n\\nUSPTO seems also quite a large dataset each representing a set of graphs, and even for LEF reactions (349,898 reactions, of which 29,360 form the held-out test set as in p.7).\"}",
"{\"title\": \"A quite interesting contribution that also brings more clearer interpretations on what is learned\", \"review\": \"Summary:\\nThe paper presents a novel method for predicting organic chemical reactions, in particular, for learning (Robinson-Ingold's) ''arrow pushing\\\" mechanisms in an end-to-end manner. Organic molecules consist of covalent bonds (that's why we can model them as molecular graphs), and organic reactions are recombinations of these bonds. As seen in organic chemistry textbooks, traditional chemists would qualitatively understand organic reactions as an alternating series of electron movements by bond breaking (bond cleavage) and bond forming (bond formation). Though now quantum chemistry calculations can give accurate quantitative predictions, these qualitative understanding of organic reactions still also gives strong foundations to consider and develop organic reactions. The proposed method tries to learn these series of bond changes directly through differentiable architectures consisting of three graph neural networks: 1) the one for determining the initial atom where electron movements start, 2) the one for representing state transitions from the previous bond change to the next, and 3) the one for determining when the electron movements end. Experimental evaluations illustrate the quantitative improvement in final product prediction against existing methods, as well as give chemical intuitions that the proposed method can detect a class of LEFs (linear electron flows).\", \"comment\": \"- This study is a quite interesting contribution because many existing efforts focus on improving differentiable architecture design for graph transformation and test it using chemical reaction data without considering what is learned after all. In contrast, this paper gives the clear meaning to predict \\\"arrow pushing\\\" mechanism from chemical reaction data and also makes sure the limitation to LEFs that are heterolytic. Traditional graph rewrite systems or some recent methods directly borrowing ideas from NLP do not always give such clear interpretations even though it can somehow predict some outputs.\\n\\n- The presentation of the paper is clear and in very details, and also provides intuitive illustrative examples, and appendix details on data, implementations, and related knowledge. \\n\\n- The architecture is based on graph neural networks, and seem natural enough. Basically, I liked overall ideas and quite enjoyed them but several points also remained unclear though I'm not sure at all about chemical points of view.\\n\\n1) the state transition by eq (2)-(4) seems to assume 1-st order Markovian, but the electron flow can have longer dependence intuitively. Any hidden states are not considered and shared between these networks, but is this OK with the original chemical motivations to somehow model electron movements? The proposed masking heuristics to prevent stalling would be enough practically...? (LEF limitations might come from this or not...?)\\n\\n2) One thing that confuses me is the difference from approaches a couple of work described at the beginning of section 'Mechanism prediction (p.3)', i.e. Fooshee et al 2018; Kayala and Baldi, 2011, 2012; Kayala et al, 2011. I don't know much about these studies, but the paper describes as \\\"they require expert-curated training sets, for which organic chemists have to hand-code every electron pushing step\\\". 
But for \"Training\" (p.6) of the proposed method, it also describes \"this is evaluated by using a known electron path and intermediate products extracted from training data\". Does this mean that the proposed method also needs correct arrow-pushing annotations for supervised learning? These sound like somewhat contradictory statements?\\n\\n3) Is it just for computational efficiency that we need to separate reactants and reagents? The reagent info M_e is only used for the network for \"starting location\", but intuitively it can affect any intermediate step of the elementary transitions (to break through the highest energy barrier at some point of the elementary transitions?). Don't we need to also pass M_e to the other networks, in particular, the one for \"electron movement\"?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"impossible to tell if \\\"improves performance\\\" without some variance measure\", \"comment\": \"In response to \\\"the biggest selling point is that it improves performance in predicting the ultimate reaction outcome\\\" -- in fact it is impossible to tell if there is any significant improvement in prediction accuracy, because the paper reports no measure of variance (confidence interval or standard deviation). The results section needs to be improved by adding these details, (use K-fold cross-validation and compute mean/sd of prediction accuracy over the K test sets) so the reader can indeed determine whether or not there is any significant difference in prediction accuracy.\"}",
"{\"title\": \"Potentially interesting and novel ideas but impossible to tell if they are significant due to low-quality results section\", \"review\": \"Review of \\\"A Generative Model for Electron Paths\\\"\", \"paper_summary\": \"The paper proposes a new model for predicting arrow-pushing chemical\\nreaction diagrams from raw reaction data.\", \"section_1_summarizes_the_motivation\": \"whereas other models only predict\\nchemical reaction products from reactants, the proposed model attempts\\nto also predict the reaction mechanism.\\n\\nSection 2 provides a background on related work. Previous models for\\nmechanism prediction are limited to work which require expert-curated\\ntraining sets. The proposed model is designed for a subset of\\nreactions called \\\"linear electron flow\\\" (LEF) which is\\nexplained. Contributions of this paper are an end-to-end model, a\\ntechnique for identifying LEF reactions/mechanisms from\\nreaction/product data, and an empirical study of how the model learns\\nchemical knowledge.\\n\\nSection 3 explains the proposed generative model, which represents a molecule\\nusing a graph (nodes are atoms and edges are bonds). It is proposed to\\nlearn a series of electron actions that transform the reactants into\\nthe products. The total probability is factorized into three parts:\\nstarting location, electron movement, and reaction\\ncontinuation. Figure 2 and Algorithm 1 are helpful.\\n\\nSection 4 explains the proposed method for creating mechanism data\\nfrom chemical reactant/product databases. Figure 3 is helpful.\\n\\nSection 5 discusses results of predicting mechanisms and products on\\nthe USPTO data set.\", \"comments\": \"\", \"strong_points_of_the_paper\": \"(1) it is very well written and easy to\\nunderstand, (2) the chemical figures are very well done and helpful,\\nand (3) the method for predicting mechanisms seems to be new.\\n\\nThe major weak point of the paper is the results section, which needs\\nto be improved before publication.\\n\\nIn particular Tables 2-3 (comparison of prediction accuracy) need to\\nshow some measure of variance (standard deviation or confidence\\ninterval) so the reader can judge if there is any significant\\ndifference between models. Please use K-fold cross-validation, and\\nreport mean/sd of test accuracy over the K test folds.\\n\\nThe term \\\"end-to-end\\\" should be defined. In section 2.2 it is written\\n\\\"End-to-End: There are many complex chemical constraints that limit\\nthe space of all possible reactions. How can we differentiate through\\na model subject to these constraints?\\\" which should be clarified using\\nan explicit definition of \\\"end-to-end.\\\"\\n\\nAlso there needs to be some comparison with baseline methods for\\npredicting mechanisms. It is claimed that no comparison can be made\\nagainst the previous methods for mechanism prediction (Section 2.2),\\nbecause \\\"they require expert-curated training sets, for which organic\\nchemists have to hand-code every electron pushing step.\\\" However the\\ncurrent paper proposes a method for generating such steps/data for LEF\\nreactions. So why not use those data to train those baseline models,\\nand compare with them? That would make for a much stronger paper. Please\\nadd at least one of the methods discussed in section 2.2 to your\\naccuracy comparison in Table 2.\\n\\nIt would additionally be helpful to know what the \\\"uninformed\\nbaseline\\\" / \\\"ignore the inputs\\\" / \\\"random guessing\\\" accuracy rates are\\non your data set. 
For example, in classification the uninformed\nbaseline always predicts the class which is most frequent in the\ntraining data, and in regression it predicts the mean of the\nlabels/outputs in the training data. What would the analogy be for\nyour two problems (product and mechanism prediction)?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
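The three-part factorization summarized in the reviews above (starting location, electron movement, reaction continuation) can be written out explicitly. The following is an illustrative sketch only; the variable names and the exact conditioning are assumptions, not the paper's own equations (2)-(4):

```latex
% Illustrative factorization of the electron-path probability.
% M_0: reactant graph; a_0: starting atom; a_t: atom reached at step t;
% M_t: intermediate graph after step t; c_t: binary "continue" variable.
\[
p(\text{path} \mid M_0) =
\underbrace{p_{\text{start}}(a_0 \mid M_0)}_{\text{starting location}}
\prod_{t=1}^{T}
\underbrace{p_{\text{move}}(a_t \mid a_{t-1}, M_{t-1})}_{\text{electron movement}}
\cdot
\underbrace{p_{\text{cont}}(c_t \mid M_t)}_{\text{reaction continuation}}
\]
```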
"{\"title\": \"good paper, nice contribution\", \"review\": \"The paper presents a novel end-to-end mechanistic generative model of electron flow in a particular type of chemical reaction (\\u201cLinear Electron Flow\\u201d reactions) . Interestingly, modeling the flow of electrons aids in the prediction of the final product of a chemical reaction over and above problems which attack this \\u201cproduct prediction problem\\u201d directly. The method is also shown to generalize well to held-out reactions (e.g. from a chemistry textbook).\\n\\nGeneral Impressions\\n\\n+ For me the biggest selling point is that it improves performance in predicting the ultimate reaction outcome. It should do because it provides strictly more supervision, but it\\u2019s great that it actually does. \\n+ Because it models the reaction mechanism the model is interpretable, and it\\u2019s possible to enforce constraints, e.g. that dynamics are physically possible.\\n+ Generalises outside of the dataset to textbook problems :-)\\n+ Well-founded modeling choices and neural network architectures.\\n- Only applies to a very particular type of reaction (heterolytic LEF). \\n- Requires supervision on the level of electron paths. This seems to inhibit applying the model to more datasets or extending it to other types of reactions.\\n- Furthermore the supervision extraction does not seem take advantage of symmetries noted in the section(s) about difficulty evaluating inference. \\n- It would be nice to margin out the electron flow model and just maximize the marginal likelihood for the product prediction problem.\\n\\nNovelty\\nI\\u2019m not an expert on the literature of applying machine learning to the problems of reaction {product, mechanism} prediction but the paper appears to conduct a thorough review of the relevant methods and occupy new territory in terms of the modeling strategy while improving over SOTA performance.\\n\\nClarity\\nThe writing/exposition is in general extremely clear. Nicely done. There are some suggestions/questions which I think if addressed would improve clarity.\\n\\nWays to improve the paper\\n1. Better motivate the use of machine learning on this problem. What are the limitations of the arrow-pushing models? \\n\\n2. Explain more about the Linear Electron Flow reactions, especially:\\n- Why does the work only consider \\u201cheterolytic\\u201d LEF reactions, what other types of LEF reactions are omitted?\\n- Is the main blocker to extending the model on the modeling front or the difficulties of extracting ground-truth targets? It appears to be the latter but this could be made more clear. Also that seems to be a pretty severe limitation to making the algorithm more general. Could you comment on this?\\n\\nQuestions\\n1. Is splitting up the electron movement model into bond \\u201cremoval\\u201d and \\u201caddition\\u201d steps just a matter of parameterization or is that physically how the movements work? \\n\\n2. It appears that Jin et al reports Top 6/8/10 whereas this work reports Top 1/3/5 accuracy on the USPTO dataset. It would be nice if there was overlap :-). Do your Top 6/8/10 results with the WLDN model agree with the Jin et al paper?\\n\\n\\nNits\\nSection 2.3, first paragraph \\u201c...(LEF) topology is by far the most important\\u201d: Could you briefly say why? It\\u2019s already noted that they\\u2019re the most common in the database. Why?\\n\\nSection 3.ElectionMovement, first paragraph. \\u201cObserver that since LEF reactions are a single path of electrons\\u2026\\u201d. 
Actually, it\u2019s not super clear what this means from the brief description of LEF. Can you explain these reactions in slightly more detail?\\n\\nSection 3.ElectronMovement, second paragraph. \\u201cDifferently, the above distribution can be split\\u2026\\u201d. Awkward phrasing. How about \\u201cIn contrast, the above distribution can be split\\u2026\\u201d. \\n\\nSection 3.Training, last sentence \\u201c...minibatches of size one reaction\\u201d. Slightly awkward phrasing. Maybe \\u201c...minibatches consisting of a single reaction\\u201d?\\n\\nSection 5.2, second sentence. \\u201cHowever, underestimates the model\\u2019s actual predictive accuracy\\u2026\\u201d. It looks like a word accidentally got deleted here or something.\\n\\nSection 5.2, paragraph 4. \\u201cTo evaluate if our model predicts the same major project\\u201d... Did you mean \\u201cthe same major product\\u201d?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rylNH20qFQ | Learning to Infer and Execute 3D Shape Programs | [
"Yonglong Tian",
"Andrew Luo",
"Xingyuan Sun",
"Kevin Ellis",
"William T. Freeman",
"Joshua B. Tenenbaum",
"Jiajun Wu"
] | Human perception of 3D shapes goes beyond reconstructing them as a set of points or a composition of geometric primitives: we also effortlessly understand higher-level shape structure such as the repetition and reflective symmetry of object parts. In contrast, recent advances in 3D shape sensing focus more on low-level geometry but less on these higher-level relationships. In this paper, we propose 3D shape programs, integrating bottom-up recognition systems with top-down, symbolic program structure to capture both low-level geometry and high-level structural priors for 3D shapes. Because there are no annotations of shape programs for real shapes, we develop neural modules that not only learn to infer 3D shape programs from raw, unannotated shapes, but also to execute these programs for shape reconstruction. After initial bootstrapping, our end-to-end differentiable model learns 3D shape programs by reconstructing shapes in a self-supervised manner. Experiments demonstrate that our model accurately infers and executes 3D shape programs for highly complex shapes from various categories. It can also be integrated with an image-to-shape module to infer 3D shape programs directly from an RGB image, leading to 3D shape reconstructions that are both more accurate and more physically plausible. | [
"Program Synthesis",
"3D Shape Modeling",
"Self-supervised Learning"
] | https://openreview.net/pdf?id=rylNH20qFQ | https://openreview.net/forum?id=rylNH20qFQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1lYxWLeg4",
"B1eBI3bX1V",
"rkeONhbm1V",
"B1x25TsACQ",
"HyeEz-wc0X",
"SJe4Qiuia7",
"H1gYAq_jpm",
"BJgsqq_j6X",
"BkeHU5ujT7",
"B1exRtOoaX",
"SJxturVUp7",
"H1x-SEpr67",
"HJx2dMfz6Q",
"BJeOVf6bpm",
"SkxcaAs33X",
"BkgGJKp937",
"r1eC44O53X",
"r1x2ILEgiX",
"S1gRbI4ei7",
"BkgRxd6j5Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_comment",
"comment"
],
"note_created": [
1544737008582,
1543867468589,
1543867440304,
1543581076025,
1543299340284,
1542322971698,
1542322897499,
1542322834777,
1542322765140,
1542322631789,
1541977456938,
1541948472589,
1541706356070,
1541685808510,
1541353153852,
1541228761662,
1541207094347,
1539487315846,
1539487237653,
1539196917933
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1532/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1532/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1532/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1532/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1532/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents a method whereby a model learns to describe 3D shapes as programs which generate said shapes. Beyond introducing some new techniques in neural program synthesis through the use of loops, this method also produces disentangled representations of the shapes by deconstructing them into the program that produced them, thereby introducing an interesting and useful level of abstraction that could be exploited by models, agents, and other learning algorithms.\\n\\nDespite some slightly aggressive anonymous comments by a third party, the reviewers agree that this paper is solid and publishable, and I have no qualms in recommending it from inclusion in the proceedings.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper\"}",
"{\"title\": \"Revision Uploaded\", \"comment\": \"Dear Reviewer 2,\\n\\nThanks again for your constructive comments. We have made substantial changes in the revision according to the reviews. In particular, we have compared our model with three additional baselines, including CSGNet, in Table 2 and Sec 5.2. We\\u2019ve also discussed the design of DSL and search-based models (Sec 6). \\n\\nAs the discussion period is about to end, please don\\u2019t hesitate to let us know if there are any additional clarifications that we can offer. Thanks!\"}",
"{\"title\": \"Revision Uploaded\", \"comment\": \"Dear Reviewer 1,\\n\\nWe would like to thank you again for your supportive response. Your comments have helped us improve the quality of the paper significantly.\"}",
"{\"title\": \"Please respond to author comments and discussion\", \"comment\": \"Reviewer 3,\\n\\nThe authors have made substantial changes to their paper in response to your and others' reviews, as well as responded to your comments. Please take some time, in the last week of the discussion period, to consider their response, engage in discussion if needed, and either explain why you stand by your assessment or reconsider your score.\"}",
"{\"title\": \"Summary of Revision\", \"comment\": \"Dear Reviewers and AC,\\n\\nThank you for your constructive comments. We have revised our paper accordingly. The main changes include:\\n\\n1) We have added more baselines, including the original CSGNet, the augmented CSGNet, and Nearest Neighbours (Section 5.2 and Table 2).\\n2) We have analyzed the intermediate representation of the shape generator (Section 6 and Figure 8).\\n3) We have included discussion on the design of the DSL, structure search v.s. amortized inference, and future work (Section 6).\\n4) We have revised the paper to better explain the end-to-end differentiability of our model (Section 4.2 and A.2) and the role of the initial programs (Section 5.2).\\n\\nPlease don\\u2019t hesitate to let us know for any additional feedback. Thanks!\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you very much for the constructive comments.\\n\\n1. Baselines\\nWe agree that it\\u2019s important to add more baselines. In the revision, we will include comparisons with the following three algorithms:\\n1) Nearest neighbors. For a given test shape, we search its nearest neighbor in the training set.\\n2) CSGNet-original (the original model released by the authors of CSGNet)\\n3) CSGNet-augmented (the augmented CSGNet model trained on our dataset with additional shape primitives we introduced).\\n\\nEvaluating on shape segmentation is definitely an interesting direction. We\\u2019ve started working on it. As data processing takes additional time, we\\u2019ll either include the results into the revision by Nov 23 or, if it\\u2019s not done by then, into a later revision.\\n\\n2. Specific Questions\\n(1) Initial programs\\nThe initial synthetic programs provide supervised bootstrapping to initialize the program synthesis network. These programs are essential: we observe that without bootstrapping the model cannot converge to a meaningful point. They, however, can be very simple: e.g., 10 simple table templates (Fig. A1) are sufficient to initialize the model, which later achieves good performance under execution-guided adaptation. \\n\\n(2) Interaction\\nThanks! We agree that the graphs are a more general representation for object parts and can be important next steps. We\\u2019ll include this into discussion as suggested.\\n\\n(3) Shapes vs scenes\\nCompared with scenes, 3D shapes more frequently have program-like regularities, such as repetition and symmetry. An interesting future direction is to explore how programs can be used to explain scenes. Our current model requires a front and up-right orientation.\\n\\n(4) Visualization\\nAs suggested, we will manipulate the representation after the LSTM to see how different dimensions affect the generated shape primitives. \\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for the very constructive comments.\\n\\n1. DSL\\nThe current DSL is designed to represent furnitures. Representing all ShapeNet objects needs a richer set of primitives, e.g., curved cylinders for mug handles. When we design such DSL, the main challenge is on semantics. For humans, some semantics are shared across different object categories, e.g., \\u201ctop\\u201d can be shared by tables and bed, while some are just category-specific, \\u201carmrest\\u201d is mainly for chairs. Following this spirit, we include both category-specific and shared semantics for the instantialization of furnitures. Learning a primitive library from data is a natural research direction, and we are working on it as follow-up.\\n\\n2. Baselines \\nWe agree that it\\u2019s important to add more baselines. In the revision, we will include comparisons with the following three algorithms:\\n1) Nearest neighbors. For a given test shape, we search its nearest neighbor in the training set.\\n2) CSGNet-original (the original model released by the authors of CSGNet)\\n3) CSGNet-augmented (the augmented CSGNet model trained on our dataset with additional shape primitives we introduced).\\n\\nAmortized inference is essential for our task due to its large search space. Our model takes 5 ms to infer a shape program with a Titan X GPU. There are two possible approaches for a structured search over the space of programs, both of which will be too slow for our task:\\n1) Constraint solving: we would have to use an SMT solver. Ellis et al [1] used SMT solvers to infer 2D graphics programs, and takes on the order of 5-20 minutes per program. As 3D shapes have a much larger search space, such an approach would not be able to find a solution in reasonable time.\\n2) Stochastic search: Here the problem would be at least as tough as doing inverse graphics, so we can safely assume that this would work no better than MCMC for inverse graphics. In Picture (Kulkarni et al. [2]), their approach takes minutes for a 2D image with simple contours.\\n\\nWe have contacted the authors of these two papers, who confirmed our estimates of the efficiency of their methods.\\n\\n[1] Ellis, Kevin, Armando Solar-Lezama, and Josh Tenenbaum. \\\"Unsupervised learning by program synthesis.\\\" NIPS 2015.\\n[2] Kulkarni, Tejas D., et al. \\\"Picture: A probabilistic programming language for scene perception.\\\" CVPR 2015.\\n\\n3. Decomposition \\nThanks for the positive comment on the decomposition. The results just correspond to top-1 predictions. \\n\\n4. Interpreter\\nOur semantic operators correspond to simple geometric primitives. Therefore, it\\u2019s quite straightforward to write an interpreter for them. The programs in our DSL are tokenized vectors and can be directly feed into the neural program executor. Adding new semantic operator to the DSL is thus easy. We just need to re-train or finetune the current program executor with the new semantic operator included.\\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\"}",
"{\"title\": \"Our General Response\", \"comment\": \"We thank all reviewers for their comments. In addition to the specific response below, here we summarize the changes planned to be included in the revision.\\n\\nAs suggested by reviewers, we plan to include the following changes in the revision by Nov. 26 (the new official revision deadline, extended from Nov. 23):\\n- We will cite and discuss the suggested related work.\\n- We will discuss more about the design of DSL, structure search v.s. amortized inference, etc.\\n- We will add more baselines, including:\\n 1) Nearest neighbors. For a given test shape, we search its nearest neighbor in the training set.\\n 2) CSGNet-original (the original model released by the authors of CSGNet)\\n 3) CSGNet-augmented (the augmented CSGNet model trained on our dataset with additional shape primitives we introduced).\\n- We will visualize the intermediate representation of neural shape generator (neural program executor).\\n\\nPlease don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\"}",
"{\"title\": \"Response to Reviewer 3 (Part 1)\", \"comment\": \"Thank you for the thoughtful review.\\n\\n1. Baselines and structured search\\nThanks for the suggestion! We agree that it\\u2019s important to add more baselines. We clarify that the current result from Tulsiani is already from a re-trained model. In the revision, we will include additional comparisons with the following three algorithms:\\n1) Nearest neighbors. For a given test shape, we search its nearest neighbor in the training set.\\n2) CSGNet-original (the original model released by the authors of CSGNet)\\n3) CSGNet-augmented (the augmented CSGNet model trained on our dataset with additional shape primitives we introduced).\\n\\nAmortized inference is essential for our task due to its large search space. Our model takes 5 ms to infer a shape program with a Titan X GPU. There are two possible approaches for a structured search over the space of programs, both of which would be too slow for our task:\\n1) Constraint solving: we would have to use an SMT solver. Ellis et al [1] used SMT solvers to infer 2D graphics programs, and takes on the order of 5-20 minutes per program. As 3D shapes have a much larger search space, such an approach would not be able to find a solution in reasonable time.\\n2) Stochastic search: Here the problem would be at least as tough as doing inverse graphics, so we can safely assume that this would work no better than MCMC for inverse graphics. In Picture (Kulkarni et al. [2]), their approach takes minutes for a 2D image with simple contours.\\n\\nWe have contacted the authors of these two papers, who confirmed our estimates of the efficiency of their methods.\\n\\n[1] Ellis, Kevin, Armando Solar-Lezama, and Josh Tenenbaum. \\\"Unsupervised learning by program synthesis.\\\" NIPS 2015.\\n[2] Kulkarni, Tejas D., et al. \\\"Picture: A probabilistic programming language for scene perception.\\\" CVPR 2015.\\n\\n2. DSL\", \"we_agree_that_a_dsl_with_semantics_has_advantages_and_disadvantages\": \"on one hand, it offers semantic correspondence and enables better in-class reconstructions; on the other hand, it may limits the ability to generalize to shapes outside training classes. Our current instantialization focuses on the semantics of furnitures (which can be viewed as a superclass, whose subclasses share similar semantics). Within this superclass, our model generalizes well: trained on chairs and tables, it generalize to new categories such as \\u201cbed\\u201d, \\u2018\\u201cbench\\u201d, \\u201csofa\\u201d and \\u201ccabinet\\u201d (Sect. 5.4). We\\u2019ll include a discussion on the choice of DSL in the revision.\\n\\n3. Neural program executor\\nThanks for the comments on the neural program executor. We\\u2019ll include the following discussion into the revision to improve the clarity of the paper.\\n\\nA) Automatic differentiation\\nOur program executor takes as input a tokenized program and produces a voxelized 3D primitive. Due to the use of high-level program sentences such as `for\\u2019, there is no explicit differentiable formula for such process. We therefore use a neural network to approximate it.\\n\\nB) End-to-end training\\nThe output of the program inference mode is continuous (continuous probability over tokenized programs and continuous parameters). After getting the output of the program inference model, a real execution engine (not the neural executor) contains two steps (1) discretization such output and (2) execute the discretized program to generate the voxel. 
Our neural executor is learned to jointly approximate both steps, so the whole pipeline is differentiable end-to-end. We apply max-pooling over all of the blocks; therefore, the system can handle a variable number of blocks and still be differentiable.\\n\\nC) Reliability\\nWe agree that a typical concern regarding a neural executor is its generalizability to input outside the training distribution. This is also the underlying motivation behind our design: we train a program executor that operates on block-level programs, not full shape programs. While it\\u2019s hard to cover all possible shape programs in training, covering the distribution of possible block-level programs is easy (e.g., tables with many legs), as they have fewer degrees of freedom. In training the executor, we are no longer concerned about the possible combinations of different blocks. Such a decomposition allows the executor to guide the program synthesizer/generator to generalize to new programs that are not in the training distribution: while the synthetic tables only contain 10 different combinations of block programs, the guided adaptation with the extensively learned neural executor allows our model to generalize to other unseen combinations of block programs. In fact, Fig 5 (c),(d) are newly learned templates beyond the pre-trained templates shown in Fig A1.\"}",
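To make the block-level executor and max-pooling argument above concrete, here is a minimal sketch of how a learned executor could be combined across a variable number of blocks while keeping the pipeline differentiable. All module names, dimensions, and the LSTM-based encoder below are illustrative assumptions, not the authors' actual architecture:

```python
import torch
import torch.nn as nn

class BlockExecutor(nn.Module):
    """Hypothetical neural executor: maps one tokenized block program
    (soft token probabilities + continuous parameters) to voxel logits."""
    def __init__(self, token_dim=64, voxel_size=32):
        super().__init__()
        self.rnn = nn.LSTM(token_dim, 256, batch_first=True)
        self.decode = nn.Linear(256, voxel_size ** 3)
        self.voxel_size = voxel_size

    def forward(self, block_tokens):  # (batch, steps, token_dim)
        _, (h, _) = self.rnn(block_tokens)
        v = self.decode(h[-1])  # (batch, voxel_size^3)
        return v.view(-1, self.voxel_size, self.voxel_size, self.voxel_size)

def execute_shape(executor, blocks):
    """Render each block, then max-pool across blocks, so the pipeline
    stays differentiable for any number of blocks."""
    voxels = torch.stack([executor(b) for b in blocks], dim=0)
    return voxels.max(dim=0).values  # union of parts via max-pooling
```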
"{\"title\": \"Response to Reviewer 3 (Part 2)\", \"comment\": \"4. Data-efficiency, initialization, and robustness\\nOur model is data-efficient. It\\u2019s trained on 100K chairs and tables, but without supervision. The only supervision it requires is the small number of shape templates, which are used for initializing the program generator. We agree with the reviewer that such initialization is essential: we observe that without bootstrapping the model cannot converge to a meaningful point. They however can be very simple: e.g., 10 simple table templates (Fig. A1) are sufficient to initialize the model, which later achieves good performance under execution-guided adaptation. Our model is also robust: it works well after pre-training on these 10 simple templates, with and without the semantic meaning of DSL. It also generalizes to shapes from unseen categories, as shown in Sec 5.4. \\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\"}",
"{\"title\": \"Thanks again.\", \"comment\": \"Thanks again, AC. We've updated the title of the comment and added a note at the top.\"}",
"{\"title\": \"I see\", \"comment\": \"I understand that this was in response to an earlier public comment. It would have been better to respond directly to that comment to avoid confusion. While the timestamp should make clear that this is not a response to reviewers, there is still scope for confusion. I recommend editing this comment to add, at the top, that it's a reply to the comment below.\"}",
"{\"title\": \"Thanks for the suggestion\", \"comment\": \"Thank you, AC. We agree and will follow your suggestion. The response here was posted a while ago, and is actually not to the official reviews, but to the earlier public comment. We're still working on the response to official reviews and will post them separately once they are ready.\"}",
"{\"title\": \"Notifications\", \"comment\": \"Just FYI, it is better to separately (or in addition to your current response) reply to the reviewer's reviews, so they get a notification that there is activity on their thread. You may want to leave a quick comment for each reviewer that you've produced a combined response.\"}",
"{\"title\": \"Addresses an important problem; well written; but missing baselines and some discussions\", \"review\": \"This paper presents an approach to infer shape programs given 3D models. The programs include placing and arranging predefined primitives in layouts and can be written as a program over a domain-specific language (DSL).\\n\\nThe architecture consists of a recurrent network that encodes a 3D shape represented as a voxel grid and outputs the instructions using a LSTM decoder. The generation is two-step where the first step predicts a program ID and the second step predicts instructions within the program ID. This aspect wasn't completely clear to me, see questions below. A second module that renders the program to 3D is also implemented as a neural network in order to optimize the model parameter in a end-to-end manner by minimizing a reconstruction loss. \\n\\nThe method is evaluated on 3D shape reconstruction tasks for chairs and tables categories of the ShapeNet dataset. The approach compares favorably to Tulsiani et al., which considers a shape to be composed of a fixed number of cuboids.\\n\\nThe paper is well written and investigates an important problem. But it is hard to tease of the contributions and the relative importance of various steps in the paper:\\n\\n1. Structure search vs. prediction. How does the model perform relative to a search-based approach for program generation. That would be slower but perhaps more accurate. The prediction model can be thought of an amortized inference procedure for search problems. What advantages does the approach offer?\\n\\n2. Choice of the DSL. Compared to CSG modeling instructions of Sharma et al. the proposed DSL is more targeted to the shape categories. While this restricts the space of programs (e.g., no intersection, subtraction operations are used) leading to better generation of chairs and tables, it also limits the range and generalization of the learned models to new categories. Some discussion and comparison with the choice of DSL would be useful. \\n\\n3. Is the neural render necessary -- Wouldn't it be easier to simply use automatic differentiation to compute gradients of the rendering engine? \\n\\n4. It is also not clear to me how having a differentiable renderer allows training in an end-to-end manner since the output space is discrete and variable length. In CSGNet (Sharma et al.) policy-gradient techniques were used to optimize the LSTM parameters. The details of the guided adaptation were unclear to me (Section 4.3).\\n\\n5. Is the neural renderer reliable -- Is is not clear if the neural renderer can provide accurate gradients when the generated programs are incorrect since the model is trained on a clean samples. In practice this means that the encoder has to initialized well. Since the renderer is also learned, would it generalize to new programs within the same DSL but different distribution over primitives -- e.g., a set of tables that have many more legs. Some visualizations of the generated shapes from execution traces could be added, sampling programs from within and outside the program distributions used to train.\\n\\n6. All the above points give an impression that the choice of DSL and careful initialization are important to get the model to work. Some discussion on how robust the model is to these choices would be useful. In other words how meaningful is the generalization from the supervised training set of templates chairs and tables? \\n\\n7. 
Missing baselines: The model is trained on 100,000 chairs and tables with full supervision. What is the performance of a nearest neighbor prediction algorithm? This is an important baseline that is missing. A comparison with a simplified CSGNet with shape primitives and union operations is also important. Tulsiani et al. consider unions but constrain all instances to have the same number of primitives, which can lead to poor reconstruction results. Furthermore, the training sets are likely different, making evaluations unclear. I suggest training the following decoders on the same training set used in this approach: (1) a fixed set of cuboids (e.g., Tulsiani et al.), (2) a recurrent decoder with cuboids, (3) CSGNet (different primitives and operations), (4) a nearest neighbor predictor with the Hamming or Chamfer distance metric.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
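The nearest-neighbor baseline requested in point (4) of the review above is simple to specify. Below is an illustrative sketch over binary voxel grids using the Hamming distance; the function name and grid resolution are assumptions for illustration, not code from the paper:

```python
import numpy as np

def nearest_neighbor_predict(query, train_voxels):
    """Baseline sketch: return the training shape closest to `query`
    under Hamming distance on binary voxel grids (e.g., 32x32x32)."""
    q = query.reshape(1, -1).astype(bool)                         # (1, V)
    X = train_voxels.reshape(len(train_voxels), -1).astype(bool)  # (N, V)
    hamming = np.logical_xor(X, q).sum(axis=1)                    # (N,)
    return train_voxels[np.argmin(hamming)]
```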
"{\"title\": \"Good paper!\", \"review\": \"This paper introduces a high-level semantic description for 3D shapes. The description is given by the so-called ShapeProgram, Each shape program consists of several program statements. A program statement can be either Draw, which describes a shape primitive as well as its geometric and semantic attributes, or For, which contains a sub-program and parameters specifying how the sub-program should be repeatedly executed. The ShapeProgram is connected with an input through two networks, the program generator (encoder) and a neural program executor (decoder). Both encoder/decoder are implemented using LSTM. The key ML contribution is on the decoder, which leverages a parametrization to make the decoder differentiable. The major advantage of the proposed technique is that it does not need to specify the ShapeProgram in advance. In the same spriit of training an auto-encoder. It can be learned in a semi-supervised manner. However, in practice, one has to start with a reasonably good initial program. In the paper, this initial program was learned from synthetic data.\\n\\nThe paper presents many experimental results, including evaluation on synthetic datasets, guided adaptation on ShapeNet, analysis of stability, connectivity measurement, and generalization, and application in shape completion. The presented evaluations, from the perspective of proposed experiments, is satisfactory. \\n\\nOn the downside, this paper does not present any baseline evaluation, party due to the fact that the proposed problem is new. In fact, existing inverse procedural modeling techniques require the users to specify the program. However, the proposed approach could be even more convincing if it evaluates the performance of semantic understanding. For example, would it be possible to evaluate the performance on shape segmentation?\", \"additional_comments\": \"1. How important is the initial program? \\n\\n2. The interactions among shape parts usually form a graph, not necessarily hierarchical. This should be discussed.\\n\\n3. What is the difference between 3D shapes and 3D scenes? Does this approach require a front/up-right orientation?\\n\\n4. It would be interesting to visualize/analyze the intermediate representations of the neural shape generator. Does it encode meaningful distributions among shape parts?\\n\\nOverall, it is a good paper, and I would like to see it at ICLR 2019.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Elegant synthesis approach to a new interesting domain of representing 3D shapes\", \"review\": \"This paper presents a methodology to infer shape programs that can describe 3D objects. The key intuition of the shape programs is to integrate bottom-up low-level feature recognition with symbolic high-level program structure, which allows the shape programs to capture both high-level structure and the low-level geometry of the shapes. The paper proposes a domain-specific language for 3D shapes that consists of \\u201cFor\\u201d loops for capturing high-level regularity, and associates objects with both their geometric and semantic attributes. It then proposes an end-to-end differentiable architecture to learn such 3D programs from shapes using an interesting self-supervised mechanism. The neural program generator proposes a program in the DSL that is executed by a neural program execution module to render the corresponding output shape, which is then compared with the original shape and the difference loss is back-propagated to improve the program distribution. The technique is evaluated on both synthetic and ShapeNet tasks, and leads to significant improvements compared to Tulsiani et al. that embed a prior structure on learning shape representations as a composition of primitive abstractions. In addition, the technique is also paired with MarrNet to allow for a better 3D reconstruction from 2D images.\\n\\nOverall, this paper presents an elegant idea to describe 3D shapes as a DSL program that captures both geometric and spatial abstractions, and at the same time captures regularities using loops. CSGNet [Sharma et al. 2018] also uses programs to describe 2D and 3D shapes, but the DSL used here is richer as it captures more high-level regularities using loops and also semantic relationships such as top, support etc. The idea of training a neural program executor and using it for self-supervised training is quite elegant. I also liked the idea of guided adaption to make the program generator generalize beyond the synthetic template programs. Finally, the results show impressive improvements and generalization capability of the model.\\n\\nCan the authors comment on some notion of completeness of the proposed DSL? In other words, is this the only set of operators, shapes, and semantics needed to represent all of ShapeNet objects? Also, it might be interesting to comment more on how this particular DSL was derived. Some of the semantics operator such as \\u201cSupport\\u201d, \\u201cLocker\\u201d, etc. look overly specific to chair and tables. Is there a way to possibly learn such abstractions automatically?\\n\\nWhat is the total search space of programs in this DSL? How would a naive random search perform in this synthesis task?\\n\\nI also particularly liked the decomposition of programs into draw and compound statements, and the corresponding program generator decomposition into 2 steps BlockLSTM and StepLSTM. At inference time, does the model use some form of beam search to sample block programs or are the results corresponding to top-1 prediction?\\n\\nWould it be possible to compare the results to the technique presented in CSGNet [Sharma et al. 2018]? 
There are some key differences in terms of using lower-level DSL primitives and using REINFORCE for training the program generator, but it would be good to measure how much having higher-level primitives improves the results.\\n\\nI presume the neural program executor module was trained using a manually-written shape program interpreter. How difficult is it to write such an interpreter? Also, how easy/difficult is it to extend the DSL with a new semantic operator and then write the corresponding interpreter extension?\\n\\nMinor typos:\\nPage 3: consists a variable \\u2192 consists of a variable\\nPage 5: We executes \\u2192 We execute\\nPage 6: synthetica dataset \\u2192 synthetic dataset\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Our response\", \"comment\": \"Thank you for the feedback. Please see our response above.\"}",
"{\"title\": \"Our response to the earlier reader's comment and some general thoughts\", \"comment\": \"[Note: This is a reply to the reader's comment below.] We thank the anonymous reader for the feedback, which actually revealed the gap between the views of researchers from different communities. Here we take the chance to reply to these comments in specific, but also present our observation of the gap in general.\\n\\nMost importantly, the paper is about introducing a new 3D shape representation---shape programs---not about a new model for program synthesis or execution. Modeling 3D shapes is a classic and central problem in computer graphics and computational geometry, where the community have been working on it for decades, introducing various representations such as point clouds, voxels, splines, meshes, and primitives. However, as we emphasized in the abstract and intro, these representations do not capture high-level shape regularities such as repetition and symmetry explicitly, while human perception rely heavily on these cues.\\n\\nThe key contribution of our paper is therefore on proposing shape programs as a new 3D shape representation, along with a practical framework for learning them. The main challenge of introducing a new shape representation is the lack of annotated data. On 2D hand drawings, Ellis et al. solved the problem by having neural nets discover low-level traces for an off-the-shelf program synthesizer, but their approach failed to discover 3D shape programs due to the much larger search space. We instead propose to learn a simple, fast, approximate neural program executor and use it to guide the training of the neural program synthesizer. Having the neural executor in the loop allows fast adaptation to shapes outside training distribution. This includes general shapes without program annotations, as well as shapes from a different category.\\n\\nWe showed that the new shape program representation and the learning paradigm work together to reconstruct shapes well, and capture important shape properties such as stability better than those using representations like voxels or primitives. The specific network architectures used for inference and execution are components of the framework, and can be extended or replaced with more advanced ones without affecting the main message of the paper.\\n\\n3D shapes are complex; modeling 3D shapes is challenging. Developing an approach that works with the range of 3D object shapes we address here is nontrivial. The reader\\u2019s suggestion that 2D methods \\u2018can be easily transferable to 3D\\u2019 is unjustified and does not fit with the reality in computer vision and computer graphics, where many researchers have spent their careers working on these problems. In particular, the furniture object classes we study are among the largest categories in the main public 3D shape repository, ShapeNet, and have been very widely studied in the computer vision and graphics community due to their complexity (Parsing IKEA Objects, ICCV\\u201913; Joint Embeddings of Shapes and Images, ACM TOG\\u201915; and many others). By computer vision community standards, we consider a range of complex chairs and tables (e.g. 
Fig A1(b)), and we have also included results on generalizing to new shape categories such as beds, benches, cabinets, and sofas.\\n\\nRegarding comparison with alternative methods, we have focused on comparisons with state-of-the-art 3D shape reconstruction methods, because our goal has been to show the value of learning and inferring shape programs for 3D shape perception and understanding. We thus compared with state-of-the-art methods of Tulsiani et al (CVPR\\u201917) and Wu et al (NIPS\\u201917), and we have evaluated our model on the latest most challenging benchmark of real world images and shapes (Sun et al, CVPR\\u201918). Building models that work well on real, in-the-wild images is challenging, and its significance should not be undervalued. We will also include a comparison to CSGNet (Sharma et al, CVPR\\u201918) in the revision.\\n\\nWe also recognize the point that it would be valuable to compare with other general-purpose neural program learning approaches, although as discussed above, it is unlikely that any general approach could be applied simply out of the box, without some adaptation to the specifics of 3D shapes. In our revision, we will highlight ways that our particular approach to representing and learning shape programs is well suited to the challenges of 3D shape modeling, relative to previous methods. In particular, in addition to the idea of \\u2018execution-guided learning\\u2019, we want a recognition model that exploits the fact that (a) objects are made of parts, and (b) parts have program-like regularities in their geometry and their relative arrangement. Before the submission deadline, we had contacted the authors of neural program interpreters a few times for their implementation, but did not receive a reply. If there is any specific algorithm that reviewers think we should compare with, especially if code is available, please let us know and we will try to include a comparison in the revision.\"}",
"{\"comment\": \"The problem studied (3d shape programs) is interesting. However, I think the work presented in this paper may be incremental and lack of theoretical justification and comparison with related works.\\n\\nThere are many existing works out there for solving neural program inference/execution problems (e.g. for 2d shapes, hand-drawings, function execution and inference). They have achieved good performance in their respective experiments, e.g. neural program interpreter (Reed et al). Why is your proposed method better than existing models? There is little or no justification or insight in the paper. The use of LSTM and 3D convolution is fairly standard, so it is fair to say no new paradigm or framework is provided. With the standard architectures/components, as a reader, I would really like to see \\n\\n1) why is your proposed method better than other related models? Can you provide theoretical justifications and insights? I understand different papers may target slightly different applications, however, the general concept and framework should be transferable to slightly different tasks. (e.g. 2d shape programs should be easily transferable to 3d) Therefore, as a reader, I think theoretical motivations are very important here. Otherwise, it is not clear how difficult the problem is, and what models are really effective for solving it. \\n\\n2) Lack of experimental comparison. Only performance of the proposed method is shown. Again, can you empirically show that other related models would not perform well in this setting? \\n\\n\\n3) The shapes in the experiments are somewhat too simple. Only shapes like tables and chairs are considered, and these shapes tend to have fairly simple and clean structures. Can you show that the proposed method can generalize to more complex and diverse shapes?\", \"title\": \"Lack of justification/motivation, no comparison with related works\"}"
]
} |
|
rJe4ShAcF7 | Music Transformer: Generating Music with Long-Term Structure | [
"Cheng-Zhi Anna Huang",
"Ashish Vaswani",
"Jakob Uszkoreit",
"Ian Simon",
"Curtis Hawthorne",
"Noam Shazeer",
"Andrew M. Dai",
"Matthew D. Hoffman",
"Monica Dinculescu",
"Douglas Eck"
] | Music relies heavily on repetition to build structure and meaning. Self-reference occurs on multiple timescales, from motifs to phrases to reusing of entire sections of music, such as in pieces with ABA structure. The Transformer (Vaswani et al., 2017), a sequence model based on self-attention, has achieved compelling results in many generation tasks that require maintaining long-range coherence. This suggests that self-attention might also be well-suited to modeling music. In musical composition and performance, however, relative timing is critically important. Existing approaches for representing relative positional information in the Transformer modulate attention based on pairwise distance (Shaw et al., 2018). This is impractical for long sequences such as musical compositions since their memory complexity is quadratic in the sequence length. We propose an algorithm that reduces the intermediate memory requirements to linear in the sequence length. This enables us to demonstrate that a Transformer with our modified relative attention mechanism can generate minute-long (thousands of steps) compositions with compelling structure, generate continuations that coherently elaborate on a given motif, and in a seq2seq setup generate accompaniments conditioned on melodies. We evaluate the Transformer with our relative attention mechanism on two datasets, JSB Chorales and Piano-e-competition, and obtain state-of-the-art results on the latter. | [
"music generation"
] | https://openreview.net/pdf?id=rJe4ShAcF7 | https://openreview.net/forum?id=rJe4ShAcF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1lO6ZMexN",
"rklASVMAJE",
"BJgstbaU1V",
"SklsDZT8JV",
"B1xRERh814",
"S1lzsOPLyN",
"H1gc4r04yV",
"HygT5k5hCQ",
"HJx0wWs9Rm",
"BkxdMZi90Q",
"r1gelbs5Cm",
"BJlPfxo9AQ",
"rylg1ejqAm",
"BJxpLWBLp7",
"rkeg8XwNaX",
"rylqizmohQ",
"rJlnOWEc2X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544720832367,
1544590406212,
1544110467371,
1544110435478,
1544109622130,
1544087706303,
1543984434181,
1543442325037,
1543315813606,
1543315727539,
1543315687783,
1543315471050,
1543315416238,
1541980501318,
1541858120511,
1541251746372,
1541190004500
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1531/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1531/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1531/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1531/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1531/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1531/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1531/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1531/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1531/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"1. Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.\\n\\n- improvements to a transformer model originally designed for machine translation\\n- application of this model to a different task: music generation\\n- compelling generated samples and user study.\\n\\n2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.\\n\\n- lack of clarity at times (much improved in the revised version)\\n\\n3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it\\u2019s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.\\n\\nThe main contention was novelty. Some reviewers felt that adapting an existing transformer model to music generation and achieving SOTA results and minute-long music sequences was not sufficient novelty. The final decision aligns with the reviewers who felt that the novelty was sufficient.\\n\\n4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.\\n\\nA consensus was not reached. The final decision is aligned with the positive reviews for the reason mentioned above.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"successful adaptation of transformer networks to generating long coherent music sequences\"}",
"{\"title\": \"First application of transformers to music generation\", \"comment\": \"Apologies for my mistake about prior work on applying transformer networks to music: while reading other papers on music generation, I had encountered a few citations of a 2018 paper that directly applied transformer networks to music generation. After going back and inspecting, I found that the paper being cited was in fact the arxiv version of your paper, effectively blowing my mind!\\n\\nThis changes my opinion. Originally I felt that even as an application paper, the technical novelty was thin since transformers had been applied to music in the past. But given that these results are in fact the first on applying transformers to music, I think they do make sense at ICLR. I have changed my rating accordingly.\\n\\nFurther, thank you for your diplomatic response!\"}",
"{\"title\": \"We have included additional statistical test showing that our NLL improvements are statistically significant.\", \"comment\": \"We had previously answered reviewer 1 on the statistical significance of our NLL improvements, we wanted to point you to their thread in case you had similar questions. Our analysis showed that both NLL improvements on JSB Chorales and Piano-e-Competition are statistically significant.\\n\\nWith our previous and current comments, we hope we have addressed your concerns. Could you give an updated impression of the paper?\"}",
"{\"title\": \"We have included additional statistical test showing that our NLL improvements are statistically significant.\", \"comment\": \"We had previously answered reviewer 1 on the statistical significance of our NLL improvements, we wanted to point you to their thread in case you had similar questions. Our analysis showed that both NLL improvements on JSB Chorales and Piano-e-Competition are statistically significant.\\n\\nWith our previous and current comments, we hope we have addressed your concerns. Could you give an updated impression of the paper?\"}",
"{\"title\": \"Our improvements in NLL are statistically significant. Please also consider the impact of this work on modeling long sequences and long-term structure, which are important problems in ML.\", \"comment\": \"Thank you for your response. Another reviewer also recently asked about significance test for results in Tables 2 and 3. Since we responded under their comment, we are also copying it here below. The NLL improvements on both the JSB Chorales and Piano-e-Competition are statistically significant.\\n\\nWe also hope you consider the impact of this work on modeling long sequences, allowing us to move from studying sequences of length 650 to 3500. \\n\\nFurthermore, music is an important domain for studying long-term dependencies, as it involves repetition and self-reference on multiple timescales. Generative modeling in music is a complex real-world problem that extends traditional synthetic tasks such as copying memory (Hochreiter and Schmidhuber 1997), along with other canonical tasks in text, images, speech and video. As music possesses a different kind of of long-term structure, as we move forward in research on long-term dependencies, studying a wider range of tasks will allow us to develop better techniques. \\n\\n\\nBelow we are quoting our earlier responses on our statistical test on NLL improvements.\\n\\\"\\\"\\\"\\nThe seemingly small numbers in the improvement is because the unit of evaluation is small, being an attribute (such as loudness, pitch etc) of a note, analogous to sub-pixel level autoregressive evaluation. For Table 3 (Piano-e-Competition), our sequences are of length 2048, a 0.023 nats improvement on the sub-note level is a 47.10 nats on the sequence level. Similarly for Table 2 (JSB Chorales), the unit is on a discretized grid of 16th notes and sequences are of length 1024. An improvement of 0.03 nats per token corresponds to a 30.72 nats improvement per sequence.\\n\\nWe show below a statistical analysis of the results, and we find that the perplexity improvements on both datasets, JSB Chorales and Piano-e-Competition, to be statistically significant. \\n\\nFirst, for Piano-e-Competition the test set results for the last four rows of Table 3 are Transformer baseline 1.852 nats/token, local attention 1.840, relative attention (ours) 1.803, relative local attention (ours) 1.817.\\n\\nTo compare a pair of models, we perform the post-hoc Wilcoxon signed-rank test, which allows us to determine if there is a statistical difference in the negative loglikelihoods (NLL) under the models. The Wilcoxon signed-rank test is a standard non-parametric test that tests for paired differences. In our case, each sequence in the test set is evaluated by a pair of models, and within-pair differences in NLL are calculated to perform the test. \\n\\nFor the Piano-e-Competition dataset, we report the pairs of model that pertain to our model improvements to strongest baseline, where N = 125, the number of sequences in the test set. Between local attention (with mean = 1.840 nats/token) and relative local attention (ours) (with mean = 1.817), p-value = 1.69e-19 < 0.01. \\nBetween local attention (with mean=1.840 nats/token) and relative attention (ours) (with mean = 1.803), p-value = 4.00e-21 < 0.01. \\nThe extreme low p-values reflect that the relative improvement between the models are large. 
For sanity check, we see that between relative attention (ours) (with mean = 1.803) and relative_local (ours) (with mean = 1.817), which have much closer NLL (improvements under the test set is 0.014 nats/token, the validation set is 0.005 based on Table 3), the difference is still statistically significant, but with a much larger p-value, where p-value=6.88e-05 < 0.01.\\n\\nSimilarly for the JSB Chorale dataset, we report the test set results for the Transformer baseline and Transformer with relative attention, which are 0.407 nats/token and 0.357 respectively, corresponding to the top rows of the bottom two row groups in Table 2. We were sadly not able to rerun our best model for this dataset (bottom row of Table 2) due to changing code bases since nearly a year ago, and will fix that for the next version. For now, we show the statistical test on the Transformer baseline and Transformer with relative attention aforementioned, which shows a test set improvement of 0.05. The Wilcoxon signed-rank test with N = 77, the number of sequences in the test set, on the paired differences give a p-value of 2.46e-11 < 0.01, showing that the difference between Transformer baseline and Transformer with relative attention is statistically significant.\\n\\\"\\\"\\\"\"}",
"{\"title\": \"Not enough contribution for a machine learning conference like ICLR.\", \"comment\": \"After reading the revised paper and the responses, I agree that the improvement in the Piano-e-Competition set over the presented baselines also constitutes a contribution of the paper in the music generation domain. The presented samples are compelling, and show that the model actually can produce longer sequences than the compared baselines.\\n\\nWhile the human evaluation clearly show preference towards the proposed model in comparison with the baselines (two transformer, and one LSTM-based), the same improvement is not so clear in the automatic evaluation with NLL. The authors could include significance test for their results in tables 2 and 3.\\n\\nConsidering the above comments, I still have reservations on the novelty and contributions of the paper for a conference like ICLR. I would surely accept this paper in a conference of computational music domain or to ICLR workshop.\"}",
"{\"title\": \"The perplexity improvements are statistically significant.\", \"comment\": \"Thank you for looking over our revised draft. We\\u2019re glad the clarifications were helpful.\\n\\nThe seemingly small numbers in the improvement is because the unit of evaluation is small, being an attribute (such as loudness, pitch etc) of a note, analogous to sub-pixel level autoregressive evaluation. For Table 3 (Piano-e-Competition), our sequences are of length 2048, a 0.023 nats improvement on the sub-note level is a 47.10 nats on the sequence level. Similarly for Table 2 (JSB Chorales), the unit is on a discretized grid of 16th notes and sequences are of length 1024. An improvement of 0.03 nats per token corresponds to a 30.72 nats improvement per sequence.\\n\\nWe show below a statistical analysis of the results, and we find that the perplexity improvements on both datasets, JSB Chorales and Piano-e-Competition, to be statistically significant. \\n\\nFirst, for Piano-e-Competition the test set results for the last four rows of Table 3 are Transformer baseline 1.852 nats/token, local attention 1.840, relative attention (ours) 1.803, relative local attention (ours) 1.817.\\n\\nTo compare a pair of models, we perform the post-hoc Wilcoxon signed-rank test, which allows us to determine if there is a statistical difference in the negative loglikelihoods (NLL) under the models. The Wilcoxon signed-rank test is a standard non-parametric test that tests for paired differences. In our case, each sequence in the test set is evaluated by a pair of models, and within-pair differences in NLL are calculated to perform the test. \\n\\nFor the Piano-e-Competition dataset, we report the pairs of model that pertain to our model improvements to strongest baseline, where N = 125, the number of sequences in the test set. Between local attention (with mean = 1.840 nats/token) and relative local attention (ours) (with mean = 1.817), p-value = 1.69e-19 < 0.01. \\nBetween local attention (with mean=1.840 nats/token) and relative attention (ours) (with mean = 1.803), p-value = 4.00e-21 < 0.01. \\nThe extreme low p-values reflect that the relative improvement between the models are large. For sanity check, we see that between relative attention (ours) (with mean = 1.803) and relative_local (ours) (with mean = 1.817), which have much closer NLL (improvements under the test set is 0.014 nats/token, the validation set is 0.005 based on Table 3), the difference is still statistically significant, but with a much larger p-value, where p-value=6.88e-05 < 0.01.\\n\\nSimilarly for the JSB Chorale dataset, we report the test set results for the Transformer baseline and Transformer with relative attention, which are 0.407 nats/token and 0.357 respectively, corresponding to the top rows of the bottom two row groups in Table 2. We were sadly not able to rerun our best model for this dataset (bottom row of Table 2) due to changing code bases since nearly a year ago, and will fix that for the next version. For now, we show the statistical test on the Transformer baseline and Transformer with relative attention aforementioned, which shows a test set improvement of 0.05. The Wilcoxon signed-rank test with N = 77, the number of sequences in the test set, on the paired differences give a p-value of 2.46e-11 < 0.01, showing that the difference between Transformer baseline and Transformer with relative attention is statistically significant.\"}",
"{\"title\": \"Clarifications were much appreciated\", \"comment\": \"I've looked over the revised draft, and it is definitely easier to understand now. Figures 1 and 2 are much clearer now. Table 1 also puts the algorithmic contributions more into perspective here, as the asymptotic notation for memory complexity does tend to sweep quite a bit under the rug.\\n\\nThe examples are indeed compelling, and do demonstrate a qualitative improvement over the prior work (at least in the cases included here).\\n\\nI agree that perplexity and listening tests are different kinds of evaluation, and that one shoudn't necessarily expect monotonic agreement between the two. However, since perplexity scores are not directly interpretable, and the listening test results can help to anchor the perplexity scores. Still, I find it difficult to parse how meaningful a difference of 0.023 nats is (for example) in table 3, or whether the difference of 0.03 in table 2 is indeed \\\"drastic\\\".\"}",
"{\"title\": \"To all reviewers: Please listen to a new Jazz sample and revisit the previous samples. We have also revised the title of our paper to be \\\"Music Transformer: Generating music with long-term structure\\\" to highlight our domain contributions.\", \"comment\": \"Many of the reviewers seemed to think that our main contribution was a more memory-efficient implementation of relative attention. We want to emphasize that this is primarily an application paper; our main contribution is in advancing the state-of-the-art in generative modeling of music, specifically sequences that capture at once a musical composition and an expressive performance of that composition on the piano. This required getting some details right with the Transformer architecture, which motivated us to explore a more memory-efficient formulation of relative attention. According to the ICLR call for papers, the conference explicitly cites \\u201capplications in vision, audio, speech, natural language processing, robotics, neuroscience, computational biology, or any other field\\u201d as a relevant topic, so we feel that ICLR is an appropriate venue for this work.\\n\\nIn addition to looking at perplexity and human eval scores, we urge you to put on headphones and listen to the samples. We realize the genre of virtuosic classical piano may not be the easiest to resonate with. We have added a Jazz sample to https://storage.googleapis.com/music-transformer/index.html trained on additional performances. We have also included the samples posted by prior work (Oore et al., 2018) for direct comparison. We believe our samples represent a significant advance in quality especially with respect to long-term coherence.\"}",
"{\"title\": \"Our work is the first application of Transformers to music generation, with significant advancement to state-of-the-art, also works well on small datasets\", \"comment\": \"Thank you for reviewing our paper.\\n\\nAs far as we know, our work is the first to apply the Transformer architecture to music, and to model complex music sequences at lengths much longer then previously attempted. Do you have a reference to the previous application of Transformer to music? \\n\\nBefore our work, LSTMs were used at time scales of 15s (~ 500 tokens) on the Piano-e-Competition dataset (Oore et al., 2018). Our work shows that Transformers not only model these complex expressive piano performances better, and can also do this at scales of 60s (~2000 tokens) with remarkable long-term coherence. We invite you to listen to our samples at https://storage.googleapis.com/music-transformer/index.html to see if you agree. We have also included samples from prior work (Oore et al., 2018) for direct comparison. Our use of Transformer for music is not only novel but demonstrates a significant advance in the state-of-the-art for music generation. To achieve this, we had to develop a new algorithm that significantly lowers the space complexity of previous work on relative attention (from x to y) while keeping the computational complexity the same.\\n\\nIt seems the review was cut off at the end, hinting at Transformers requiring larger datasets.\\n\\nOur work also shows that with relative attention, Transformers can perform extremely well on small datasets such as JSB Chorales, which consists of only 382 pieces, a total of 370 thousand tokens at the 16th note resolution, with an average length of about a thousand per piece. Without relative attention, the Transformer did not have the right inductive bias to capture longer term structure even though it has the capacity to, and without our work one may suspect that Transformers do not work well for music (which we suspected too initially!).\"}",
"{\"title\": \"Our main contribution is in music generation, and we have added a table in the paper to provide a deeper analysis of the memory footprint.\", \"comment\": \"Thank you for your detailed review.\\n\\nThe main contribution of our paper is in generating music with long-term structure, at the timescale of 60s. We have modified our title and also contribution section in the paper to highlight this point. Prior work only aimed to generate 15s of expressive piano music, which is ~500 tokens (Oore et al., 2018). Even with a large GPU with 16GB of memory, the relative attention formulation by Shaw et al. (2018) can only fit ~650 tokens. With our memory-efficient formulation, we can fit 5x (~3500 tokens), and hence allowing us to experiment with generating minute-long music (~ 2000 token per minute). \\n\\nWe have added a table in the paper to show the memory requirements for the maximal length under each of the formulations, and we also summarize it here. Assuming hidden size D=512, the memory requirements at 650 tokens is 865 MB for prior method of complexity O(L^2D) and 1.3MB for ours with complexity O(LD). At 3500 tokens, prior is 25GB, ours is 7.2MB. \\n\\nEven though the time complexity of both methods are the same O(L^2D), in practice because prior work requires more memory, at length 650 our method is 6x faster. As memory grows quadratically with length, for longer sequences such as 3500 the difference would be even greater if the comparison was possible. \\n\\nWe titled the paper music transformer because we are the first to apply transformers to music and with our reformulation we were able to use it to significantly advance the state-of-the-art in generating long-scale music. We also casted music harmonization as a seq2seq task, leveraging the encoder-decoder structure of transformers. You can hear samples here: https://storage.googleapis.com/music-transformer/index.html. We agree our contribution can also be useful for other domains that have long sequences and carry long-range dependencies.\", \"clarifications_on_the_listening_test\": \"We generated 10 samples for each model, and each model was compared to 3 other models, hence each model was involved in 30 pairwise comparisons. In other words, since there are 4 models, hence 6 pairs, each pair of models comparing their 10 samples, yielding 60 pairwise comparisons. Each was rated by 3 different participants, resulting in a total of 180 pairwise comparisons. In the appendix, we have added the win, tie, loss counts for all 6 pairs, and the details of the statistical tests. \\n\\nIn the paper, whenever we refer to \\u201crelative transformer\\u201d we have added clarification whether it is our formulation or Shaw et al.\\u2019s (2018). Thank you for catching our typos and suggesting better ways of formatting. We have updated them accordingly.\"}",
"{\"title\": \"We introduce novel use of the Transformer for several musical tasks, and provide state-of-the-art empirical results.\", \"comment\": \"Thank you for your review.\\n\\nWe would like to point out that the major contribution of this paper is empirical, where we are the first to successfully adapt a self-attention based model to generate minute-long (~2000 tokens) sequences of music that sound realistic to human listeners. This is a very difficult problem because of the complicated grammar of music. In particular, we are modeling both music composition and the performance of it at once, which involves modeling relationships simultaneously at timescales ranging 4 orders of magnitude, from 10 milliseconds to 100s. Before our work, the state-of-the-art was to use LSTMs to generate 15s of music (Oore et al., 2018). With our results, we hope that the music community will adopt relative self-attention for modeling music. \\n\\nWe have shown novel use of the Transformer on a range of musical tasks, which yielded novel findings that are useful beyond the music domain. For example, we see for conditioned generation, when given an initial motif, relative transformer is able to reuse it in a coherent fashion to generate continuations. This was not possible with LSTMs because it favours recency and soon forgets the initial motifs. In contrast, transformers can directly look back to \\u201ccopy\\u201d past motifs, however without relative attention the inductive bias was not strong enough for this to happen over longer timescales. Furthermore, relative transformer was able to generalize beyond the lengths it was trained on. This was not possible for baseline Transformer. Both phenomena are shown in Figure 4 and can also be heard clearly in the accompanying audio clips at https://storage.googleapis.com/music-transformer/index.html. \\n\\nWe also show a novel formulation of the harmonization task, given a melody generate an accompaniment, as a seq2seq problem. The benefit is that even though the accompaniment can only see its own past, it always has full access to the entire melody, allowing it to attend to and account for the future directly. From the link above, you can hear the model\\u2019s accompaniment to \\u201ctwinkle twinkle little star\\u201d. The accompanying styles of piano playing differs across samples, yet maintains consistency within. \\n\\nIn additional to our domain contributions, we hope that the reviewer will find our algorithmic contributions that reduce the memory footprint from L^2D (8.5 GB per attention layer) to LD (4.2 MB per attention layer) to be useful. This is critical for applying relative transformer to other tasks with long sequences, such as autoregressive models of images that use self-attention (Parmal et al., 2018) and for modeling long sequences in dialogue and summarization.\"}",
"{\"title\": \"We revised the prose on describing our skewing procedure (sections 3.4 and 3.5) and made new figures for the paper to make the explanations more intuitive.\", \"comment\": \"Thank you for your review and suggestions.\\n\\nWe first clarify that we are not reducing the memory requirements of the Transformer architecture from Vaswani et al. (2017), which is O(L^2). Relative attention as proposed by Shaw et al. (2018) involves instantiating an additional intermediate relative embedding that requires O(DL^2). With our new formulation, we reduce this component to O(DL). The overall relative attention memory complexity is still O(L^2), but with the added benefit of incorporating relational information which improves perplexity and generation. \\n\\nPerplexity and listening tests evaluate different objectives. We do not know if between different model classes, perplexity and listening evaluations correlate monotonically. However, when comparing baseline Transformer and our relative Transformer, the latter performs better both in perplexity and listening tests. Figure 4 shows that samples from relative attention exhibit a lot more structure and better generalization (i.e. maintaining coherence over twice the length it was trained on), while both is not true for baseline Transformer. One can also clearly hear the difference from the music samples that was included in the link below:\", \"https\": \"//storage.googleapis.com/music-transformer/index.html\\n\\nFrom the link above, you can also hear and contrast unconditioned samples from our relative Transformer and samples taken from prior work (Oore et al., 2018). We believe you will hear there is a difference. Before our work, LSTMs were used at time scales of 15s (~ 500 tokens) on the Piano-e-Competition dataset. Our work shows that Transformers not only model these complex expressive piano performances better, and can also do this at scales of 60s (~2000 tokens) with remarkable long-term coherence. \\n\\nWe have revised sections 3.4 and 3.5 and made new figures (with axes labels) to make the explanations more intuitive. We agree the previous Figures 1 and 2 were harder to read, even though they did bear the same coloring scheme, with gray indicating positions that were either masked out or padded. In the new figures we added additional color coding for the different relative distances to make it easier to see the correspondances. We also added an equation to describe how the array indices map before and after skewing. Before, we have an absolute-by-relative (i_q, r) indexed matrix, and after skewing we have an absolute-by-absolute (i_q, j_k) indexed matrix, where j_k = r - (L-1) + i_q.\"}",
"{\"title\": \"Implementation trick to reduce memory footprint of transformer / experiments on music generation\", \"review\": \"This paper presents an implementation trick to reduce the memory footprint of relative attention within a transformer network. Specifically, the paper points out redudant computation and storage in the traditional implementation and re-orders matrix operations and indexing schemes to optimize. As an appllication, the paper applies the new implementation to music modeling and generation. By reducing the memory footprint, the paper is able to train the transformer with relative attention on longer musical sequences and larger corpora. The experimental results are compelling -- the transformer with relative attention outperforms baselines in terms of perplexity on development data (though test performance is not reported) and by manual evaluation in a user study.\\n\\nOverall, I'm uncomfortable accepting this paper in its current form because I'm not sure it constitutes a large enough unit of novel work. The novelty here, as far as I can tell, is essentially an implementation trick rather than an algorithm or model. Transformer networks have been applied to music in past work -- the only difference here is that because of the superior implementation the model can be trained from larger musical sequences. All that said, I do think the proposed implementation is useful and that the experimental results are compelling. Clearly, when trained from sufficient data, transformer networks have something to offer that is different from past techniques.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Cool idea, memory usage could be analysed deeper\", \"review\": \"The authors address the problem raised by applying a fully attentional network (FAN) to model music.\\nThey argue clearly for the need of relational positional embedding in that problem (instead of absolute positional as in vanilla FAN), and highlight the quadratic memory footprint of the current solution (Shaw et al. 2018).\\n\\nThe main contribution of the paper is a solution to this, consisting in a smart idea (sect 3.4.1 and 3.4.2) which allows them to compute relative embeddings without quadratic overhead.\\nThe model performs indeed better than Shaw et al.'s on the single data-set they compared both. On the other one, the argument is that Shaw et al. 2018 cannot be applied because the sequences are too long.\", \"i_have_two_concerns_with_the_paper\": \"1/ it is very hard to read at times. In particular, the main contribution took me several passes the understand. I list below a few recommendations for improvement \\n\\t2/ the main argument is that the model requires less memory and is faster. However, the only empirical evidence in that direction is given in the introduction (Sect 1.1., second paragraph).\", \"the_following_points_remain_unclear_to_me\": \"a) why can't the Relative Transformer be applied to Piano-e composition. What is the maximal length that is possible?\\n\\t\\t\\tb) how much faster / less memory is the relative music transformers? The only data-point is in Sect 1.1., which seems indeed impressive (but then one wonders why this is not exploited further). A deeper analysis of the comparative memory footprint would greatly strengthen the paper in my opinion.\\n\\t\\t\\t\\nWhy \\\"music\\\" relative transformers? Nothing in the model restrict it to that use case. The use of FAN over audio has been explored with limited success, one of the reasons being that - similarly to this use-case here - audio sequences tend to be longer than text.\", \"minor_comments\": [\"abstract, ln9: there seems to be a verb missing\", \"p1,ln-2: \\\"dramatic\\\" improvements seems to be exaggerated\", \"p2,ln11: \\\"too long\\\". too long for what?\", \"p4,ln15: (Table 1). is one sentence by itself. Also, a clear explanation of that table is missing\", \"p5,item 2: an explanation in formula would be helpful for those not familiar with reshaping\", \"Fig3: it seems very anecdotical. Similar green bloxes might be placed on the left plot\", \"sect4.1.1,ln3. that sentence does not parse\", \"Table 2: what is cpsi?\", \"$l$ is nicer formatted as $\\\\ell$\", \"care should be taken to render the Figures more readable (notably the quality of Fig 4, and labels of Fig 7)\", \"footnotes in Figures are not displayed (Table 2 and 4)\", \"the description of the human evaluation leaves some open questions. I could not come up with 180 ratings (shouldn't it be 180 * 3 ratings?). Also, at least the values of Relative Transformer vs other 3 models should be shown (or all 6 comparisons). Here you call \\\"relative transformer\\\" your model, previously you used that term to refer to (Shaw et al. 2018).\", \"when reporting statistical significance, there are some omissions which should be clarified.\", \"(Shaw et al. 2018) has been published at NAACL. For such an important citation, you should update the reference from the arxiv version.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An application of transformer to music generation\", \"review\": \"In this paper the authors propose an algorithm to reduce the memory\\nrequirements for calculating relative position vectors in a\\nself-attention (transformer) network, based on the work of [Vaswani et\\nal., 2017; Shaw et al. 2018]. The authors applied their model to a music\\ngeneration task, and evaluated it on two datasets (J.S. Bach Chorales\\nand Piano-e-Competition). Their model obtained improvements over the\\nstate-of-the-art in the Piano-e-Competition set in terms of\\nlog-likelihoods. Additionally, they performed human evaluation on the\\nPiano-e-Competition set showing preference of the participants for their\\nmethod over the state-of-the-art.\\n\\nThe application of the transformer network seems suitable for the task,\\nand the authors fairly justify their motivations and choices. They show\\nimprovements over the-state-of-the-art for one data-set and explained\\ntheir results. They also show an interesting application of\\nsequence-to-sequence models for generating complete pieces of music\\nbased on a given melody.\\n\\nMy main concern is the novelty of the paper. The authors use the model\\nproposed by [Shaw et al. 2018] with an additional modification to manage\\nvery long sequences proposed by [Liu et al., 2018; Parmar et al., 2018],\\n(chunking the input sequences in non-overlaping blocks and calculating\\nattention only on the current and the previous blocks). Their main\\ncontribution is to reduce the memory requirement for matrix operations\\nfor calculating the relative position vectors of the self-attention\\nfunction, which was sub-optimal in [Shaw et al. 2018]. The memory\\nreduction is from O(L^2D+L^2) to O(LD+L^2). I would qualify this as an\\noptimization in the implementation of the existing method rather than a\\nnew approach.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Improved efficiency of transformer on long sequences, but a bit difficult to follow\", \"review\": \"This paper describes a method for improving the (sequence-length) scalability of the Transformer architecture, with applications to modeling long-range interactions in musical sequences. The proposed improvement is applied to both global and local relative attention formulations of self-attention, and consists of a clever re-use (and re-shaping) of intermediate calculations. The result shaves a factor of L (sequence length) from the (relative) memory consumption, facilitating efficient training of long sequences. The method is evaluated on MIDI(-like) data of Bach chorales and piano performances, and compares favorably to prior work in terms of perplexity and a human listener evaluation.\\n\\nThe results in this paper seem promising, though difficult to interpret. The quantitative evaluation consists of perplexity\\nscores (Tables 2 and 3), and the qualitative listening study is analyzed by pairwise comparisons between methods. While the proposed method achieves the highest win-rate in the listening study, other results in the study (LSTM vs Transformer) run contrary to the ranking given by the perplexity scores in Table 3. This immediately raises the question of how perceptually relevant the (small) differences in perplexity might be, which in turn clouds the overall interpretation of the results. Of course, perplexity is not the whole story here: the focus of the paper seems to be on efficiency, not necessarily accuracy, but one might expect improved efficiency to afford higher model capacity and improve on accuracy.\\n\\n\\nThe core contributions of this work are described in sections 3.4 and 3.5, and while I get the general flavor of the idea, I find the exposition here both terse and difficult to follow. Figures 1 and 2 should illustrate the core concept, but they lack axis labels (and generally sufficient detail to decode properly), and seem to use the opposite color schemes from each-other to convey the same ideas. Concrete image maps using real data (internal feature activations) may have been easier to read here, along with an equation that describes how the array indices map after skewing.\\n\\nThe description in 3.4 of the improved memory enhancement is also somewhat difficult to follow. The claim is a reduction from O(DL^2) to O(DL), but table 1 lists this as O(DL^2) to O(DL + L^2). In general, I would expect L to dominate D, which still leaves the memory usage in quadratic space, so it's not clear how or why this constitutes an improvement. The improvement due to moving from global to local attention is clear, but this does not appear to be a contribution of this work.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
H1f7S3C9YQ | SynonymNet: Multi-context Bilateral Matching for Entity Synonyms | [
"Chenwei Zhang",
"Yaliang Li",
"Nan Du",
"Wei Fan",
"Philip S. Yu"
] | Being able to automatically discover synonymous entities from a large free-text corpus has transformative effects on structured knowledge discovery. Existing works either require structured annotations or fail to incorporate context information effectively, which lowers the efficiency of information usage. In this paper, we propose a framework for synonym discovery from a free-text corpus without structured annotation. As one of the key components in synonym discovery, we introduce a novel neural network model, SynonymNet, to determine whether or not two given entities are synonyms of each other. Instead of using entity features, SynonymNet makes use of multiple pieces of contexts in which the entity is mentioned, and compares the context-level similarity via a bilateral matching schema to determine synonymity. Experimental results demonstrate that the proposed model achieves state-of-the-art results on both generic and domain-specific synonym datasets: Wiki+Freebase, PubMed+UMLS and MedBook+MKG, with up to 4.16% improvement in terms of Area Under the Curve (AUC) and 3.19% in terms of Mean Average Precision (MAP) compared to the best baseline method. | [
"deep learning",
"entity synonym"
] | https://openreview.net/pdf?id=H1f7S3C9YQ | https://openreview.net/forum?id=H1f7S3C9YQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1eejX0lgE",
"ryx-GS_u07",
"rJgca4uuC7",
"BJlwNEO_AX",
"rJxebEdOCX",
"S1l6EuIAnm",
"S1gffzlahX",
"Ske--iHV3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544770455682,
1543173385189,
1543173313957,
1543173167044,
1543173111817,
1541462069180,
1541370378023,
1540803321238
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1530/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1530/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1530/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1530/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1530/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1530/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1530/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1530/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents a model to identify entity mentions that are synonymous. This could have utility in practical scenarios that handle entities.\\n\\nThe main criticism of the paper is regarding the baselines used. Most of the baselines that are compared against are extremely simple. There is a significant body of literature that models paraphrase and entailment and many of those baselines are missing (decomposable attention, DIIN, other cross-attention mechanisms). Adding those experiments would make the experimental setup stronger.\\n\\nThere is a bit of a disagreement between reviewers, but I agree with the two reviewers who point out the weakness of the experimental setup, and fixing those issues could improve the paper significantly.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta Review\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the thorough review and constructive feedback.\\n\\nWe first would like to thank the reviewer for the positive feedback on our work.\\n\\nFor the part that concerned the reviewer, we elaborate point by point as shown below:\\n\\n(W1) For the experiment setting, the proposed model can work with various word embeddings. The contribution of our work does not lie in the choice of word embeddings, but the proposed architecture that utilizes entity representations for bilateral matching among multiple pieces of contexts. Our model is independent of the choice of word embeddings, and we adopt Word2vec as a base case. We aim to experiment the modeling ability of different model architectures given the same word representation information for synonym discovery. With sophisticated word embedding methods such as Elmo or BERT, which achieve decent performances on various NLP tasks, we do expect that both baselines and our model will get better performance.\\n\\n(W2) We\\u2019ve added the significance testing in the experiment and update Table 2 with discussions. A single-tailed t-test is performed to see whether or not the proposed model can outperform other baselines with significant improvements.\\n\\n(W3) For the missing key details, the contexts are randomly selected from all contexts in which each entity is mentioned. Due to limitations on computing resources, we are only able to verify the performance of up to 20 pieces of randomly chosen contexts in which each entity is mentioned. For Wiki+Freebase and PubMed+UMLS, the datasets come with entity mentions annotated. While in MedBook+MKG, we apply existing NER model [1] with contextualized embeddings [2] to obtain the annotated entities from the text. We clarified the claim about the annotation: the proposed model does not require additional structured annotations on the free-text corpus, such as entity ontologies, dependency parsing results during training and inference. The inference stage for synonym discovery is also designed to be data-driven so that we do not need pre-specified candidate entity pairs prepared by domain experts to be verified by the model, which further alleviates annotation efforts. We added these details in the revised version.\"}",
"{\"title\": \"Response (Cont'd)\", \"comment\": \"(W4) Thanks for the insights on these concerns.\\n(a)\\tThe complexity of method: the proposed model compares each piece of context of one entity h_p with each context g_q of another entity. Thus P*Q comparisons are needed for each entity pair. We write equations for each comparison for clarity. Regarding the implementations, the bilateral matching can be easily written in a matrix form, where a matrix multiplication is used H*W_BM*G^T, and the matching score matrix M can be obtained by taking softmax on the M matrix over certain axis (over 0-axis for M_{p->q}, 1-axis for M_{p<-q}). The context aggregation can also be done using a simple max-pooling operator. Thus, the proposed matching system is computational efficient via matrix multiplication, sum, softmax, and pooling. Moreover, the matching itself does not introduce additional model parameters except a domain-dependent context vector l and a bi-linear weight matrix W_BM.\\n(b)\\tWith redundant pieces removed, the contexts are randomly sampled, matched, and aggregated in the proposed matching system. As noisy contexts are prevalently observed, and contexts are randomly sampled, we propose to deal with uninformative contexts by the proposed bilateral matching with leaky units. How high-quality, complimentary, and informative contexts can be retrieved and collectively fused itself is an open and challenging research problem, which we would like to explore in-depth in our future works. \\n(c)\\tThe intuition behind such design is that when two entities are synonym with each other, we would like to have a low loss if a high similarity score is learned; when two entities are not synonym with each other, we would like to have a low loss when the similarity score is low. A similar loss function has been used in previous works such as in the SRN (Neculoiu et al., 2016) model. We updated descriptions with examples in Section 2.5. \\n\\n(W5) In the training for knowledge graph completion tasks where the objective aims to discover new entity pairs of certain relationships, classifiers are trained to determine the rationality of candidate entity pairs. It is routine to obtain corrupt correct triples (h, r, t) \\u2208 S by replacing entities, and construct incorrect triples as negative samples for training [3][4][5]. Experiment results show that this learning schema is effective on classification and link prediction tasks on knowledge graphs such as Freebase (Bollacker et al. 2008) and WordNet (Miller 1995). Similarly, we adopt this learning schema and obtain negative samples by replacing existing entities in synonym entity pairs with random ones on our synonym discovery task. There is a small probability that the randomly generated negative samples could be rational synonym pairs, but we found this learning schema effective, which is in accordance with the situations for knowledge graph completion tasks where precision and missing pairs also ubiquitously observed.\\n\\n[1] Peters, Matthew, et al. \\\"Semi-supervised sequence tagging with bidirectional language models.\\\" Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vol. 1. 2017.\\n[2] Peters, Matthew, et al. \\\"Deep Contextualized Word Representations.\\\" Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Vol. 1. 2018.\\n[3] Socher, Richard, et al. 
\\\"Reasoning with neural tensor networks for knowledge base completion.\\\" Advances in neural information processing systems. 2013.\\n[4] Wang, Zhen, et al. \\\"Knowledge Graph Embedding by Translating on Hyperplanes.\\\" AAAI. Vol. 14. 2014.\\n[5] Lin, Yankai, et al. \\\"Learning entity and relation embeddings for knowledge graph completion.\\\" AAAI. Vol. 15. 2015.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the appreciation and the supportive comments.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for the comments and suggestions.\\n\\nFor the mentioned related works, Snow et a.l 2005, Sun and Grishman 2010, Liao et al. 2017, Cambria et al. 2018, they are not designed for synonym discovery task, so we do not compare with them in the experiments. The mentioned related works introduce different ways to incorporate the context information when we are able to obtain external knowledge such as the entity ontologies, dependency parsing results. Snow et al. model the context by dependency path features extracted from parse trees. Their model aims to extract the hypernym (is-a) entity pairs from the sentence. Sun and Grishman use the dependency parsing results to devise an unsupervised model that clusters local contexts. The contexts are used to discover patterns expressing relationships between entities. Liao et al. propose to annotate entity mentions from the sentence using limited contexts in short search queries. Cambria et al. learn concept primitives for sentiment analysis. The model encodes the left context and right context separately while neglecting the target word for context modeling. A neural tensor layer is used to model the interactions between left/right context.\\n\\nThe models mentioned above inspire us to devise the context encoder in SynonymNet that both 1) explicitly models the entity information using its contexts, and 2) does not use additional structured annotations for context modeling.\\n\\nFor the datasets, we verify the performance of the proposed model on both generic and domain-specific datasets in English and Chinese. We updated the dataset descriptions in Section 3.1. Wiki+Freebase contains generic entities and their contexts from Wikipedia. The PubMed+UMLS and MedBook+MKG contain medical entities and their related context in medical literature. Both Wiki+Freebase and PubMed+UMLS are pre-existing English datasets that are publicly available and adopted in previous synonym discovery works [1][2]. The MedBooK+MKG is a Chinese dataset collected by the authors, which is complementary to the existing English datasets, and will be made publicly available. For other datasets, CoNLL-YAGO (Hoffart et al. 2011) lacks a large enough training set for our model. ACE 2004 (NIST, 2004; Ratinov et al. 2011) and ACE 2005 (NIST, 2005; Bentivogli et al. 2010) are less accessible due to copyright issues. The Wiki+Freebase dataset we used shares the same source with Wikipedia (Ratinov et al. 2011).\\n\\nFor Gupta et al., this work mainly harnesses morphological regularities to deal with analogies like king \\u2013 queen = man \\u2013 woman. This is different from the studied task, i.e., synonym discovery.\\n\\nThanks for the suggestions. We updated Section 2 and Section 3.1. Word embedding methods such as the skip-gram learn word representations effectively from a large-scale unannotated corpus. The semantics in the pre-trained embeddings make them suitable to initialize word embeddings for our model. The embeddings are updated as we train the model. \\nAlso, the learned word embeddings are suitable to search for candidate entities. Although the embeddings may involve noisy entities, they significantly narrow down the candidate searching space during inference phase: not all entities need to be verified with a target entity. The noisy candidates introduced by the initial word embeddings are further pruned away by the matching layer in the SynonymNet model.\\n\\nThanks for your suggestions! 
We removed the reference for entities according to your suggestions for simplicity. To further clarify the technical novelty of the proposed algorithm, we have rephrased the contributions in Section 1. \\n\\nThanks again, and the paper has been proofread for grammar errors.\\n\\n[1] Qu, Meng, Xiang Ren, and Jiawei Han. \\\"Automatic synonym discovery with knowledge bases.\\\" Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2017.\\n[2] https://github.com/mnqu/DPE\"}",
"{\"title\": \"Interesting paper though there is room for improvement\", \"review\": \"This paper studies the problem of identifying (discovering) synonymous entities. The paper proposes using the \\\"contexts\\\" of the entities as they occur in associated text corpora (e.g. Wiki) in the proposed neural-network based embedding approach for this task. The key novelties of the approach lie in the \\\"matching\\\" system used, where contexts of one entity are matched with that for the other entity to see how well they align with each other (which effectively determines the similarity of the two entities). Experiments are conducted on three different datasets to show the efficacy of the proposed approach.\\n\\nOverall I found the paper to be an interesting read with some nice ideas mixed in. However I also had some concerns which are highlighted later down below, which I believe if addressed would lead to a very strong work.\", \"quality\": \"Above average\\n\\nIn general the method seems to work somewhat better than the baselines and the method does have a couple of interesting ideas.\", \"clarity\": \"Average\\n\\nI found a few key details to be missing and also felt the paper could have been better written.\", \"originality\": \"Average\\n\\nThe matching approach and use of the leaky units was interesting tidbits. Outside of that the work is largely about the application of such Siamese RNNs based networks to this specific problem. (The use of context of entities has already been looked at in previous works albeit in a slightly more limited manner)\", \"significance\": [\"Slightly below average\", \"I am not entirely sold on the use of this approach for this problem given its complexity and unclear empirical gains vs more sophisticated baselines. The matching aspect may have some use in other problems but nothing immediately jumps out as an obvious application.\", \"----\", \"Strengths / Things I liked about the paper:\", \"In general the method is fairly intuitive and simple to follow which I liked.\", \"The matching approach was an interesting touch.\", \"Similarly for the \\\"leaky\\\" unit.\", \"Experiments conducted on multiple datasets.\", \"The results indicate improvements over the baselines considered on all the three datasets.\", \"Weaknesses / Things that concerned me:\", \"(W1) Slightly unfair baselines? One of the first things that struck me in the experimental results was how competitive word2vec by itself was across all three datasets. This made me wonder what would happen if we were to use a more powerful embedding approach say FastText, Elmo, Cove or the recently proposed BERT? (The proposed method itself uses bidirectional LSTMs)\", \"Furthermore all of them are equally capable of capturing the contexts as well. An even more competitive (and fair) set of baselines could have taken the contexts as well and use their embeddings as well. Currently the word2vec baseline is only using the embedding of the entity (text), whereas the proposed approach is also provided the different contexts at inference time. The paper says using the semantic structure and the diverse contexts are weaknesses of approaches using the contexts, but I don't see any method that uses the context in an embedding manner -- say the Cove context vectors. 
If the claim is that they won't add any additional value above what is already captured by the entity it would be good to empirically demonstrate this.\", \"(W2) Significance testing: On the topic of experimentation, I was concerned that significance testing / error estimates weren't provided for the main emprical results. The performance gaps seem to be quite small and to me it is unclear how significant these gaps are. Given how important significance testing is as an empirical practice this seems like a notable oversight which I would urge the authors to address.\", \"(W3) Missing key details: There were some key aspects of the work that I thought were not detailed. Chief among these was the selection of the contexts for the entities. How was this? How were the 20 contexts identified? Some of these entities are likely far more common than just 20 sentences and hence I wonder how these were selected?\"], \"another_key_aspect_i_did_not_see_addressed\": \"How were the entities identified in the text (to be able to find the contexts for them)? The paper claims that they would like to learn from minimal human annotations but I don't understand how these entity annotations in the text were obtained. This again seems like a notable oversight.\\n\\n- (W4) Concerns about the method: I had two major concerns about the method: \\n\\n(a) Complexity of method : I don't see an analysis of the computational cost of the proposed method (which scales quadratically with P the number of contexts); \\n\\n(b) Effect of redundant \\\"informative\\\" contexts: Imagine you have a number of highly informative contexts for an entity but they are all very similar to each other. Due to the way the matching scores are aggregated, these scores are made to sum to 1 and hence no individual score would be very high. Given that this is the final coefficient for the associated context, this seems like a significant issue right?\\n\\nUnless the contexts are selected to be maximally diverse, it seems like this can essentially end up hurting an entity which occurs in similar contexts repeatedly. I would like to see have seen the rationale for this better explained.\\n\\n(c) A smaller concern was understanding the reasoning behind the different loss functions in the siamese loss function with a different loss for the positive and the negative, one using a margin and one which doesn't. One which scales to 1/4, the other scaling to (1-m)^2. This seems pretty arbitrary and I'd like to understand this.\\n\\n-(W5) Eval setting : My last concern was with the overall evaluation setup. Knowledge bases like Freebase are optimized for precision rather than recall, which is why \\\"discovery\\\" of new relations is important. However if you treat all missing relationships as negative examples then how exactly are you measuring the true ability of a method? Thus overall I'm pretty skeptical about all the given numbers simply because we know the KBs are incomplete, but are penalizing methods that may potentially discover relations not in the KB.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice approach for automatically discovering synonymous entities\", \"review\": \"The paper presents a neural network model (SYNONYMNET) for automatically discovering synonymous entities from a large free-text corpus with minimal human annotation. The solution is fairly natural in the form of a siamese network, a class of neural network architectures that contain two or more identical subnetworks, which are an obvious approach for such a task, even though this task's SotA does not cover such architectures. even though the abstract consists the word novel, the chosen architecture is not a novel one but attached to this task, it can be considered as if.\\n\\n# Paper discussion:\\n\\nThe introduction and the related work are well explained and the article is well structured. The authors mark very well the utility of automatically discovering synonyms.\\n\\nSection 2 presents the SynonymNet, mainly the bi-LSTM applied on the contexts and the bilateral matching with leaky unit and the context aggregation for each entity, along with training objectives and the inference phase.\\n\\nThe novelty does not consist in the model since the model derives basically from a siamese network, but more in the approach, mainly the bilateral matching: one input is a context for an entity, the other input is a context for the synonym entity, and the output is the consensus information from multiple pieces of contexts via a bilateral matching schema with leaky unit (highest matched score with its counterpart as the relative informativeness score) and the context aggregation. The inference phase is a natural step afterward. Also, the usage of the leaky unit is clearly stated.\\n\\nSection 3 presents the experimental phase, which is correct. The choice of LSTMs is understandable but other experiments could have been done in order to make clearer why it has been chosen. Regarding also the word embeddings choice, other experiments could have been completed (word2vec and GloVe have been competing with many other embeddings recently).\", \"one_noticed_misspelling\": \"GolVe (Pennington et al., 2014)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"This paper presents a neural network model that detect synonymous entities based on contextual information without supervision.\", \"review\": [\"Strengths:\", \"clear explanation of the problem\", \"clear explanation of the model and its application (pseudocode)\", \"clear explanation of training and resulting hyperparameters\"], \"weaknesses\": [\"weak experimental settings:\", \"-- (a) comparison against 'easy to beat' baselines. The comparison should also include as baselines the very relevant methods listed in the last paragraph of the related work section (Snow et a.l 2005, Sun and Grishman 2010, Liao et al. 2017, Cambria et al. 2018).\", \"-- (b) unclear dataset selection: it is not clear which datasets are collected by the authors and which are pre-existing datasets that have been used in other work too. It is not clear if the datasets that are indeed collected by the authors are publicly available. Furthermore, no justification is given as to why well-known publicly available datasets for this task are not used (such as CoNLL-YAGO (Hoffart et al. 2011), ACE 2004 (NIST, 2004; Ratinov et al. 2011), ACE 2005 (NIST, 2005; Bentivogli et al. 2010), and Wikipedia (Ratinov et al. 2011)).\", \"the coverage of prior work ignores the relevant work of Gupta et al. 2017 EMNLP. This should also be included as a baseline.\", \"Section 2 criticises Mikolov et al.'s skip-gram model on the grounds that it introduces noisy entities because it ignores context structure. Yet, the skip-gram model is used in the preprocessing step (Section 3.1). This is contradictory and should be discussed.\", \"the definition of synonyms as entities that are interchangeable under certain contexts is well known and well understood and does not require a reference. If a reference is given, it should not be a generic Wikipedia URL.\", \"the first and second bulletpoint of contributions should be merged into one. They refer to the same thing.\", \"the paper is full of English mistakes. A proficient English speaker should correct them.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
SkgQBn0cF7 | Modeling the Long Term Future in Model-Based Reinforcement Learning | [
"Nan Rosemary Ke",
"Amanpreet Singh",
"Ahmed Touati",
"Anirudh Goyal",
"Yoshua Bengio",
"Devi Parikh",
"Dhruv Batra"
] | In model-based reinforcement learning, the agent interleaves between model learning and planning. These two components are inextricably intertwined. If the model is not able to provide sensible long-term predictions, the executed planner will exploit model flaws, which can yield catastrophic failures. This paper focuses on building a model that reasons about the long-term future and demonstrates how to use this for efficient planning and exploration. To this end, we build a latent-variable autoregressive model by leveraging recent ideas in variational inference. We argue that forcing latent variables to carry future information through an auxiliary task substantially improves long-term predictions. Moreover, by planning in the latent space, the planner's solution is ensured to be within regions where the model is valid. An exploration strategy can be devised by searching for unlikely trajectories under the model. Our method achieves higher reward faster compared to baselines on a variety of tasks and environments in both the imitation learning and model-based reinforcement learning settings. | [
"model-based reinforcement learning",
"variation inference"
] | https://openreview.net/pdf?id=SkgQBn0cF7 | https://openreview.net/forum?id=SkgQBn0cF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJetiDy-lE",
"ryln2JpOk4",
"SkxokekkyE",
"r1gl889CAX",
"HJlGMQh6C7",
"BkgihzhTR7",
"BJlOcMnT07",
"HJl6NdMsR7",
"rkeeGBQ5AQ",
"Ske-CdpK0X",
"rkxdPmhYCm",
"SkeIssotCX",
"BkgyCu_uRQ",
"SklnLOduCX",
"Hyxyb_OdRX",
"SkgclFyU07",
"SyxYMYcXCQ",
"B1eIKu97Rm",
"H1xTMd97AX",
"SkgrTD57AX",
"SylPxP9XR7",
"SJl4a85QC7",
"BJgN9I97CX",
"B1xorU9XCm",
"HJlR5FsT2Q",
"Skg9apeK2X",
"r1eK_NeLi7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544775585445,
1544241075700,
1543593954555,
1543575112268,
1543516937810,
1543516851228,
1543516816470,
1543346229250,
1543283976231,
1543260361130,
1543254880103,
1543252894074,
1543174343440,
1543174227650,
1543174134646,
1543006449780,
1542854928533,
1542854781715,
1542854677121,
1542854589423,
1542854383054,
1542854331519,
1542854284054,
1542854211464,
1541417366212,
1541111234044,
1539863665507
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1527/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1527/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1527/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1527/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1527/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper explores the use of multi-step latent variable models of the dynamics in imitation learning, planning, and finding sub-goals. The reviewers found the approach to be interesting. The initial experiments were a main weakpoint in the initial submission. However, the authors updated the experimental results to address these concerns to a significant degree. The reviewers all agree that the paper is above the bar for acceptance. I recommend accept.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta review\"}",
"{\"title\": \"Feedback\", \"comment\": \"I want to thank the authors for the thorough engagement with the reviewers and the additional effort for improving the original submission.\"}",
"{\"title\": \"Feedback\", \"comment\": \"Dear Reviewer,\\n\\nThanks for encouraging words. We note that except adding comparison to the Learning to query paper, other results are already added to the paper (i.e the intuition behind the auxiliary cost as the reviewer suggested, related work section as other reviewer suggested, comparison to the baseline without auxiliary loss). Since we cant update the paper now, if the paper gets accepted , we will update it then! \\n\\nThanks for your time! :)\"}",
"{\"title\": \"Feedback\", \"comment\": \"I think the authors addressed really well the reviewes and engaged a lot on this paper with providing additional results and experiments which were requested by reviewers. I think the rebuttal was very adequare and definately would be useful for both reviewers and any outside readers. I will revise all of the provided extra information and the new draft of the paper and make a revision based on those.\\nThank you for the great communication.\"}",
"{\"title\": \"Thanks for increasing score!\", \"comment\": \"Dear Reviewer,\\n\\nWe thank the reviewer for taking time in reading our feedback, and increasing their score.\\nYour feedback has already been very helpful in improving the paper. \\n\\nThanks!\"}",
"{\"title\": \"Feedback Useful! thanks :)\", \"comment\": \"Dear Reviewer,\\n\\nYour feedback has already been very helpful in improving the paper. We would like to know if our response adequately addressed your concerns. \\n\\nAs the discussion period is coming to an end. If you have any questions or would like to provide more specific context behind your scores, we would be happy to provide feedback. Are there any other aspects of the paper that you think could be improved?\\n\\nthanks for your time! :)\"}",
"{\"title\": \"Feedback very useful!\", \"comment\": \"Dear Reviewer,\\n\\nYour feedback has already been very helpful in improving the paper. We would like to know if our response adequately addressed your concerns. \\n\\nAs the discussion period is coming to an end. If you have any questions or would like to provide more specific context behind your scores, we would be happy to provide more feedback. Are there any other aspects of the paper that you think could be improved?\"}",
"{\"title\": \"Final Rebuttal\", \"comment\": \"We would like to thank all the reviewers for taking time and giving detailed feedback on our paper. Feedback by the reviewers have already been very helpful in improving the paper. We are also glad that the reviewers found our paper to be \\\"quite well presented and concise.\\\" (Reviewer 1) and \\\"an interesting paper\\\" (Reviewer 2).\\n\\nWe would open source the code for the proposed method.\\n\\nWe conducted additional experiments, and rewrote certain sections of the paper to make it more concise.\\n\\n- We conducted additional experiments and compared our paper with the state of the art state space model. (Buesing, Lars, et al. [1]). We found that in our preliminary experiments our model performs better as compared to [1]. (Reviewer 1 and Reviewer 3). We also note that the source code for this paper is not available (Buesing, Lars, et al. [1]), and the authors only evaluated their method on Atari taking millions of samples.\\n\\n- We also conducted additional experiments comparing the proposed approach as to when no auxiliary cost was included (all reviewers). We find that the proposed approach performs better as compared to the case when auxiliary cost is not included (and hence showing that our model is learning a better predictive model). (ALL REVIEWERS)\\n\\n- We added more references showing that the problem of long-term future prediction exists in the context of sequential LVMs. (Reviewer 3).\\n\\n- We updated the part of the paper where we describe the KL cost. (Reviewer1).\\n\\n- We also ran additional experiments comparing our work with the prediction and control paper as pointed by Reviewer 1. Here, also we outperform the proposed baseline. Again, we note that this paper does not have open source code base.\\n\\n\\n[1] Buesing, Lars, et al. \\\"Learning and Querying Fast Generative Models for Reinforcement Learning.\\\" *arXiv preprint arXiv:1802.03006* (2018).\\n\\nWe feel that conducting additional experiments has improved the quality of the paper and we also think that we have appropriately addressed the comments by the reviewers.\\n\\nWe again thank all the reviewers, area chair for their time.\\n\\nThank you! :-)\"}",
"{\"title\": \"Feedback by reviewer\", \"comment\": \"We would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns, and let us know if you would like to either revise your rating of the paper.\\n\\n We once again thank the reviewer for the feedback of our work.\\n\\nThanks for your time! :)\"}",
"{\"title\": \"Clarification understood\", \"comment\": \"Thanks, that makes sense. Feel free to update your original comment.\"}",
"{\"title\": \"More clarification\", \"comment\": \"We apologize for the confusion. We evaluate the likelihood of heldout trajectory (i.e \\\"test set\\\" trajectories).\\n\\n\\nWe meant this, i.e we sample the trajectory from the \\\"true\\\" env, and then evaluate likelihood under the proposed model, and the rest of the baselines.\\nE[log p_model(trajectory|past)]_{trajectory sampled from true environment}\\n\\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns. \\n\\nOnce the reviewer says yes, we would update the first reply so that other readers dont get confused. Thanks again for taking time in reading our reply.\"}",
"{\"title\": \"Clarification of the Likelihood values\", \"comment\": \"Thanks again for incorporating this feedback and for the effort in improving the submition.\", \"i_have_a_question_regarding_the_first_table_in_the_above_comment_specifically\": \"\\\"Here, we compute the likelihood of the predicted trajectories. This result shows that the trajectories generated by the proposed model are more likely as compared to the baseline methods. We compare the proposed method to stochastic RNN, learning to query paper, proposed model (without auxiliary cost) and proposed model with auxiliary cost.\\\" \\n\\nI'm a bit confused what exactly do you mean that you evaluate. Do you mean that you calculate:\\nE[log p_model(trajectory|past)]_{tranjectory sampled according to p_model} \\nor do you mean \\nE[log p_model(trajectory|past)]_{tranjectory sampled from true environment}\\n\\nThe language used (predicted trajectories) indicates the former. However, I'm not sure this indicates nessacarily that one model is better than another as it more-likely tells us what is the entropy of the distribution, but not if that distribution is good in any way. \\n\\nAlso thanks for the MS Pacman experiments as well.\"}",
"{\"title\": \"Request for feedback.\", \"comment\": \"Thank you again for the thoughtful review. We would like to know if our rebuttal adequately addressed your concerns. We would also appreciate any additional feedback on the revised paper. (We have compared to the Learning to query paper and Prediction and Control paper which the reviewer asked, and added the missing baselines). Are there any other aspects of the paper that you think could be improved?\"}",
"{\"title\": \"Request for feedback.\", \"comment\": \"Thank you again for the thoughtful review. We would like to know if our rebuttal adequately addressed your concerns. We would also appreciate any additional feedback on the revised paper. (We have compared to the Learning to query paper and Prediction and Control paper which the reviewer asked, and also rewritten the motivation behind the KL term).\\nAre there any other aspects of the paper that you think could be improved?\"}",
"{\"title\": \"Request for Feedback.\", \"comment\": \"Thank you again for the thoughtful review. We would like to know if our rebuttal adequately addressed your concerns. We would also appreciate any additional feedback on the revised paper. Are there any other aspects of the paper that you think could be improved?\"}",
"{\"title\": \"Comparison with Temporal Segment Models\", \"comment\": \"As the reviewer requested, we also compared the proposed method to the Temporal segment models. We also note that this paper does not have an open-source implementation, so we are trying to get the proposed baseline right.\\n\\nIn order to show that the proposed model, learns a better predictive model, we first train the proposed model and the baseline using trajectories sampled from an expert policy. We then evaluate the log-likelihood on the test trajectories.\\n\\nMethod Likelihood\\nVariational RNN\\t\\t\\t\\t\\t 1.27\\nPrediction and Control\\t\\t\\t\\t\\t\\t 1.61\\nProposed model 1.84\\n\\n(Higher is better)\\n\\nThis shows that the proposed method performs better as compared to the Prediction and Control baseline. \\n\\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns. We hope that our updates to the manuscript address the reviewer's concerns about clarity, and we hope that the discussion above addresses the reviewer's concerns about empirical significance. We once again thank the reviewer for the feedback of our work.\"}",
"{\"title\": \"Comparison with state of the art (2/2)\", \"comment\": \"Q: \\u201cBoth in the introduction and during the main text the authors have not cited [1] which I think is a very closely related method. In this work similarly, a generative model of future segments is learned using a variational framework. In addition, the MPC procedure that the authors present in this paper is not novel, but has already been proposed and tried in [1] - optimizing over the latent variables rather than the actions directly, and there have been named Latent Action Priors. \\u201c\\n\\nWe again agree with the reviewer that the paper [1] should be cited and discussed. We think, that a more related paper to our proposed method is the use of state space models where you are actually learning the dynamics model at some higher level of abstraction. We don\\u2019t claim that using the proposed method for MPC planning is novel, only the choice of bidirectional inference network and thereby leveraging variational methods and autoregressive models (RNNs) to improve training of the predictive model (at some higher level of hierarchy) in order to more accurately predict the future. Hence, use of inference network as well as using the auxiliary cost is novel (as shown by our results). We outperform both the Sectar [1] paper and learning to query paper. [2].\\n\\n[1] Sectar. https://arxiv.org/abs/1806.02813\\n[2] Learning and Querying Generative Models for RL https://arxiv.org/abs/1802.03006\", \"q\": \"\\u201cThe authors claim that they train the auxiliary loss using Variational Inference, yet they drop the KL term, which is \\\"kinda\\\" an important feature of VI. Auxiliary losses are well understood that often help in RL, hence there is no need to over-conceptualize the idea of adding the extra term log p(b|z) as a VI and then doing something else. It would be much more clear and concise just to introduce it as an extra term and motivate it without referring to the VI framework, which the authors do not use for it (they still use it for the main generative model). The only way that this would have been acceptable if the experiment section contained experiments with the full VI objective as equation (6) suggest and without the sharing of the variational priors and posteriors and compared them against what they have done in the current version of the manuscript. \\u201c\\n\\nWe thank the reviewer for pointing this out. We agree with the reviewer and have updated our paper to reflect this change in Section 2.3 of the paper.\\n\\n\\\"Comparison with state of the art state space models\\\"\\n\\n We first compare the proposed method to state of the art state space model (Buesing, Lars, et al) [1]. We also note that [1] does not have an open-source implementation, and they ([1]) only evaluated on few atari games using millions of samples per game. We believe that comparing to such a strong baseline is very important and hence we compared to this on a challenging image based mujoco env, and MS_PACMAN from ALE.\\nWe ask the reviewer to see the headline \\\"Comparison with state of the art state space model - ALL REVIEWERS \\\" for more details.\\n\\n[1] Buesing, Lars, et al. \\\"Learning and Querying Fast Generative Models for Reinforcement Learning.\\\" *arXiv preprint arXiv:1802.03006* (2018).\"}",
"{\"title\": \"Thanks for feedback! (1/2)\", \"comment\": \"We thank the reviewer for such a detailed feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address all of your concerns about the work, though we would appreciate any additional comments or feedback that you might have. We acknowledge that the paper was certainly lacking polish and accept that this may have made the paper difficult to read in places. We have uploaded a revised version in which we have revised the problem statement and writing as per the reviewer's suggestions. We briefly summarize the key idea of the paper and then address the specific concerns.\", \"q\": \"\\u201cHowever, given that similar result has been shown in [1] regarding the planning framework it is unclear how novel the result is. \\u201c\\n\\nThe reviewer is right. We are not suggesting the use of the proposed method is novel for planning. The novelty comes from using bidirectional inference network and using the auxiliary cost for exploration. We showed that the method learns a better inference network by using the proposed method for planning, and showing that the proposed method outperforms more complicated and state of the art methods like [1], [2].\\n\\n[1] Sectar. https://arxiv.org/abs/1806.02813\\n[2] Learning and Querying Generative Models for RL https://arxiv.org/abs/1802.03006\"}",
"{\"title\": \"Comparison to the state of the art models (2/2)\", \"comment\": \"Q: \\u201cSlightly unsure about the details of the imitation and RL (MPC + PPO + Model learning) experiments. How large is the replay buffer? What\\u2019s the value of k? It would be interesting how the value of k affects learning performance. It\\u2019s unclear how many seeds experiments were repeated with.\\u201d\\n\\nWe agree with the reviewer. We have added the details about each experiment\\u2019s setup in the appendix. For example, we use k=19 for wheeled locomotion tasks. We followed the exact same setup (hyperparameters) as SeCTAr[1] and so we did not try any other values of k to ensure fairness. All imitation learning experiments and RL experiments are repeated 5 times with different random seeds. \\n\\n[1] Sectar. https://arxiv.org/abs/1806.02813\", \"q\": \"\\u201cNot sure if the ideas really do scale to \\u201clong-horizon\\u201d problems. The MuJoCo tasks don\\u2019t need good long horizon models and the BabyAI problem seems fairly small.\\u201d\\n\\nWe thank the reviewer for pointing this out, we agree this is an important point. The wheeled locomotion tasks we used in our experiments is a challenging task with long-horizon planning. The agent is presented with multiple goals and sparse rewards. The agent needs to plan to reach each goal sequentially, only after reaching the 3rd goal, the agent obtains a reward of 1. We currently outperform the Sectar paper which we believe is a very strong baseline. We ran other experiments to compare our proposed method with the state of the art state space models[1]. We ask the reviewer to refer to the \\\"ALL REVIEWERS\\\" headline.\\n\\n[1] Learning and Querying Generative Models for RL https://arxiv.org/abs/1802.03006\\n\\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns. We hope that our updates to the manuscript address the reviewer's concerns about clarity, and we hope that the discussion above addresses the reviewer's concerns about empirical significance. We once again thank the reviewer for the feedback of our work.\"}",
"{\"title\": \"Thanks for feedback! (1/2)\", \"comment\": \"We thank the reviewer for such a detailed feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address all of your concerns about the work, though we would appreciate any additional comments or feedback that you might have. We acknowledge that the paper was certainly lacking polish and accept that this may have made the paper difficult to read in places. We have uploaded a revised version in which we have added the extra references as per the reviewer's suggestions.\\n\\nWe have conducted additional experiments to compare the proposed model to the state of the art state space model. We ask the reviewer to refer to the heading \\\"Comparison with state of the art state space model - ALL REVIEWERS\\\"\", \"q\": \"\\u201cOn the inference side, the paper makes a few choices to make the posterior approximation. It would be useful to describe the intuitions behind the choices especially the dependence of the posterior on actions a_{t-1}:T because it seems like the actions _should_ be fairly important for modeling the dynamics in a stochastic system.\\u2018\\n\\nWe thank the reviewer for pointing this out. In principle, the posterior should depend on future actions. To take into account the dependence on future actions as well as future observations, we can use the LSTM that processes the observation-action sequence backwards. In pilot trials, we conducted experiments with and without the dependencies on actions for the backward LSTM and we didn\\u2019t notice a noticeable difference in terms of performance. We hence chose to drop the dependencies on actions in the backward LSTM to simplify the code. We have updated the paper (appendix) to clarify this difference.\"}",
"{\"title\": \"Learning better dynamics model using auxiliary cost and bidirectional inference (3/3)\", \"comment\": \"Q:\\u201d Application of learning models to RL is not novel, see references above. But maybe this is a misunderstanding on my side, as the Buesing paper is cited in the related work.\\u201d\\n\\nWe agree that the application of learning models to RL is not novel, the novelty of our method comes from building a more accurate predictive model of the environment. Our methods differs from previous works in many aspects. Mainly, we differ on our model architecture, which dictates how to train the model and how to use it for control or sequential task in general. For instance, Buesing et al. use a pretrained state-space model to generate trajectories of latent states. These are encoded by an LSTM and the obtained embedding is fed to a policy network along with the real observations. This policy network is then trained using a model-free method. Therefore, model accuracy is not really critical in their setting as they don\\u2019t execute an explicit planning with the learned model. In our work, we focus on training a sequential LVM model that learns a better model of the longer term future by forcing latent variables to carry information about future observations. The accuracy of our model is critical in our setting in order to provide sensible explicit planning. In our experiments, we also compared the proposed method to the work of Buesing et. al, and in our preliminary experiments, we outperform as compared to their work.\\n\\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns. We hope that our updates to the manuscript address the reviewer's concerns about clarity, and we hope that the discussion above addresses the reviewer's concerns about empirical significance. We once again thank the reviewer for the feedback of our work.\"}",
"{\"title\": \"Long term prediction sequential LVM's (2/3)\", \"comment\": \"\\\" Especially since other works have made model-based control work in challenging environments:/ Application of learning models to RL is not novel, see references above. But maybe this is a misunderstanding on my side, as the Buesing paper is cited in the related work.\\u201d\\\"\\n\\nWe thank the reviewer for pointing out the references for other model-based RL works. We have updated these references in the \\u201crelated works\\u201d section. We build our work on many works which explore models ranging from deterministic recurrent neural networks (RNNs) to fully stochastic models, to be more precise on the comparisons between our work compared with other works in this area.\\n\\n[1], [2], [5] train stochastic stochastic RNN with latent variables, but not in the context of model based reinforcement learning (i.e not for building model of the environment, but for supervised or unsupervised learning learning tasks such as language modeling and speech modeling). [6], [7] train an action-conditioned video prediction network by first learning the latent representation, and then using that latent representation for learning the model. Similar to [9], we present stochastic sequence models that work on high-dimensional data.\\n\\n\\n[1] Black box variational inference for state space models. https://arxiv.org/abs/1511.07367\\n[2] Recurrent Latent Variable for sequential data https://arxiv.org/abs/1506.02216\\n[3] Sequential neural models with stochastic layers https://arxiv.org/abs/1605.07571\\n[4] Deep kalman filters. https://arxiv.org/abs/1511.05121\\n[5] Z-Forcing https://arxiv.org/abs/1711.05411\\n[6] Embed to control: A locally linear latent dynamics model for control from raw images.\\n[7] From pixels to torques: Policy learning with deep dynamical models.\\n[8] Value prediction network. https://arxiv.org/abs/1707.03497\\n[9] Learning and Querying Generative Models for RL https://arxiv.org/abs/1802.03006\", \"q\": \"\\u201d The authors chose to use the latent states for planning. This turns the optimisation into a POMDP problem. How is the latent state inferred at run time? How do we assure that the policy is still optimal?\\n\\nAt the run time, latent variables are inferred by executing a planning algorithm. In the case of MPC, which we use in our RL experiments, we generate multiple samples of latents variables from the prior p(z_t| h_{t-1}), we evaluate the corresponding generated trajectories and then we pick the latent variable sequence that gives the most rewarding trajectory. By planning over latent variables and not over actions directly, we assure that the actions generated by the optimal latent variables are also approximately optimal with respect to state-action distribution captured by the model. If we assume that we explore enough and that our model is accurate enough, the obtained policy is ensured to be optimized towards maximizing expected rewards.\"}",
"{\"title\": \"Thanks for review! Comparison against state space models. (1/3)\", \"comment\": \"We thank the reviewer for such a detailed feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address all of your concerns about the work, though we would appreciate any additional comments or feedback that you might have. We acknowledge that the paper was certainly lacking polish and accept that this may have made the paper difficult to read in places. We have uploaded a revised version in which we have added the extra references as per the reviewer's suggestions.\", \"q\": \"\\u201cThere are numerous typos in text and in equations (e.g. $dz$ missing from integrals).\\u201d\\n\\nWe thank the reviewer for pointing these out. We have corrected these typos in the updated paper.\"}",
"{\"title\": \"Comparison with state of the art state space model\", \"comment\": \"We thank the reviewers for such a detailed feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address the shared concerns of the reviewers. We will address individual reviewer's concerns in their respective threads and we still welcome any additional comments or feedback that you might have.\\n\\nWe first compare the proposed method to state of the art state space model (Buesing, Lars, et al) [1]. We also note that [1] does not have an open-source implementation, and they ([1]) only evaluated on few atari games using millions of samples per game. We believe that comparing to such a strong baseline is very important and hence we compared to this on a challenging image based mujoco env, and MS_PACMAN from ALE. For our evaluations on image based mujoco domain, we use image-based continuous control tasks (half-cheetah). This environments provide qualitatively different challenges, as its nearly impossible to infer the velocity of the half-cheetah just from images, and hence using only the images makes the task partially observable (and challenging). We compare the proposed method with the state-of-the-art Learning to Query model (Buesing, Lars, et al) [1]. In order to show that the proposed model, learns a better predictive model, we first train the proposed model and the baselines using trajectories sampled from an expert policy. We evaluate both the proposed model, and the baseline by predicting the future for longer timesteps (100 timesteps) than it was train for (50 time steps). We demonstrate that the proposed model helps to learn a better model with improved long term dependencies by making the latent variable z conditioned on the future, and using the latent variable for predicting the future.\\n\\nMethod Likelihood\\nVariational RNN\\t\\t\\t\\t\\t 1.2\\nLearning to query\\t\\t\\t\\t\\t\\t 1.62\\nProposed model without the auxiliary cost 1.59\\nProposed model 1.79\\n\\n\\nHere, we compute the likelihood of the predicted trajectories. This result shows that the trajectories generated by the proposed model are more likely as compared to the baseline methods. We compare the proposed method to stochastic RNN, learning to query paper, proposed model (without auxiliary cost) and proposed model with auxiliary cost.\\n\\n=================================\\n\\nWe also compare the proposed model on the ms_pacman env from the atari. This env was chosen to cover a broad range of env. dynamics. The data was collected by running a pretrained policy, and collecting sequence of observations, actions and rewards for 10 time steps. Results are computed on held out test set. Since, each of these models takes considerable amount of time to compute, we did not do any hyperparameter search neither for the baseline, nor for our proposed method. We report likelihood improvements over a baseline model. We consider 3 baselines [1] Variational RNN [2] learning to query paper [3] proposed method without the auxiliary cost [4] Proposed method with auxiliary cost. 
\\n\\nWe report Improvement of test likelihoods of environment models over a baseline model \\n\\nMethod Likelihood (in units of 10^-3.nats/pixel)\\nVariational RNN\\t\\t\\t\\t\\t\\t\\t\\t 1.4\\nLearning to query\\t\\t\\t\\t\\t\\t 1.77\\nProposed model 1.85\\nProposed model without the auxiliary cost 1.69\\n\\nFor this env also, the proposed model outperforms both the learning to query model, as well as the baseline without the auxiliary cost.\"}",
"{\"title\": \"Review of \\\"Modeling the Long Term Future in Model-Based Reinforcement Learning\", \"review\": \"The authors claim that long-term prediction as a key issue in model-based reinforcement learning. Based on that, they propose a fairly specific model to which is then improved with Z-forcing to achieve better performance.\\n\\n## Major\\n\\nThe main issue with the paper is that the premise is not convincing to me. It is based on four works which (to me) appear to focus on auto-regressive models. In this submission, latent variable models are considered. The basis for sequential LVMs suffering from these problems is therefore not given by the literature. \\n\\nThat alone would not be much of an issue, since the problem could also be shown to exist in this context in the paper. But the way I understand the experimental section, the approach without the auxiliary cost is not even evaluated. Therefore, we cannot assess if it is that alone which improves the method. The central hypothesis of the paper is not properly tested.\\n\\nApart from that, the paper appears to have been written in haste. There are numerous typos in text and in equations (e.g. $dz$ missing from integrals).\\n\\nTo reconsider my assessment, I think it should be shown that the problem of long-term future prediction exists in the context of sequential LVMs. Maybe this is obvious for ppl more knowledgeable in the field, but this paper fails to make that point by either pointing out relevant references or containing the necessary experiments. Especially since other works have made model-based control work in challenging environments:\\n\\n- Buesing, Lars, et al. \\\"Learning and Querying Fast Generative Models for Reinforcement Learning.\\\" *arXiv preprint arXiv:1802.03006* (2018).\\n- Karl, M., Soelch, M., Becker-Ehmck, P., Benbouzid, D., van der Smagt, \\n P., & Bayer, J. (2017). Unsupervised Real-Time Control through \\n Variational Empowerment. *arXiv preprint arXiv:1710.05101*.\\n\\n## Minor\\n\\n- The authors chose to use the latent states for planning. This turns the optimisation into a POMDP problem. How is the latent state inferred at run time? How do we assure that the policy is still optimal?\\n- Application of learning models to RL is not novel, see references above. But maybe this is a misunderstanding on my side, as the Buesing paper is cited in the related work.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting approach; not sure if really scales to long horizon problems\", \"review\": \"The paper introduces an interesting approach to model learning for imitation and RL. Given the problem of maintaining multi-step predictions in the context of sequential decision making process, and deficiencies faced during planning with one-step models [1][2], it\\u2019s imperative to explore approaches that do multi-step predictions. This paper combines ideas from learning sequential latent models with making multi-step future predictions as an auxiliary loss to improve imitation learning performance, efficiency of planning and finding sub-goals in a partially observed domain.\\n\\nFrom what I understand there are quite a few components in the architecture. The generative part uses the latent variables z_t and LSTM hidden state h_t to find the factored autoregressive distribution p_\\\\theta. It\\u2019s slightly unclear how their parameters are structured and what parameters are shared (if any). I understand these are hard to describe in text, so hopefully the source code for the experiments will be made available.\\n\\nOn the inference side, the paper makes a few choices to make the posterior approximation. It would be useful to describe the intuitions behind the choices especially the dependence of the posterior on actions a_{t-1}:T because it seems like the actions _should_ be fairly important for modeling the dynamics in a stochastic system.\\n\\nIn the auxiliary cost, it\\u2019s unclear what q(z|h) you are referring to in the primary model. It\\u2019s only when I carefully read Eq 7, that I realized that it\\u2019s p_\\\\theta(z|h) from the generator. \\n\\nSlightly unsure about the details of the imitation and RL (MPC + PPO + Model learning) experiments. How large is the replay buffer? What\\u2019s the value of k? It would be interesting how the value of k affects learning performance. It\\u2019s unclear how many seeds experiments were repeated with.\\n\\nOverall it\\u2019s an interesting paper. Not sure if the ideas really do scale to \\u201clong-horizon\\u201d problems. The MuJoCo tasks don\\u2019t need good long horizon models and the BabyAI problem seems fairly small.\\n\\n- Minor points\\n\\nSec 2.3: not sensitive *to* how different\", \"algorithm_2\": \"*replay* buffer\\n\\n[1]: https://arxiv.org/abs/1612.06018\\n[2]: https://arxiv.org/abs/1806.01825\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good idea, good paper, needs more experiment for more conclusive results\", \"review\": \"After the rebuttal and the authors providing newer experimental results, I've increased my score. They have addressed both the issue with the phrasing of the auxiliary loss, which I'm very happy they did as well as provided more solid experimental results, which in my opinion make the paper strong enough for publication.\\n\\n#####\\nThe paper proposes a variational framework for learning a Model of both the environment and the actor's policy in Reinforcement Learning. Specifically, the model is a deterministic RNN which at every step takes as input also a new stochastic latent variable z_t. Compared to more standard approaches, the prior over z_t is not standard normal but depends on the previously hidden state. The inference model combines information from the forward generative hidden state and a backward RNN that looks only at future observations. Finally, an auxiliary loss is added to the model that tries to predict the future states of the backward RNN using the latent variable z_t. The idea of the paper is quite well presented and concise. \\n\\nThe paper tests the proposed framework on several RL benchmarks. Using it for imitation learning outperforms two baseline models: behaviour cloning and behaviour cloning trained with an auxiliary loss of predicting the next observation. Although the results are good, it would have been much better if there was also a comparison against a Generative model (identical to the one proposed) without the auxiliary loss added? The authors claim that the results of the experiment suggest that the auxiliary loss is indeed helping, where I find the evidence unconvincing given that there is no comparison against this obvious baseline. Extra comparison against the method from [1] or GAIL would make the results even stronger, but it is understandable that one can not compare against everything, hence I do not see this as a major issue. \\nThe authors also compare on long-horizon video prediction. Although their method outperforms the method proposed in Ha & Schmidhuber, this by no means suggests that the method is really that superior. I would argue that in terms of future video prediction that [3] provides significantly better results than the World Models, nevertheless, at least one more baseline would have supported the authors claims much better. \\nOn the Model-Based planning, the authors outperform SeCTAR model on the BabyAI tasks and the Wheeled locomotion. This result is indeed interesting and shows that the method is viable for planning. However, given that similar result has been shown in [1] regarding the planning framework it is unclear how novel the result is. \\n\\nIn conclusion, the paper presents a generative model for training a model-based approach with an auxiliary loss. The results look promising, however, stronger baselines and better ablation of how do different components actually contribute would make the paper significantly stronger than it is at the moment. Below are a few further comments on some specific parts of the paper.\", \"a_few_comments_regarding_relevant_literature\": \"Both in the introduction and during the main text the authors have not cited [1] which I think is a very closely related method. In this work similarly, a generative model of future segments is learned using a variational framework. 
In addition, the MPC procedure that the authors present in this paper is not novel, but has already been proposed and tried in [1] - optimizing over the latent variables rather than the actions directly, and there have been named Latent Action Priors. \\n\\nThe data gathering process is also not a new idea and using the error in a dynamics model for exploration is a well-known method, usually referred to as curiosity, for instance see [2] and some of the cited papers as Pathak et. al., Stadie et. al. - these all should be at least cited in section 3.2.2 as well not only in the background section regarding different topics.\", \"on_the_auxiliary_loss\": \"The authors claim that they train the auxiliary loss using Variational Inference, yet they drop the KL term, which is \\\"kinda\\\" an important feature of VI. Auxiliary losses are well understood that often help in RL, hence there is no need to over-conceptualize the idea of adding the extra term log p(b|z) as a VI and then doing something else. It would be much more clear and concise just to introduce it as an extra term and motivate it without referring to the VI framework, which the authors do not use for it (they still use it for the main generative model). The only way that this would have been acceptable if the experiment section contained experiments with the full VI objective as equation (6) suggest and without the sharing of the variational priors and posteriors and compared them against what they have done in the current version of the manuscript. \\n\\n\\nA minor mistake seems to be that equation (5) and (7) have double counted log p(z_t|h_t-1) since they are written as an explicit term as well as they appear in the KL(q(z_t|..)|p(z_t|h_t-1)). \\n\\n\\n\\n[1] Prediction and Control with Temporal Segment Models [Nikhil Mishra, Pieter Abbeel, Igor Mordatch, 2017]\\n\\n[2] Large-Scale Study of Curiosity-Driven Learning [Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, Alexei A. Efros, 2018]\\n\\n[3] Action-Conditional Video Prediction using Deep Networks in Atari Games [Junhyuk Oh, Xiaoxiao Guo, Honglak Lee, Richard Lewis, Satinder Singh, 2015]\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
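The rebuttals in the record above repeatedly describe the model's planning procedure: sample candidate latent-variable sequences from the learned prior p(z_t | h_{t-1}), roll each one out through the dynamics model, and execute the action sequence whose imagined trajectory is most rewarding. The following is a minimal sketch of that random-shooting MPC loop. All names (`prior_sample`, `transition`, `policy`, `reward`) and the toy numpy stand-ins are illustrative assumptions, not the authors' code; with the real system each stand-in would be the corresponding trained network from the sequential latent-variable model.

```python
# Hypothetical sketch of MPC over latent variables, as the rebuttal describes:
# plan by searching over z-sequences sampled from the learned prior rather
# than over raw actions, so imagined trajectories stay near the model's
# training distribution.
import numpy as np

rng = np.random.default_rng(0)
H, Z, A = 8, 4, 2              # hidden, latent, action sizes (arbitrary toys)
HORIZON, N_CANDIDATES = 10, 64

def prior_sample(h):           # stand-in for p(z_t | h_{t-1})
    return rng.normal(loc=0.1 * h[:Z], scale=1.0)

def transition(h, z, a):       # stand-in for the deterministic RNN update
    return np.tanh(h + 0.1 * np.concatenate([z, a, np.zeros(H - Z - A)]))

def policy(h, z):              # stand-in for the action decoder pi(a | h, z)
    return np.tanh(h[:A] + z[:A])

def reward(h):                 # stand-in for the task's (differentiable) cost
    return -np.sum(h ** 2)

def latent_mpc(h0):
    """Random-shooting MPC: keep the first action of the best rollout."""
    best_first_action, best_return = None, -np.inf
    for _ in range(N_CANDIDATES):
        h, total, first_action = h0.copy(), 0.0, None
        for t in range(HORIZON):
            z = prior_sample(h)        # search over latents, not raw actions
            a = policy(h, z)
            if t == 0:
                first_action = a
            h = transition(h, z, a)
            total += reward(h)
        if total > best_return:
            best_return, best_first_action = total, first_action
    return best_first_action           # MPC: execute, observe, then replan

print(latent_mpc(rng.normal(size=H)))
```

Planning over z and decoding actions through the learned policy is what the rebuttal argues keeps the planner "within regions where the model is valid"; the toy functions here only fix shapes so the loop runs end to end.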
|
BJgQB20qFQ | Learning to Progressively Plan | [
"Xinyun Chen",
"Yuandong Tian"
] | For problem solving, making reactive decisions based on problem description is fast but inaccurate, while search-based planning using heuristics gives better solutions but could be exponentially slow. In this paper, we propose a new approach that improves an existing solution by iteratively picking and rewriting its local components until convergence. The rewriting policy employs a neural network trained with reinforcement learning. We evaluate our approach in two domains: job scheduling and expression simplification. Compared to common effective heuristics, baseline deep models and search algorithms, our approach efficiently gives solutions with higher quality. | [
"problem solving",
"reactive decisions",
"problem description",
"inaccurate",
"heuristics",
"better solutions",
"slow",
"new",
"solution",
"local components"
] | https://openreview.net/pdf?id=BJgQB20qFQ | https://openreview.net/forum?id=BJgQB20qFQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkgK3YNll4",
"ryxL6cPq0m",
"BJQjn8P9Am",
"S1xk3HP5AQ",
"Bye7t4LXCQ",
"BylgMrxITQ",
"SJeXthVBa7",
"rkgCNnNB6Q",
"H1g_BV_c2m",
"r1eZ-Hw8nX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_comment",
"official_comment",
"official_review",
"official_review"
],
"note_created": [
1544731057406,
1543301822201,
1543300787085,
1543300519090,
1542837371119,
1541960968451,
1541913722824,
1541913653811,
1541207103732,
1540941049354
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1526/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1526/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1526/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1526/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1526/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1526/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper provides a new approach for progressive planning on discrete state and action spaces. The authors use LSTM architectures to iteratively select and improve local segments of an existing plan. They formulate the rewriting task as a reinforcement learning problem where the action space is the application of a set of possible rewriting rules. These models are then evaluated on a simulated job scheduling dataset and Halide expression simplification. This is an interesting paper dealing with an important problem. The proposed solution based on combining several existing pieces is novel. On the negative side, the reviewers thought the writing could be improved, and the main ideas are not explained clearly. Furthermore, the experimental evaluation is weak.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Borderline paper\"}",
"{\"title\": \"Revision\", \"comment\": [\"We thank all reviewers for their comments! We have revised the paper with the following major changes to incorporate the comments:\", \"We have added an ablation study to demonstrate that our approach is not heavily biased by the initial solutions.\", \"For expression simplification, we have added an evaluation on Z3, a high-performance theorem prover developed by Microsoft Research. Since its simplifier would invoke a solver to rewrite the expressions, the simplification steps performed by this solver may not be included in the Halide ruleset, which makes it a strong baseline to compare with.\"]}",
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your response! We have tried the self-critical approach, and we find that it does not considerably affect the performance, thus we did not include the results in our revision.\"}",
"{\"title\": \"Response and clarification\", \"comment\": [\"Thank you for your review! About your questions and comments:\", \"1. In our job scheduling problem setup:\", \"Each state is the current job schedule (Figure 2 (a) on page 5).\", \"The state transition is performed by a rewriting step, which switches the scheduling order of 2 jobs (Section 3.1 on page 3 and Figure 2 (a) on page 5).\", \"The model is trained with Advantage Actor-Critic algorithm (Section 4.4 on page 6).\", \"It is episodic, and we consider a rewriting process that starts from an initial schedule for a given set of jobs (earliest job first in our evaluation), and ends at the timestep when the neural network considers that the current schedule cannot be further improved (i.e., the score predictor (SP) computes a negative value), as an episode (Section 5.1.2 on page 7, and more details are in Appendix D on page 14).\", \"We use epsilon-greedy exploration strategy, and the details are in Appendix D on page 14-15. We can move it to the main body if it is clearer.\", \"2. In our evaluation:\", \"The criticism about unfair comparison against DeepRM is incorrect. We evaluate on not only the same tasks as in DeepRM, but also on more complicated settings with larger number of resource types (Section 5.1.1 on page 7). The point is to show our proposed approach is able to deal with more complicated settings than prior works, achieving stronger performance.\", \"The reasons why we choose to evaluate on Halide repository are two-fold: (1) Halide is widely used at scale in multiple products of Google (e.g., YouTube) and Adobe Photoshop. Its expression simplifier has been carefully tuned with manually-designed heuristics, thus provides a strong baseline for comparison. (2) The format of Halide expressions is general and covers a large part of common operations, including standard arithmetic operators (+, -, *, /, %), boolean operators (&&, ||, !), comparison operators (<, <=, !=), min/max operators, etc (Section 3.2 on page 3, and more details are in Appendix A on page 11). Notice that this is a more comprehensive operator set than previous works on finding equivalent expressions, which consider only boolean expressions [1] [2] or a subset of algorithmic operations [1]. More related work can be found in Section 2 on page 2. Thus, the effectiveness of our approach in the Halide domain provides a good indication that it could also generalize to other expression simplification problems. We have revised Section 3.2 (page 3) to make this point clearer.\", \"Besides the Halide rewriter, we have added an evaluation on Z3, which is a high-performance theorem prover developed by Microsoft Research. Note that Z3 simplifier works by traversing each sub-formula in the input expression and invoking the solver to find a simpler equivalent one to replace it, thus the simplification steps performed by this solver may not be included in the Halide ruleset, which makes it a strong baseline to compare with. The results and discussion can be found in Section 5.2 (page 8-9).\", \"3. The concrete rewriting process varies with different initial solutions, e.g., a nearly optimal solution would require a much fewer rewriting steps; however, the quality of the final solution does not heavily depend on the initial one. 
We have added an ablation study about this point in Appendix E (page 15) in our revision, and we address the main confusion below:\", \"For job scheduling, we note that the initial schedules are constructed using the earliest-job-first policy, because this schedule is intuitive, easy to compute with a negligible overhead, while is much less effective than the optimal solution, as reported in the paper (Table 1 on page 7). Thus, our evaluation demonstrates that our approach dramatically improves the quality of an initial highly ineffective solution, and results in better ones than computed using other baselines. In our ablation study with initial schedules of different average slow down, the results demonstrate that our neural rewriter model consistently achieves a better performance than baseline approaches. This demonstrates that our rewriting model is robust to the quality of the initial solution.\", \"For expression simplification, since our evaluation metric is the average reduction of expressions (Section 5.2.1 on page 8), the results demonstrate that our approach significantly reduces the complexity of the initial expressions (Table 3 on page 8). Note that the initial expressions could be quite complicated, e.g., with a parse tree of 100 nodes (Table 2 on page 8).\", \"4. These definitions are in Section 4.1 (page 4).\", \"[1] Miltiadis Allamanis, Pankajan Chanthirasegaran, Pushmeet Kohli, Charles Sutton, Learning Continuous Semantic Representations of Symbolic Expressions, ICML 2017.\", \"[2] Richard Evans, David Saxton, David Amos, Pushmeet Kohli, Edward Grefenstette, Can Neural Networks Understand Logical Entailment? ICLR 2018.\"]}",
"{\"title\": \"self-critical approach\", \"comment\": \"https://arxiv.org/pdf/1612.00563.pdf\"}",
"{\"title\": \"Interesting read but unclear contribution/implications\", \"review\": \"This paper addresses the challenges of prediction-based, progressive planning on discrete state and action spaces. Their proposed method applies existing DAG-LSTM/Tree-LSTM architectures to iteratively refine local sections in the existing plan that could be improved until convergence. These models are then evaluated on a simulated job scheduling dataset and Halide expression simplification.\\n\\nWhile this paper presents an interesting approach to the above two problems, its presentation and overall contribution was pretty unclear to me. A few points:\\n\\n1. Ambiguous model setup: It may have been more advantageous to cut a large portion of Section 3 (Problem Setup), where the authors provide an extensive definition of an optimization problem, in favor of providing more critical details about the model setup. For example, how exactly should we view the job scheduling problem from an RL perspective? How are the state transitions characterized, how is the network actually trained (REINFORCE? something else?), is it episodic (if so, what constitutes an episode?), what is the exploration strategy, etc. It was hard for me to contextualize what exactly was going on\\n\\n2. Weak experimental section: The authors mention that they compare their Neural Rewriter against DeepRM using a simplified problem setup from the original baseline. I wonder how their method would have fared against a task that was comparable in difficulty to the original method -- this doesn\\u2019t feel like a fair comparison. And although their expression simplification results were nice, I would also like to know why the authors chose to evaluate their method on the Halide repository specifically. Since they do not compare their method against any other baselines, it\\u2019s hard for me to gauge the significance of their results.\\n\\n3. Variance across initializations: It would have been nice to see an experiment on how various initializations of schedules/expressions affect the policies learned. I would imagine that poor initializations could lead to poor results, but it would be interesting if the Neural Rewriter was robust to the quality of the initial policy. Since this is not addressed in the paper, it is difficult to gauge whether the authors\\u2019 model performed well due to an unfair advantage. Additionally, how much computational overhead is there to providing these (reasonable) initial policies as opposed to learning from scratch?\\n\\n4. Unclear notation: As previously addressed by other reviewers, key definitions such as the predicted score SP(.) are missing from the text.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Response and revision plan\", \"comment\": \"Thank you for your review and suggestions! We are working on more tasks and ablation study, and will include the results once we finish the experiments. About your questions, we are not sure about what you mean by \\u201cself-critical approach\\u201d, could you elaborate it?\"}",
"{\"title\": \"Response and clarification\", \"comment\": \"Thank you for your review! About your comments:\\n\\n- \\u201cHowever, when starting with an initial solution there is always the danger of the final solution being overly biased by the initial solution.\\u201d\\n\\nThe motivation of our approach is to improve from existing solutions, and we agree that the rewriting process varies with different initial solutions, e.g., a nearly optimal solution would require a much fewer rewriting steps. However, we note that the quality of the final solution does not heavily depend on the initial one. In fact, we did experiments on job scheduling tasks with random initial schedules, and we found that the neural rewriter model achieves similar performance to the results starting with the earliest job first schedules, as reported in the paper (Section 5.1 on page 7). This demonstrates that our rewriting model is not overly biased by the initial solution. We will perform an ablation study about this point in our revision.\\n\\n- \\u201cSince they are simply using previously proposed LSTM variants, I do not see much contribution here.\\u201d\\n\\nWe do not claim that each individual component of our model is novel; instead, our key contribution is the overall framework (Fig. 1) that learns a neural network to progressively improve existing planning in the discrete space, and training the framework with reinforcement learning.\\n\\n- \\u201cMore importantly, details are missing such as the definitions of SP and RS from section 4.4.\\u201d\\n\\nThese definitions are in Section 4.1 (page 4).\"}",
"{\"title\": \"Seems novel, but the evaluations could use some work\", \"review\": \"\", \"summary\": \"Search-based policies are stronger than a reactive policies, but the resulting time consumption can be exponential. Existing solutions include designing a plan from scratch given a complete problem specification or performing iterative rewriting of the plan, though the latter approach has only been explored in problems where the action and state spaces are continuous.\\n\\nIn this work, the authors propose a novel study into the application of iterative rewriting planning schemes in discrete spaces and evaluate their approach on two tasks: job scheduling and expression simplification. They formulate the rewriting task as a reinforcement learning problem where the action space is the application of a set of possible rewriting rules to modify the discrete state. \\n\\nThe approach is broken down into two steps. In the first step, a particular partition of the discrete state space is selected as needing to be changed by a score predictor. Following this step, a rule selector chooses which action to perform to modify this state space accordingly.\\n\\nIn the job scheduling task, the partition of the state space corresponds to a single job who\\u2019s scheduled time must be changed. the application of a rule to rewrite the state involves switching the order of any two jobs to be run. In the expression simplification task, a state to be rewritten corresponds to a subtree in the expression parse tree that can be converted to another expression.\\n\\nTo train, the authors define a mixed loss with two component:\\n1. A mean squared error term for training the score predictor that minimizes the difference between the benefit of the executed action and the predicted score given to that node\\n2. An advantage actor critic method for training the rule selector that uses the difference between the benefit of the executed action and the predicted score given to that node as a reward to evaluate the action sampled from the rule set\", \"pros\": \"-The approach seems to be relatively novel and the authors address an important problem.\\n-The authors don\\u2019t make their approach more complicated than it needs to be\", \"cons\": \"\", \"notation\": \"The notation could be a lot clearer. The variable names used in the tasks should be directly mapped to those defined in the theory in Section 2. It wasn\\u2019t clear that the state s_t in the job scheduling problem was defined as the set of all nodes g_j and their edges and that the {\\\\hat g_t} corresponds to a single node. Also, there are some key details that have been relegated to the appendix that should be in the main body of the paper (e.g., how inference was performed)\", \"evaluation\": \"The authors perform this evaluation on two automatically generated synthetic datasets. It\\u2019s not clear that the method would generalize to real data. Why not try the approach on a task such as grammar error correction? Additionally, I would have liked to see more analysis of the method. Apart from showing the comparison of the method with several baselines, the authors don\\u2019t provide many insights into how their method works. How data hungry is the method? Seeing as the data is synthetically generated, how effective would the method be with 10X of the training data, or 10% of it? Were any other loss functions attempted for training the model, or did the authors only try the Advantage Actor Critic? What about a self-critical approach? 
I'd like to see more analysis of how varying different components of the method such as the rule selector and score predictor affect performance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An application of tree and DAG LSTMs with important details missing from the draft\", \"review\": \"The paper proposes to plan by taking an initial plan and improving it. The authors claim that 1) this will achieve results faster than planning from scratch and 2) will lead to better results than using quick, local heuristics. However, when starting with an initial solution there is always the danger of the final solution being overly biased by the initial solution. The authors do not address this adequately. They show how to apply tree and DAG-based LSTMs to job scheduling and shortening expressions. Since they are simply using previously proposed LSTM variants, I do not see much contribution here. The experiments show some gains on randomly generated datasets. More importantly, details are missing such as the definitions of SP and RS from section 4.4.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
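The mixed training objective described in the record above (an MSE term for the score predictor plus an advantage actor-critic term for the rule selector, both built around the gap between the realized benefit of a rewrite and the predicted score) can be summarized in a short sketch. This is a minimal illustration under stated assumptions, not the authors' implementation; the function name, tensor shapes, and the weighting coefficient `alpha` are hypothetical.

```python
import torch.nn.functional as F

def rewriter_loss(pred_scores, rule_log_probs, benefits, alpha=1.0):
    """Hypothetical sketch of the neural-rewriter mixed loss.

    pred_scores:    score-predictor outputs for the selected nodes, shape (B,)
    rule_log_probs: log-probabilities of the sampled rewriting rules, shape (B,)
    benefits:       observed improvements from executing each rewrite, shape (B,)
    """
    # (1) MSE term: regress the score predictor onto the observed benefit.
    score_loss = F.mse_loss(pred_scores, benefits)
    # (2) Advantage actor-critic term: the predicted score acts as a baseline,
    # so the advantage is the gap between realized benefit and predicted score.
    advantage = (benefits - pred_scores).detach()
    rule_loss = -(advantage * rule_log_probs).mean()
    return score_loss + alpha * rule_loss
```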
|
HygQBn0cYm | Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic | [
"Mikael Henaff",
"Alfredo Canziani",
"Yann LeCun"
] | Learning a policy using only observational data is challenging because the distribution of states it induces at execution time may differ from the distribution observed during training. In this work, we propose to train a policy while explicitly penalizing the mismatch between these two distributions over a fixed time horizon. We do this by using a learned model of the environment dynamics which is unrolled for multiple time steps, and training a policy network to minimize a differentiable cost over this rolled-out trajectory. This cost contains two terms: a policy cost which represents the objective the policy seeks to optimize, and an uncertainty cost which represents its divergence from the states it is trained on. We propose to measure this second cost by using the uncertainty of the dynamics model about its own predictions, using recent ideas from uncertainty estimation for deep networks. We evaluate our approach using a large-scale observational dataset of driving behavior recorded from traffic cameras, and show that we are able to learn effective driving policies from purely observational data, with no environment interaction. | [
"model-based reinforcement learning",
"stochastic video prediction",
"autonomous driving"
] | https://openreview.net/pdf?id=HygQBn0cYm | https://openreview.net/forum?id=HygQBn0cYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJe-g33-gE",
"HJlDT4Zq0m",
"Hkgnb3au07",
"SJgb6s6uR7",
"HJl8WopdRQ",
"Ske7vSaORQ",
"H1g7krauRQ",
"SyeU6MK03Q",
"ByeJ-YFahQ",
"HkeR-X8w2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544829929413,
1543275710528,
1543195652173,
1543195577174,
1543195390302,
1543193947271,
1543193819306,
1541472958311,
1541409014530,
1541001990050
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1525/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1525/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1525/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1525/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1525/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"Reviewers are in a consensus and recommended to accept after engaging with the authors. Please take reviewers' comments into consideration to improve your submission for the camera ready.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Paper decision\"}",
"{\"title\": \"Additional Updates.\", \"comment\": \"We have made a few additional formatting changes, please see the updated version.\"}",
"{\"title\": \"Thank you for the review (2/2)\", \"comment\": \">\\u201cI wonder if there is a way for a neural network to \\\"hack\\\" the uncertainty cost. I suppose that the proposed approach is an approximation to some entropy term, and it would be informative to see how exactly.\\u201d\\n\\u201cOverall, the separation of data uncertainty/risk vs model uncertainty is not done. This indicates that heterskedastic environments are candidats where the method can fail, and this limitation needs to be discussed or pointed out.\\u201d\\n\\n\\nIn Section 2.3 we perform a similar uncertainty decomposition as Depeweg et. al (for covariance matrices, rather than scalar variances), and show that the uncertainty cost is obtained using the trace of the covariance matrix reflecting the epistemic uncertainty. Note also that the covariance matrix corresponding to the aleatoric uncertainty (second term in Equation 2) will change depending on the inputs. This allows our approach to handle heteroscedastic environments, where the aleatoric uncertainty will vary for different inputs. Intuitively, the latent variables in the VAE capture aleatoric uncertainty, whereas the change across different dropout masks reflects epistemic uncertainty. \\n\\n>\\u201dThe objective function of the forward model is only given in the appendix. I think it needs to be moved to the main text, especially because the sum-of-squares term indicates a homoskedastic Gaussian for a likelihood. This has implications for the uncertainty estimates (see point above).\\u201d\\n>\\u201cFurther, the authors did not observe a benefit from using a stochastic forward model. Especially, if the prior instead of the approximate posterior is used. My point would be that, depending on the exact grapical model and the way the sampling is done to train the policy, it is actually mathematically *right* to sample from the prior. This is also how it is described in the last equation of section 2.\\u201d\\n\\nWe have moved the objective function to the main text. We have also proposed a modification to the VAE posterior distribution which now leads to a significant gain in performance of the stochastic model over the deterministic model, which is described in Section 2.1. (please also see top comment). \\n\\nPlease let us know if these address your concerns, and if you would consider updating your score if so.\"}",
"{\"title\": \"Thank you for the review (1/2)\", \"comment\": \"Thank you for the constructive suggestions. We have made several updates to the paper based on them, and we provide answers to specific points below.\\n\\n>\\u201cThe work by Depeweg et al addresses quite the same question as the authors of this work, but with a broader scope (i.e. not limited to traffic) but very much the same machinery. There are some important theoretical insights in this work and the connection to this submission should be drawn. In particular, the proposed method needs to be either compared to this work or it needs to be clarified why it is not applicable.\\u201d\\n\\n\\nThank you for pointing us to the work of Depeweg et al. [3]. It is indeed relevant and we have updated the paper to relate our work to theirs. The main difference between our approaches is that they use the framework of Bayesian neural networks trained with alpha-divergence minimization, whereas we use variational autoencoders trained with Dropout Variational Inference (VI). \\n\\nBoth approaches aim to model aleatoric and epistemic uncertainties, but do so in different ways. Alpha-BNNs place a factorized Gaussian prior both over latent variables and network weights, and learn the parameters of these distributions by minimizing an energy function whose minimizer corresponds to a local minimum of alpha-divergences. \\nVariational Autoencoders also represent latent variables as factorized Gaussians, whereas Dropout VI corresponds to placing a prior over network weights - specifically, a mixture of two Gaussians with small variances, with the mean of one component fixed at zero. As described in the new Section 2.3 which we have added, our approach corresponds to defining a variational distribution which is the composition of these two distributions. \\n\\nAn advantage of using alpha-divergences over variational inference (pointed out in [1, 2, 3]) is that VI can underestimate model uncertainty by fitting to a local mode of the exact posterior, whereas alpha-divergence minimization can give better coverage of the distribution. However, there are also challenges associated with alpha-BNNs. One which was pointed out by [2] is that they require significant changes in existing deep learning models and code bases, and the functions they optimize are less intuitively interpretable by non-experts. We investigated the approach described in [2], which proposes a dropout-based reparameterization of the alpha-divergence objective, which seems to offer a balance between compatibility with existing frameworks and better-calibrated uncertainty estimates. However, this requires performing several stochastic passes through the forward model at training time in order to calculate the proposed loss. In our setup, doing 10 stochastic passes (the number used in the paper) required reducing the minibatch size from 64 to 8 to fit in memory, which significantly slowed down training. We did not obtain any reasonable results after 5 days of training on GPU, whereas with our current approach the model finishes training after 4 days. Since the minibatch size with the dropout-based alpha-divergence objective is 8x smaller than our original minibatch size, a rough estimate would place training time for the forward model at around 30 days. We note that the work of Depeweg et al. is applied to much lower-dimensional problems (2-30 dimensions, <100,000 transitions), whereas our setting involves high-dimensional images and a larger dataset (around 2 million transitions). 
We believe that investigating alternate methods for uncertainty estimation in our setting would be interesting, but to do so thoroughly is best left for future work.\", \"references\": \"[1] \\u201cLearning and Policy Search In Stochastic Dynamical Systems with Bayesian Neural Networks\\u201d, Depeweg S, Hernandez-Lobato H, Doshi-Velez F, Udluft S. ICLR 2017. \\n[2] \\u201cDropout Inference in Bayesian Neural Networks with Alpha-Divergences\\u201d, Yingzhen Li and Yarin Gal. ICML 2017. \\n[3] \\u201cDecomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-Sensitive Learning\\u201d Depeweg et al, ICML 2018.\"}",
"{\"title\": \"Thank you for the review.\", \"comment\": \"Thank you for the helpful review. We have made updates to the paper, please see our main comment and our answer below.\\n\\n>\\u201dThe paper did not seem to reach a conclusion on why stochastic forward model does not yield a clear improvement over the deterministic model. This may be due to the limitation of the dataset or the prediction horizon which seems to be 2 second.\\u201d \\n\\n\\nWe have proposed a modification to the VAE posterior distribution for the stochastic model which now leads to a significant gain in performance over the deterministic model (please see top comment, and Section 2.1). Note also that we show, at least qualitatively, that the stochastic model without this modification does not respond very well to the input actions, even though it produces reasonable predictions. This is likely the reason for the suboptimal performance. The stochastic model with the modified posterior responds better, and also translates into better performance. \\n\\n\\n>\\\"The dataset is only 45 minutes which captured by a camera looking down a small section of the road. So the policies learned might only do lane following and occasionally doing collision avoidance. I would encourage the authors to look into more diverse dataset. See the paper DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents, CVPR 2017.\\\"\\n\\nThank you for the pointer to this work. It seems very relevant and will be worth investigating in future work. We would like to note that two interesting features of our dataset are that it consists of real human driver behavior, and involves dense traffic. We believe this addresses an underexplored setting: as noted in the related work section, most other works deal with the problem of doing lane following or avoiding static obstacles in visually rich environments. Our setting instead focuses on visually simplified environments, but with complex and difficult to predict behavior by other drivers. The longer-term goal is to learn policies in visually rich settings with complicated driver behavior, and we believe solving this dataset is a step towards that goal. Also note that for autonomous driving, the success rate needs to be extremely high, and although our approach performs well in comparison to others, it is still far from 100%. We therefore believe that to obtain satisfactory performance, policies will have to learn fairly complex policies, and this dataset can serve as a useful testing environment. \\n\\nPlease let us know if these address your concerns, and if you would consider updating your score if so.\"}",
"{\"title\": \"Thank you for the review\", \"comment\": \"Thank you for the helpful suggestions, we have updated the paper. Please see our answers to specific points below:\\n\\n>\\u201cUnclear motivation to penalize prediction uncertainty to make the predicted states stay in the training data\\u201d\\n\\u201cMore theoretical explanation is needed or perhaps some intuition.\\u201d\\n\\nAs requested, we have added a section (Section 2.3 and Appendix B), where we show that our approach can be seen as training a Bayesian neural net with latent variables using variational inference. We also perform a similar uncertainty decomposition as Depeweg et. al [1], and show that the uncertainty cost is obtained using the trace of the covariance matrix reflecting the epistemic uncertainty.\\n\\n>\\u201cWithout any addition of data, the variance reduction, which results by penalizing the high variance during training, might indicate over-fitting to the current training data. As the penalty forces the model to predict states only in the training dataset, it is unclear how this shows better test-time performance. The output of the policy network will simply be biased towards the training set as a result of the uncertainty cost. \\n\\nWe would like to clarify that the uncertainty penalty does not necessarily bias the policy network towards the training trajectories, but rather toward the states where the forward model has low uncertainty. This includes the training trajectories, but it also includes regions of the state space where the forward model generalizes well, which were not seen during training. The prediction results, which are obtained by feeding initial states from the testing set which the forward model was not trained on, still look reasonable, which indicates that the forward model is able to generalize fairly well. Note also that we evaluate the trained policy network on trajectories from the testing set, which the forward model was not trained on. \\n\\n>\\u201dAlso, in some cases references to existing work that includes real robotic systems is out of context at minimum. So yes there are similarities between this paper and existing works on learning control for robotics systems using imitation learning, model based control and uncertainty aware cost function. However there is a profound difference in terms of working in simulation and working with a real system for which model and environment uncertainty is a very big issue. There are different challenges in working with a real uncertain system which you will have to actuate, and working with set of images for making predictions in simulation.\\u201d \\n\\n\\nWe agree that there is a big difference between our setup and a real robotic system. We felt it fair to include references to other work in imitation learning and model-based control, even if the setups are quite different. We are happy to update our related work section with additional references, if you have any suggestions. \\n\\nPlease let us know if these address your concerns, and if you would consider updating your score if so. \\n\\n[1] \\u201cDecomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-Sensitive Learning\\u201d Depeweg et al, ICML 2018.\"}",
"{\"title\": \"Updated Paper\", \"comment\": \"We would like to thank all the reviewers for their helpful feedback. We have made several updates to the paper which we hope address the reviewers\\u2019 concerns, which we describe below. We give more detailed responses to the individual comments.\\n\\nBoth Reviewer 2 and Reviewer 3 mentioned the fact that the stochastic model did not yield an improvement over the deterministic model as a limitation. In the updated version of the paper we propose a modified posterior distribution for the VAE, which gives improved performance relative to both the standard stochastic model and the deterministic model. This modification is simple to implement, and involves sampling the latent variable from the prior, rather than posterior, a fraction of the time during training. In addition to improving the performance of the trained policies (in terms of success and distance travelled), upon visual inspection (shown at the URL) this modification makes the forward model more responsive to the input actions, which we believe is the reason for the standard stochastic model\\u2019s suboptimal performance. This modification can be seen as \\u201cdropping out\\u201d the latent code with some probability, and although simple, we are not aware of it being proposed elsewhere in the literature. \\n\\n\\nBoth Reviewer 1 and Reviewer 2 mentioned they would like to see more theoretical explanation. We have added a new section (Section 2.3 and Appendix B) which shows that our approach can be viewed as training a Bayesian neural network with latent variables using variational inference. We show that the loss function which we optimize is in fact an approximation to the negative evidence lower bound obtained by using a variational distribution which is the composition of a diagonal Gaussian (over latent variables) and the dropout approximating distribution (over model parameters) described in [1]. We also perform a decomposition of the covariance of the distribution over predictions induced by this approximate posterior (similar to [2]) into two covariance matrices, which represent the aleatoric and epistemic uncertainties. Our uncertainty penalty is in fact penalizing the trace of the matrix representing the epistemic uncertainty. \\n\\nWe have moved certain parts of the main text to the appendix to make room for this new section and stay within the page limit. We have also rerun the experiments with different seeds to obtain more robust performance estimates, and made some changes in our training procedure/hyperparameters (these are detailed in the Appendix, and will be available in our code release). Note that the MPUR results are now somewhat higher than in the first version, although their relative performance is similar (i.e, deterministic and stochastic are still similar to each other, although the stochastic model with our modified posterior is better than both). \\n\\n[1]: \\\"Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning\\\", Gal and Ghahramani. ICML 2016. \\n\\n[2]: \\u201cDecomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-Sensitive Learning\\u201d Depeweg et al, ICML 2018.\"}",
"{\"title\": \"An Ok paper that combines dropout methods with learning policy using observational data.\", \"review\": \"- Does the paper present substantively new ideas or explore an under explored or highly novel question?\\n\\nSomewhat, the paper combines two popular existing approaches (Imitation Learning, Model Based Control and Uncertainty Quantification using Dropout). The novelty is in combining pre-existing ideas. \\n\\n- Does the results substantively advance the state of the art? \\n\\nNo, the compared methods are not state-of-the-art.\\n\\n- Will a substantial fraction of the ICLR attendees be interested in reading this paper? \\n\\nYes. I think that the topics of this paper would be very interesting to ICLR attendees. \\n\\n-Quality: \\n\\nUnclear motivation to penalize prediction uncertainty to make the predicted states stay in the training data. Also, in some cases references to existing work that includes real robotic systems is out of context at minimum. So yes there are similarities between this paper and existing works on learning control for robotics systems using imitation learning, model based control and uncertainty aware cost function. However there is a profound difference in terms of working in simulation and working with a real system for which model and environment uncertainty is a very big issue. There are different challenges in working with a real uncertain system which you will have to actuate, and working with set of images for making predictions in simulation. \\n\\n \\n\\n-Clarity: \\n\\nEasy to read. Experimental evaluation is clearly presented. \\n\\n-Originality: \\n\\nSimilar uncertainty penalty was used in other paper (Kahn et al. 2017). Therefore the originality is in some sense reduced.\\n\\n- Would I send this paper to one of my colleagues to read?\\n\\nYes I would definitely send this paper to my colleagues. \\n\\n- General Comment: \\n\\nDropout can be used to represent the uncertainty/covariance of the neural network model. The epistemic uncertainty, coming from the lack of data, can be gained through Monte Carlo sampling of the dropout-masked model during prediction. However, this type of uncertainty can only decrease by adding more explored data to current data set. Without any addition of data, the variance reduction, which results by penalizing the high variance during training, might indicate over-fitting to the current training data. As the penalty forces the model to predict states only in the training dataset, it is unclear how this shows better test-time performance. The output of the policy network will simply be biased towards the training set as a result of the uncertainty cost. More theoretical explanation is needed or perhaps some intuition. \\n\\nThis observation is also related to the fact that the model based controller used is essentially a risk sensitive controller.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review for \\\"Model-Predictive Policy Learning with Uncertainty Regularization for Driving in Dense Traffic\\\"\", \"review\": \"The paper addresses the difficulty of covariate shift in model-based reinforcement learning. Here, the distribution over trajectories during is significantly different for the behaviour or data-collecting policy and the target or optimised policy. As a mean to address this, the authors propose to add an uncertainty term to the cost, which is realised by the trace of the covariance of the outputs of a MC dropout forward model. The method is applied to driving in dense traffic, where even single wrong actions can be catastrophic.\\n\\nI want to stress that the paper was a pleasure to read. It was extraordinarily straightfoward to follow, because the text was well aligned with the necessary equations.\\n\\nThe introduction and related work seem complete to me, with two exceptions:\\n\\n- Depeweg, S., Hernandez-Lobato, J. M., Doshi-Velez, F., & Udluft, S. \\n (2018, July). Decomposition of Uncertainty in Bayesian Deep Learning for Efficient and Risk-sensitive Learning. In *International Conference on Machine Learning* (pp. 1192-1201).\\n- Thomas, Philip S. *Safe reinforcement learning*. Diss. University of Massachusetts Libraries, 2015.\\n\\nThe work by Depeweg et al addresses quite the same question as the authors of this work, but with a broader scope (i.e. not limited to traffic) but very much the same machinery. There are some important theoretical insights in this work and the connection to this submission should be drawn. In particular, the proposed method needs to be either compared to this work or it needs to be clarified why it is not applicable.\\n\\nThe latter appears to be of less significance in this context, but I found robust offline policy evaluation underrepresented in the related work. \\n\\nI wonder if there is a way for a neural network to \\\"hack\\\" the uncertainty cost. I suppose that the proposed approach is an approximation to some entropy term, and it would be informative to see how exactly. \\n\\nThe approach shown by Eq 1 appears to be an adhoc way of estimating whether the uncertainty resulting from an action is due to the data or the model. What happens if this approach is not taken?\\n\\nThe objective function of the forward model is only given in the appendix. I think it needs to be moved to the main text, especially because the sum-of-squares term indicates a homoskedastic Gaussian for a likelihood. This has implications for the uncertainty estimates (see point above).\\n\\nOverall, the separation of data uncertainty/risk vs model uncertainty is not done. This indicates that heterskedastic environments are candidats where the method can fail, and this limitation needs to be discussed or pointed out.\\n\\nFurther, the authors did not observe a benefit from using a stochastic forward model. Especially, if the prior instead of the approximate posterior is used. My point would be that, depending on the exact grapical model and the way the sampling is done to train the policy, it is actually mathematically *right* to sample from the prior. This is also how it is described in the last equation of section 2. \\n\\n## Summary\\n\\nOverall, I liked the paper and the way it was written. However, there are some shortcomings, such as the comparison to the work by Depeweg et al, which does a very similar thing. Also, justifying the used heuristics as approximations to a principled quantity would help. 
It appears that the question why and how stochastic forward models should be used requires further investigation.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"a good model-based RL attempt for autonomous driving, however, dataset is very limited\", \"review\": \"Pros:\\nThe paper formulates the driving policy problem as a model-based RL problem. Most related work on driving policy has been traditional robotics planning methods such as RRT or model-free RL such as policy gradient methods.\\n\\nThe policy is learned through unrolling a learned model of the environment dynamics over multiple time steps, and training a policy network to minimize a differentiable cost over this rolled-out trajectory.\\n\\nThe cost combine the objective the policy seeks to optimize (proximity to other cars) and an uncertainty cost representing the divergence from the states it is trained on.\", \"cons\": \"The model based RL formulation is pretty standard except that the paper has a additional model uncertainty cost.\\n\\nRealistically, the output of driving policy should be planning decision, i.e. the waypoints instead of steering angles and acceleration / deceleration commands. There does not seem to be a need to solve the control problem using learning since PID and iLQR has solved the control problem very well. \\n\\nThe paper did not seem to reach a conclusion on why stochastic forward model does not yield a clear improvement over the deterministic model. This may be due to the limitation of the dataset or the prediction horizon which seems to be 2 second. \\n\\nThe dataset is only 45 minutes which captured by a camera looking down a small section of the road. So the policies learned might only do lane following and occasionally doing collision avoidance. I would encourage the authors to look into more diverse dataset. See the paper DESIRE: Distant Future Prediction in Dynamic Scenes with Interacting Agents, CVPR 2017.\\n\\nOverall, the paper makes an interesting contribution: formulate the driving policy problem as a model-based RL problem. The techniques used are pretty standard. There are some insights in the experimental section. However, due to the limitation of the dataset, it is not clear how much the results can generalize to complex settings such as nudging around other cars, cutting in, pedestrian crossing, etc.\", \"response_to_rebuttal\": \"It is good to know that the authors have a new modified VAE posterior distribution for the stochastic model which can achieve significant gain over the deterministic model. Is this empirical and specific to this dataset? Without knowing the details, it is not clear how general this new stochastic model is.\\n\\nI agree that it is worthwhile to test the model using the 45 minute dataset. However, I still believe the dataset is very limiting and it is not clear how much the experimental results can apply to other large realistic datasets.\\n\\nMy rating stays the same.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
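The core mechanism debated throughout this record, penalizing the trace of the covariance of forward-model predictions across MC-dropout masks as a proxy for epistemic uncertainty, can be sketched as follows. This is a hedged illustration rather than the authors' released code; the forward-model interface, the number of stochastic passes, and the batch reduction are assumptions.

```python
import torch

def epistemic_uncertainty_cost(forward_model, state, action, n_samples=10):
    """Sketch: penalize the trace of the prediction covariance across
    dropout masks (MC dropout), an estimate of epistemic uncertainty."""
    forward_model.train()  # keep dropout active when sampling predictions
    preds = torch.stack([forward_model(state, action) for _ in range(n_samples)])
    # preds: (n_samples, batch, dim). The per-dimension variance across
    # dropout masks sums to the trace of the empirical covariance matrix.
    return preds.var(dim=0).sum(dim=-1).mean()
```

Per the author responses above, the companion change for the stochastic model is to sample the VAE latent from the prior rather than the posterior for a fraction of training steps, which they report makes the forward model more responsive to actions.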
|
ByeMB3Act7 | Learning to Screen for Fast Softmax Inference on Large Vocabulary Neural Networks | [
"Patrick Chen",
"Si Si",
"Sanjiv Kumar",
"Yang Li",
"Cho-Jui Hsieh"
] | Neural language models have been widely used in various NLP tasks, including machine translation, next word prediction and conversational agents. However, it is challenging to deploy these models on mobile devices due to their slow prediction speed, where the bottleneck is to compute top candidates in the softmax layer. In this paper, we introduce a novel softmax layer approximation algorithm by exploiting the clustering structure of context vectors. Our algorithm uses a light-weight screening model to predict a much smaller set of candidate words based on the given context, and then conducts an exact softmax only within that subset. Training such a procedure end-to-end is challenging as traditional clustering methods are discrete and non-differentiable, and thus unable to be used with back-propagation in the training process. Using the Gumbel softmax, we are able to train the screening model end-to-end on the training set to exploit data distribution. The algorithm achieves an order of magnitude faster inference than the original softmax layer for predicting top-k words in various tasks such as beam search in machine translation or next words prediction. For example, for machine translation task on German to English dataset with around 25K vocabulary, we can achieve 20.4 times speed up with 98.9% precision@1 and 99.3% precision@5 with the original softmax layer prediction, while state-of-the-art (Zhang et al., 2018) only achieves 6.7x speedup with 98.7% precision@1 and 98.1% precision@5 for the same task. | [
"fast inference",
"softmax computation",
"natural language processing"
] | https://openreview.net/pdf?id=ByeMB3Act7 | https://openreview.net/forum?id=ByeMB3Act7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SylGxNNegN",
"HkeASf47AX",
"HyetbGEm0m",
"SJl5TbNQAQ",
"Hyx4lWEXRX",
"ByxDcLsR2X",
"BylZTkmc3X",
"HyehHMVFnm",
"B1eYqyEt3m",
"S1gF35P_nQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"comment",
"official_review",
"comment",
"official_review"
],
"note_created": [
1544729578036,
1542828613551,
1542828544623,
1542828481941,
1542828268302,
1541482127236,
1541185464857,
1541124676202,
1541123985461,
1541073585040
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1524/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1524/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1524/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1524/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1524/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1524/AnonReviewer3"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1524/AnonReviewer1"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1524/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper introduces an approach for improving the scalability of neural network models with large output spaces, where naive soft-max inference scales linearly with the vocabulary size. The proposed approach is based on a clustering step combined with per-cluster, smaller soft-maxes. It retains differentiability with the Gumbel softmax trick. The experimental results are impressive. There are some minor flaws, however there's consensus among the reviewers the paper should be published.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We want to thank the reviewer for the useful suggestions!!\\n\\n-- about larger vocabulary experiment:\\n\\nWe have added an experiment with a much larger dataset --- Wikitext103 with vocabulary size of 80k. The result of prediction time speedup versus accuracy is shown in Figure 9 in the new version. As you can see from the figure, we can achieve more than 15x speedup with accuracy of 99.8%. In addition, in Table 3, we show the result on DE-EN, an NMT task with vocabulary size around 25k. We summarize the vocabulary size of all the datasets in Table 1. \\n\\n-- about result on speed-up of L2S over full softmax with respect to the vocabulary size\\n\\nWe have included an experiment of prediction time speed-up versus vocabulary size on PTB dataset. Results are summarized in Figure 8. In this figure, we could observe that our method can achieve higher speed-up with larger vocabulary size.\\n\\n-- about clustering parameters and label sets\\n\\nWe have added Table 7 to show the label sets learned from our method. We observe some interesting clusters---some words with similar meanings are in the same cluster.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for your comments and that you enjoyed reading the paper!\", \"responses_to_questions\": \"-- about larger vocabulary experiment:\\n\\nWe have added an experiment with a much larger dataset --- Wikitext103 with vocabulary size to be 80k. The result of prediction time speedup versus accuracy is shown in Figure 9 in the new version. As you can see from the figure, we can achieve more than 15x speedup with accuracy of 99.8%. In addition, in Table 3, we show the result on DE-EN, an NMT task with vocabulary size around 25k. We summarize the vocabulary size of all the datasets in Table 1. \\n\\n-- about perplexity and probability estimation\\n\\nThis is a great point. We agree that our method tends to generate better approximation of ranking of the words instead of probability of that word. The main reason for the reduced gain for PPL is that to compute PPL, after performing our method (L2S), we need an additional step to assign a probability to words that are not located in the predicted cluster, although this is a rare case (less than 5% chance). There are several potential ways to model this rare case and we chose to use SVD to approximate probability (same as svd softmax [Kyuhong Shim et.al in NIPS 2017]); however, SVD itself has lots of computational overhead. Therefore prediction time speedup is less pronounced for PPL than for the accuracy results. \\n\\nOn the other hand, we get reasonable probability estimation when the word is within the predicted cluster (usually they are top-k predicted words). Therefore we still achieve very good (>10x) speed up in NMT tasks with beam search (see Table 3). \\n\\n\\n-- about qualitative analysis \\n\\nWe have added two qualitative analyses in the new version. Firstly, we show the words from different clusters learned from our method in Table 7, and observe some interesting structures--some words with similar meanings are in the same cluster. Secondly, examples of translation pairs by our method compared with original softmax results are shown in Table 8.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We are thankful for the constructive comments!!\\n\\n-- about word clusters are not continuous and training end to end \\n\\nThere are several ways to make word clusters continuous such as using soft clustering, however, these strategies on the other hand will increase the prediction time. Even though word clusters representation is not continuous in L2S, our model can still train end-to-end in the sense that the clustering stage and the label selection are trained jointly with the gumbel technique. Our algorithm back-propagates the gradient to the clustering weights to update both clustering partition and label sets simultaneously. \\n\\n-- about speeding up training time\\n\\nWe focus on speeding up prediction in this work. We could potentially use the same idea--clustering+learning candidate words, to speed up training as well since we could narrow down the update on a few candidate words instead of the entire vocabulary when updating softmax\\u2019s weight matrix. This is certainly an interesting future direction to work on.\\n\\n-- qualitative examples\\n\\nWe have added two qualitative analyses in the new version. Firstly, we show the words from different clusters learned from our method in Table 7, and observe some interesting structures---some words with similar meanings are in the same cluster. Secondly, examples of translation pairs by our method compared with full softmax results are shown in Table 8.\"}",
"{\"title\": \"Summary of Changes\", \"comment\": \"Hi all,\\n\\nWe appreciate the constructive feedback from the reviewers and the community. And thanks for the patience for waiting our responses. We have made the following main changes to the current version to make our paper more complete.\\n\\n1. For NMT task, we apply our method on a new dataset EN-VE translation with vocabulary size of 22749. Results are summarized in Table 3. For this task, our method can achieve 20x speedup with BLEU score of 25.27, and the original softmax\\u2019s BLEU is 25.35.\\n\\n2. Besides additional NMT experiment, we perform our algorithm on a larger vocabulary dataset Wikitext-103, a language model dataset with 80k vocabularies. Results are summarized in Figure 9. For this task, our method can achieve more than 15x speedup with P@1 at 99.8%.\\n\\n3. We also include an experiment on prediction time speed-up versus vocabulary size on PTB dataset. In this experiment, we vary the vocabulary size and show the speedup and accuracy. Results are summarized in Figure 8, showing that our method achieves higher speed-up with larger vocabulary size. \\n\\n4. We add two qualitative analysis in the appendix. Firstly, we show the words from different clusters learned from our method in Table 7, and observe some interesting structures--some words with similar meanings are in the same cluster. Secondly, examples of translation pairs by our method compared with full softmax results are shown in Table 8. Please look through those interesting examples!\"}",
"{\"title\": \"a nice method accelerating softmax for prediction in large vocabulary at test time\", \"review\": \"This paper proposes a novel method to speedup softmax computation at test time. Their approach is to partition the large vocabulary set into several discrete clusters, select the cluster first, and then do a small scale exact softmax in the selected cluster. Training is done by utilizing the Gumbel softmax trick.\", \"pros\": \"1. The method provides another way that allows the model to learn an adaptive clustering of vocabulary. And the whole model is made differentiable by the Gumbel softmax trick. \\n2. The experimental results, in terms of precision, is quite strong. The proposed method is significantly better than baseline methods, which is a really exciting thing to see. \\n3. The paper is written clearly and the method is simple and easily understandable.\", \"cons\": \"1. I\\u2019d be really expecting to see how the model will perform if it is trained from scratch in NMT tasks. And I have reasons for this. Since the model is proposed for large vocabularies, the vocabulary of PTB (10K) is by no terms large. However, the vocabulary size in NMT could easily reach 30K, which would be a more suitable testbed for showing the advantage of the proposed method. \\n2. Apart from the nice precision results, the performance margin in terms of perplexity seems not as big as that of precision. And according to earlier discussions in the thread, the author confirmed that they are comparing the precision w.r.t. original softmax, not the true next words. This could raise a possible assumption that the model doesn\\u2019t really get the probabilities correct, but somehow only fits on the rank of the words that was predicted by the original softmax. Maybe that is related to the loss? However, I believe sorting this problem out is kind of beyond the scope of this paper. \\n3. In another scenario, I think adding some qualitative analysis could better present the work. For example, visualize the words that got clustered into the same cluster, etc. \\n\\nIn general, I am satisfied with the content and enjoys reading the paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"Hi there,\\n\\nThanks for your interest and useful clarifying questions !!!\\n\\n1) You are right. We didn't train the context vector jointly with approximation. Our problem setup is given a pre-trained NLM, how to speed up the inference operations.\\n\\n2) Firstly we need to point out after training the cluster label set (c_t) and clustering weights (v_t), we will just select the cluster by choosing the one with maximal z(h) in eq(2). That is to say, in the inference time, given a hidden state h, the corresponding selected cluster is fixed. Apparently there is no guarantee the ground truth token will be in the selected cluster, but our training objective function tries to make the predicted candidate set contains the ground truth token. \\n\\n\\n3) Sorry for the confusion, I think we will reconsider how to rephrase the scenario. We are not trying to approximate \\\"next-word-prediction accuracy\\\" but to approximate \\\"next-work-prediction operation\\\". \\n\\nSince in LM and NMT, next-word-prediction is done by taking the maximal inner product between context vector h and Softmax layer W, we refer \\\"next-word-prediction\\\" as the operation to do so. \\nWe didn't consider the true \\\"next-word-prediction accuracy\\\" because even for taking the original maximal inner product between softmax W and h, it will only give us around 26% accuracy for P@1 when compared to ground truth token. To increase this accuracy actually means to improve the performance of the model over original W. For this work, we focus on making a given pre-trained LM/NMT faster in prediction time but not making a pre-trained LM/NMT having higher accuracy. Therefore, we try to approximate softmax W (the real operation to generate next word) instead of matching ground-truth label by clustering-based thinking. \\n\\n\\n4) In section 4.2 and corresponding table 2, we do try to add \\\"%\\\" there. We report the BLEU scores which is within .5% difference when compared to the original BLEU score. For example, in NMT: DE-EN Beam=5 row in table 2 we get 13.4 times speed-up with BLEU score drops from 30.33 to 30.19. If we consider the ratio (30.33 - 30.19) / (30.33) which is around 0.0046 ~= 0.46%. Whereas, \\\".5\\\" BLEU score would be (0.5)/30.33 ~= 1.65% which is 3 times more loss. \\n\\n\\n5) Sorry for the confusion again, we will again consider rephrase the notations. We will check again all notations in particular the comma issue you mentioned. Here, we briefly reply to the dimensions of the notations you mentioned. Let's assume there is |V| vocabularies in the model. \\n\\nFor c_t, it in the shape of |v| x 1 vector and we are trying to make entry either 0 or 1 as a pointer of the inclusion of certain. c_{ts} is s-entry of the c_t vector, and thus is binary in the sense. c_{p_bar{h_i},s} refers to the s-entry of the c_{p_bar{h_i}} vector, p_bar{h_i} defined in the paper is the 1-hot entry of the Straight-Through gumbel, which can be thought as the sampled cluster. Thus c_{p_bar{h_i}} is a vector of |v| x 1 shape and c_{p_bar{h_i},s} refers to s-entry and yes it's binary eventually.\", \"title\": \"Replied to question \\\"A few questions\\\"\"}",
"{\"title\": \"Fast and accurate approximation to softmax, but more in-depth analysis results would be required\", \"review\": \"This paper presents an approximation to the softmax function to reduce the computational cost at inference time and the proposed approach is evaluated on language modeling and machine translation tasks. The main idea of the proposed approach is to pick a subset of the most probable outputs on which exact softmax is performed to sample top-k targets. The proposed method, namely Learning to Screen (L2S), learns jointly context vector clustering and candidate subsets in an end-to-end fashion, so that it enables to achieve competitive performance.\\n\\nThe authors carried out NMT experiments over the vocabulary size of 25K. It would be interesting if the authors provide a result on speed-up of L2S over full softmax with respect to the vocabulary size. Also, the performance of L2S on larger vocabularies such as 80K or 100K needs to be discussed.\\n\\nAny quantitative examples regarding the clustering parameters and label sets would be helpful.\\nL2S is designed to learn to screen a few words, but no example of the screening part is provided in the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"comment\": \"1) I want to confirm that you used fully pre-trained language/NMT models before learning the softmax approximation. That is, the context vectors where given and not jointly learned with the approximation?\\n\\n2) For the perplexity calculation, are you selecting the correct candidate set which contains the ground truth token, and then just using the low-rank approximation for all other words? Is the probability of a given word reliant on the probability of selecting that candidate set? \\n\\n3) When defining precision@, you say 'This measures the accuracy of next-word-prediction in LM and NMT'. However, I don't think that is quite correct. You seem to be measuring the overlap between the top words matching between the true softmax and the approximation and not if the next word actually matches the ground truth next word? So even if the true softmax got the word incorrect, you are still trying to match the true softmax. \\n\\n4) In section 4.2, you say '.5% BLEU'. I don't think you want the '%' there?\\n\\n5) I'm having some difficulty with the notation. Can you confirm that c_t, c_{ts} and c_{p(h_i), s} are all binary variables? (also the comma before the subscript s doesn't seem to be used consistently) \\n\\nThanks for your time. I enjoyed this paper.\", \"title\": \"A few questions\"}",
"{\"title\": \"I like the pape\", \"review\": [\"The paper proposes a way to speed up softmax at test time, especially when top-k words are needed. The idea is clustering inputs so that we need only to pick up words from a learn cluster corresponding to the input. The experimental results show that the model looses a little bit accuracy in return of much faster inference at test time.\", \"pros:\", \"the paper is well written.\", \"the idea is simple but BRILLIANT.\", \"the used techniques are good (especially to learn word clusters).\", \"the experimental results (speed up softmax at test time) are impressive.\", \"cons:\", \"the model is not end-to-end because word clusters are not continuous. But it not an important factor.\", \"it can only speed up softmax at test time. I guess users are more interesting in speeding up at both test and training time.\", \"it would be better if the authors show some clusters for both input examples and corresponding word clusters.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
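The screening procedure at the center of this record, differentiable cluster selection via a Straight-Through Gumbel-softmax followed by an exact softmax restricted to the learned candidate set, admits a compact sketch. All names and shapes below are illustrative assumptions; the actual L2S implementation may differ.

```python
import torch
import torch.nn.functional as F

def screened_top_k(h, V, W, cluster_words, k=5, tau=1.0, training=False):
    """Sketch of L2S-style screening for a batch of context vectors h.

    h: (B, d) context vectors; V: (C, d) clustering weights;
    W: (|vocab|, d) full softmax weights; cluster_words: list of C
    LongTensors with the learned candidate word ids per cluster.
    """
    logits = h @ V.t()  # cluster scores z(h), shape (B, C)
    if training:
        # Straight-Through Gumbel-softmax keeps cluster selection differentiable.
        clusters = F.gumbel_softmax(logits, tau=tau, hard=True).argmax(dim=-1)
    else:
        # At inference the cluster with maximal z(h) is taken directly.
        clusters = logits.argmax(dim=-1)
    top_words = []
    for i, c in enumerate(clusters.tolist()):
        cand = cluster_words[c]            # screened candidate subset
        scores = W[cand] @ h[i]            # exact logits on the subset only
        top = scores.topk(min(k, cand.numel())).indices
        top_words.append(cand[top])
    return top_words
```

The speed-up comes from replacing the |vocab|-by-d product with a C-by-d product plus an exact softmax over a small screened subset, which is where the reported 10-20x gains originate.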
|
r1efr3C9Ym | Interpolation-Prediction Networks for Irregularly Sampled Time Series | [
"Satya Narayan Shukla",
"Benjamin Marlin"
] | In this paper, we present a new deep learning architecture for addressing the problem of supervised learning with sparse and irregularly sampled multivariate time series. The architecture is based on the use of a semi-parametric interpolation network followed by the application of a prediction network. The interpolation network allows for information to be shared across multiple dimensions of a multivariate time series during the interpolation stage, while any standard deep learning model can be used for the prediction network. This work is motivated by the analysis of physiological time series data in electronic health records, which are sparse, irregularly sampled, and multivariate. We investigate the performance of this architecture on both classification and regression tasks, showing that our approach outperforms a range of baseline and recently proposed models.
| [
"irregular sampling",
"multivariate time series",
"supervised learning",
"interpolation",
"missing data"
] | https://openreview.net/pdf?id=r1efr3C9Ym | https://openreview.net/forum?id=r1efr3C9Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkgLIyhelN",
"rJljJo--1N",
"H1gDJqZW1N",
"Hyx9K4r2C7",
"S1eQkYGnRX",
"rye9fvOkRQ",
"HygepnVy07",
"HJe2vXVJCQ",
"rkefjESCp7",
"HklR8g0p6Q",
"B1eEd30jpQ",
"rJgm6MxxT7",
"H1lAMFaq2Q",
"SJgvOS-92m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544761165787,
1543736034724,
1543735775059,
1543423106484,
1543411931505,
1542584082299,
1542569143717,
1542566756363,
1542505625814,
1542475862468,
1542347884251,
1541567162595,
1541228822098,
1541178735504
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1523/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1523/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1523/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1523/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1523/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1523/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1523/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1523/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"After much discussion, all reviewers agree that this paper should be accepted. Congratulations!!\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-Review for Interpolation-Predictions paper\"}",
"{\"title\": \"Thank You!\", \"comment\": \"We thank you for your helpful reviews once again. We will add the comparison results with MGP-RNN in the final paper.\"}",
"{\"title\": \"Updated the paper with your suggestions.\", \"comment\": \"Thanks again for the helpful comments. I have updated the paper with your suggestions.\", \"q\": \"Continuous predictions after observing all the data...\", \"a\": \"What I meant is if we want to make continuous rolling predictions with the same model (i.e. same input size), then the amount of lookback window should be kept fixed. For example, if there is a model trained with an input window of 24 hrs with given inducing point 1 per hour, then the input to the prediction network would be (batch_size, 24, features). With such a model, we can make rolling predictions where the fixed lookback window would be 24 hr.\", \"references\": \"[1] Harutyunyan, et al. \\\"Multitask learning and benchmarking with clinical time series data.\\\" arXiv preprint arXiv:1703.07771 (2017)\"}",
"{\"title\": \"Appears to outperform Futoma, et al.\", \"comment\": \"Great work on those experiments, and kudos for acting on my feedback so quickly. If you're still allowed to upload an updated manuscript, please add these results. I am leaning toward acceptance, but I need to confer with the other reviewers and the chair before I revise my score.\"}",
"{\"title\": \"Comparison with MGP-RNN\", \"comment\": \"Thanks for the quick response. We ran the experiments with MGP-RNN (Futoma's version) on our dataset. In the table below, we report the results from the 5-fold cross validation in terms of the average area under the ROC curve (AUC score) and average area under the precision-recall curve (AUPRC score). We also report the standard deviation over cross-validation folds.\\n\\nModel AUC AUPRC\\nMGP-RNN 0.847 +/- 0.007 0.377 +/- 0.017\\nGRU-HD 0.845 +/- 0.006 0.390 +/- 0.010\\nProposed 0.853 +/- 0.007 0.418 +/- 0.022\\n\\nThe proposed model results in statistically significant improvements over the baseline models (p < 0.01) with respect to both the metrics. The proposed approach also addresses some difficulties with prior approaches including the complexity of the Gaussian process interpolation layers used in Li & Marlin, 2016 and Futoma et al. 2017, and the lack of modularity in the approach of Che et al. (2018a). Our framework also introduces novel elements including the use of semi-parametric, feed-forward interpolation layers, and the decomposition of an irregularly sampled input time series into multiple distinct information channels.\", \"publishing_the_code\": \"We actually plan to release all the code (including the dataset) right after the decision deadline.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your helpful comments. We address the issues below:\", \"q\": \"Can the authors test the proposed method on logistic regression (LR) and multi-layer perceptron (MLP)?\", \"a\": \"Our proposed framework is highly flexible and can be used with any differentiable network on top of the interpolation layers. As request, we replaced the GRU prediction network with a simpler Logistic Regression network and a fully connected feed-forward network (MLP). We report the results below:\\n\\nModel UWave(Accuracy) MIMIC-III (AUC on mortality classification task)\\nIpN + LR . 0.878 0.78 +/- 0.010\\nIpN + MLP 0.877 0.77 +/- 0.010\\nIpN + GRU 0.942 0.85 +/- 0.007 \\n*IpN: Proposed Interpolation Network\\nIncreasing the size of the hidden layer in MLP leads to overparameterization and thus reduces the performance.\"}",
"{\"title\": \"Other miscellaneous comments\", \"comment\": \"Great response overall. In the interest of conserving space and time, you can assume that I am satisfied with any answer I don't directly reply to!\\n\\n> The first interpolation layer performs...\\n\\nNice succinct explanation with plain language. I would recommend putting THAT in your paper. ;)\\n\\n> For the time series missing entirely, the first interpolation layer just outputs the global mean for that channel...\\n\\nCan you further explain \\\"global\\\" here? Is it global across all data points in the data set, across measurements at that particular time, etc.? Also, this is not immediately obvious from the description in Section 3.2.1. The summations in Equation (1) are over only those measurements in an individual record, not across all records, so where does the global mean get computed and how does it come into play? Is substituting the global mean just an ad hoc post-processing step?\\n\\n> Continuous predictions after observing all the data:...\\n\\n> Missing information in Table 2...our baseline methods convert the problem of irregular sampling into a missing data problem...\\n\\nMakes sense. Once again, put this plain language explanation in your paper!\\n\\n> The experiments in [1, 2] use a reduced number of cohorts...\\n\\nI don't want to belabor this too much: ultimately, it's your choice! However, the community is embracing the notion of shared benchmarks to help accelerate progress and promote reproducibility [2]. Thus, electing NOT to use an existing benchmark requires a strong justification. I think requiring a different patient population for a specific problem, e.g., for studying respiratory distress in pediatrics or looking at the efficacy of different treatments for sepsis patients, is a sound justification. However, I'm not fully satisfied with \\\"reduced cohorts\\\" as an explanation in a methods paper not concerned with a specific clinical question. Your manuscript does not anywhere indicate that the proposed approach sensitive to data set size (it should work equally well for ~30K vs. ~50K stays). What is more, many clinicians would tell you that combining pediatric and adult populations is undesirable, and a common critique of ML research that uses large MIMIC cohorts for predicting mortality is that they mix multiple causes.\\n\\n> Continuous predictions after observing all the data...\\n\\nWhy is a fixed length look-back window required? Does this imply that your approach can only use a limited history (which would reduce the benefit of using an RNN)?\"}",
"{\"title\": \"Regarding comparison vs. multivariate GP-GRU\", \"comment\": \"Thanks for the switch response!\\n\\n> We omit the GP-GRU model from MIMIC-III experiments...\\n\\nThis is a reasonable response. I happen to think that the burden of reproducibility is on the previous publication, i.e., if they expect subsequent research to compare against their method in a given setting (here, multivariate time series), then they should provide a publicly accessible, easy-to-use, reliable implementation. Thus, I think it would be unfair to punish you for not comparing your approach to a multivariate version of the GP-GRU. However, we need to add two caveats:\\n\\n(1) Nonetheless, the absence of a multivariate GP-GRU baseline weakens your paper. I don't think that necessarily requires us to reject your submission from ICLR, but it certainly limits your contribution. There is an open question about how you'd compare in the multivariate regime and plenty of reason to think the GP-GRU would capture multivariate correlations better.\\n\\n(2) You're not completely off the hook! Futoma's version of the MGP-RNN, which you cite and which is closely related to the Li and Marlin univariate baseline, IS available on github: https://github.com/jfutoma/MGP-RNN. Further, it looks pretty usable, and Futoma himself is pretty responsive and would be willing to assist you in performing a comparison.\\n\\nReturning to my original philosophical point, regarding this footnote from your paper:\\n\\n> We plan to share all data extraction and model code on Github.\\n\\nWe all know that for every ten papers that include a statement like that in a submission, like 1-2 actually publish their code after acceptance. How close are you to ACTUALLY publishing your code? No need to provide a link or anything (we don't want to violate double blind) -- I'm just looking for a forthright reply!\"}",
"{\"title\": \"Continued..\", \"comment\": \"\", \"q\": \"FYI: the Che, et al., 2016, paper on missing value...\", \"a\": \"We have updated the citation. Thanks for reminding.\"}",
"{\"title\": \"Thank you for your insightful and detailed comments.\", \"comment\": \"Thank you for your insightful and detailed comments. We address your concerns below:\", \"q\": \"How does the proposed approach handle time series that are missing entirely...\", \"a\": \"For the time series missing entirely, the first interpolation layer just outputs the global mean for that channel, but the second interpolation layer performs a more meaningful interpolation using the learned correlations from other channels.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your comments. We address the issues below:\", \"q\": \"the model is sharing many characteristics with (referenced) published methods ....\", \"a\": \"The proposed model is designed to allow the flexible selection of prediction networks, which is characteristic that it shares with the prior GP-based methods. Here, the primary contribution of our approach is a highly significant reduction in the compute time relative to using GP-based methods, which makes the method much more suitable for practical use. In addition, our approach to decomposing the continuous time data to directly expose smooth trends and transient components is absent from prior GP-based methods. Relative to prior neural network based approaches (the GRU-* family), our method focuses on enabling global interpolation and direct use of continuous time data with no ad-hoc decisions about how to assign values to discrete time intervals. These are significant differences relative to the prior approaches, particularly in terms of the interpolation process. Indeed, these differences between global learned interpolation and local imputation directly account for the improved performance of our approach over the GRU-* family of methods.\"}",
"{\"title\": \"Refreshingly simple approach to irregular data but limited novelty, flawed writing, uninspiring results\", \"review\": [\"I have mixed feelings about this submission, and as such, I look forward to discussing it with both the authors and my fellow reviewers. In short, I like the simplicity of the idea, but I am uncertain about the degree to which it satisfies ICLR's novelty criterion (\\\"present substantively new ideas or explore an underexplored or highly novel question\\\"); I do feel confident that some ICLR readers would (perhaps unfairly) describe this approach as \\\"obvious.\\\" The paper's presentation suffers, and it fails to communicate essential details clearly. Finally, for folks familiar with healthcare data and MIMIC-III specifically, the results are underwhelming: yes, the proposed approach beats (the authors' own implementations of) baselines, but it underperforms other published results on the MIMIC-III 48-hour mortality task ([1][2] report AUCs of 0.87 or higher). As such, I am assigning the paper a \\\"weak accept\\\" to communicate my ambivalence and reserve the right to adjust it up or down after discussion.\", \"SUMMARY\", \"This paper proposes an \\\"interpolation layer\\\" to resample irregularly sampled time series before feeding them into a neural net architecture. The interpolation layer consists of parametric kernels, e.g., radial basis functions, configured to estimate the values of input time series at reference time points based on univariate temporal and then multivariate correlations. The outputs include smooth and transient interpolated values (controlled by kernel bandwidth) and counts (referred to as intensity) at each reference point. As far as I understand, this model can be trained end-to-end. The paper also proposes a simple strategy for combatting overfitting (add an autoencoder and reconstruction error term to the objective in combination with a heuristic in which some points are masked as inputs and must be interpolated from non-masked points). In experiments on two data sets (UWaveGesture and a medical data set) and two tasks (classification and regression) this approach outperforms the main competing approaches [3][4][5][6] in most contexts.\", \"Below I provide a list of strengths, weaknesses, and general questions or feedback.\", \"STRENGTHS\", \"I applaud the simplicity of the idea: this much simpler framework leverages many of the intuitions behind the GP adapter framework (GP-GRU) [4][5] with comparable performance and appears to train orders of magnitude faster (caveat: on one data set and task)\", \"It likewise outperforms both commonly used preprocessing (GRU-F) [2][3] and the much more complicated neural net architecture (GRU-HD) from [6] (across two datasets and tasks)\", \"The simplicity of this approach probably lends itself to additional customization and innovation\", \"The literature review seems quite thorough and does an especially nice job of covering recent work on RNNs for multivariate time series and irregular sampling or missing values\", \"The experiments are thorough and well-designed overall. The authors use two data sets and two tasks (classification and regression). More data sets and tasks is always nice, but even two is pretty laudable (many authors might settle on just one given the experimental and computational effort required for these experiments). 
They include and beat or outperform two baselines that can justifiably be called state-of-the-art (GP-GRU and GRU-HD).\", \"I think a relatively safe takeaway is that for irregularly sampled data, this approach is preferable to both heuristic preprocessing and more complex models. That seems like a not insignificant finding in empirical machine learning for messy time series data.\", \"WEAKNESSES\", \"Section 3 is possibly the most critical section (since it describes the contribution) but is hard to follow: I don't envy the authors the task of explaining a variable with two superscripts and three subscripts (Equation 1), but it IS their paper, so it's on them to do it. See feedback section for other examples.\", \"Although I consider the related work well done, I can't help but wonder if there isn't older work on RBFs, etc., that might have been missed (I mostly want to encourage the authors to look once more and then come back and tell me I'm wrong).\", \"The MIMIC-III experiments omit the GP-GRU model, which weakens the results by leaving the reader to imagine how it might compare (I would expect it to outperform the proposed approach by an even wider margin than it did for UWave).\", \"I am sympathetic to the idea of fixing certain architectural choices, e.g., layers and units in the GRU and number of inducing points, across all models because it (a) gives the appearance of a \\\"fair comparison\\\" and (b) reduces burden of effort, but I do not agree that it yields a truly fair comparison. The GRU-* model performance on UWave is suspiciously bad, suggesting severe overfitting and the possibility that the models are overparameterized. It leaves the reader wondering if the architectural choices happen to be optimal for the proposed model only (whether by accident or design). A truly fair comparison requires independently tuning hyperparameters for each model.\", \"Although the proposed approach outperforms baselines in these experiments, the overall results are underwhelming in the wider context of recent work using MIMIC-III. Multiple publications have reported AUCs of 0.87 [1][2] or higher for 48-hour risk of mortality (it is difficult to compare the LOS results since different papers use different units). Of course, the experiments use different cohorts and variables so they're not directly comparable, but it nonetheless diminishes the potential impact of the results presented here.\", \"FEEDBACK AND QUESTIONS\", \"I had to read 3.2.1 multiple times to understand the relationships between the different \\\"layers\\\" in the interpolator, and I'm still not sure what the relationship is between the smooth and transient kernels or exactly how the intensity values are estimated (are they just windowed counts or weighted sums?).\", \"I'm also not 100% clear on (a) which parameters (if any) in the interpolator are optimized during end-to-end learning and which are just fixed or tuned as hyperparameters.
This should be stated clearly and even better, I'd recommend writing down the gradient update rules for the interpolator parameters (you can put them in the appendix).\", \"Since the model uses global structure for interpolation and requires pre-specifying the number of inducing points, could it be used to make continuous predictions (and how?), e.g., forecast mortality at each hour?\", \"On a related note, if the number of inducing points is pre-specified, can the model be applied to sequences of different length?\", \"How does performance depend on choice of number of inducing points?\", \"How does the proposed approach handle time series that are missing entirely, e.g., if no pH values are measured?\", \"What does Table 3 in the appendix mean by \\\"missingness?\\\" Given that the paper is concerned with irregular sampling (not missing data), I would expect statistics on sampling rates, not missingness...\", \"Why derive your own MIMIC-III subset and tasks rather than use one of several pre-existing benchmarks (both of which include more variables and tasks) [1][2]?\", \"FYI: the Che, et al., 2016, paper on missing values [6] has been published in JBIO, so you should cite that version.\", \"REFERENCES\", \"[1] Purushotham, et al. \\\"Benchmark of Deep Learning Models on Large Healthcare MIMIC Datasets.\\\" arXiv preprint arXiv:1710.08531 (2017)\", \"[2] Harutyunyan, et al. \\\"Multitask learning and benchmarking with clinical time series data.\\\" arXiv preprint arXiv:1703.07771 (2017)\", \"[3] Lipton, Kale, and Wetzel, 2016\", \"[4] Li and Marlin, 2016.\", \"[5] Futoma, et al., 2017.\", \"[6] Che, et al., 2016. <-- new JBIO 2018 version!\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Possibly, simple yet effective solution to handle time series data with missing values\", \"review\": \"Summary:\\nThe authors propose a framework for making predictions on a sparse, irregularly sampled time-series data. The proposed model consists of an interpolation module, and the prediction module, where the interpolation module models the missing values in using three outputs: smooth interpolation, non-smooth interpolation, and intensity. The authors test the proposed method on two different datasets (MIMIC-III and UWave), although only one of the datasets are multi-variate. The proposed method shows comparable training time to other GRU variants, and outperforms all baseline models for mortality prediction and length-of-stay prediction.\", \"pros\": [\"Possibly, simple yet effective solution to handle time series data with missing values.\", \"I appreciate the thorough survey of the related works.\"], \"issues\": [\"My biggest concern is that the authors spend some time to address the disadvantage of discretizing the timeline when modeling missing values (5th paragraph of section 2) and emphasize how their method does not have such limitation. But it seems that, when using the proposed method, the user still needs to pre-define evenly spaced reference points r_1, r_2, ..., r_T. So there is still this dilemma how dense you want the reference points to be. And I couldn't find the values used for the reference points in the experiments section. It's quite possible that one of the baselines can outperform the proposed method with different reference points, given that the evaluation scores overlap with each other wrt standard deviation ranges.\", \"Method description in section 3.2.1 is quite confusing. I could follow until Eq.2, but afterwards, the first interpolants (x^{21}) and the second interpolants (x^{12}) become very confusing. It would have been helpful if the authors explicitly described what the interpolation channel 'c' was before talking about the interpolants.\", \"\\\"taking into account learned correlations\\\" in page 5: I suggest changing that to \\\"taking into account learnable/trainable correlations\\\" since \\\"learned correlations\\\" gives the impression that the correlations were already learned prior to training the model.\", \"Can the authors test the proposed method on logistic regression (LR) and multi-layer perceptron (MLP)? It would be interesting to see if the proposed method improves the performance of LR and MLP.\", \"After considering the author feedback and their effort to address my concerns, I've decided to raise my rating to 6. Thank you for the hard work.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting but still immature solutions to a critical issues in EHRs\", \"review\": \"In the submitted manuscript, the authors introduce a novel deep learning architecture to solve the problem of supervised learning with sparse and irregularly sampled multivariate time series, with a specific interest in EHRs. The architecture is based on the use of a semi-parametric interpolation network followed by the application of a prediction network, and it is tested on two classification/regression tasks.\", \"the_manuscript_is_interesting_and_well_written\": \"the problem is properly located into context with extensive bibliography, the method is sufficiently detailed and the experimental comparative section is rich and supportive of the authors\\u2019 claim. However, there are a couple of issues that need to be discussed:\\n\\n\\t\\u25aa\\tthe reported performances represent only a limited improvement over the comparing baselines, indicating that the proposed model is promising but it is still immature\\n\\t\\u25aa\\tthe model is sharing many characteristics with (referenced) published methods, which the proposed algorithm is a smart combination of - thus, overall, the novelty of the introduced method is somewhat limited.\\n\\n\\n######### \\n\\nAfter considering the proposed improvements, I decided to raise my mark to 6. Thanks for the good job done!\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
B1lfHhR9tm | The Natural Language Decathlon: Multitask Learning as Question Answering | [
"Bryan McCann",
"Nitish Shirish Keskar",
"Caiming Xiong",
"Richard Socher"
] | Deep learning has improved performance on many natural language processing (NLP) tasks individually.
However, general NLP models cannot emerge within a paradigm that focuses on the particularities of a single metric, dataset, and task.
We introduce the Natural Language Decathlon (decaNLP), a challenge that spans ten tasks:
question answering, machine translation, summarization, natural language inference, sentiment analysis, semantic role labeling, relation extraction, goal-oriented dialogue, semantic parsing, and commonsense pronoun resolution.
We cast all tasks as question answering over a context.
Furthermore, we present a new multitask question answering network (MQAN) that jointly learns all tasks in decaNLP without any task-specific modules or parameters more effectively than sequence-to-sequence and reading comprehension baselines.
MQAN shows improvements in transfer learning for machine translation and named entity recognition, domain adaptation for sentiment analysis and natural language inference, and zero-shot capabilities for text classification.
We demonstrate that the MQAN's multi-pointer-generator decoder is key to this success and that performance further improves with an anti-curriculum training strategy.
Though designed for decaNLP, MQAN also achieves state of the art results on the WikiSQL semantic parsing task in the single-task setting.
We also release code for procuring and processing data, training and evaluating models, and reproducing all experiments for decaNLP. | [
"multitask learning",
"natural language processing",
"question answering",
"machine translation",
"relation extraction",
"semantic parsing",
"commensense reasoning",
"summarization",
"entailment",
"sentiment",
"dialog"
] | https://openreview.net/pdf?id=B1lfHhR9tm | https://openreview.net/forum?id=B1lfHhR9tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"oZyoQ92pScV",
"H1lNn9A1eE",
"SygF6GHkkN",
"rJliXSXJ1E",
"H1eWSvGJk4",
"BklnA4fkyE",
"BJxWFWzyy4",
"HyeuPA-JkN",
"H1lMai-1kN",
"S1equDZkJN",
"HJeuil0sRX",
"BkekRkTsR7",
"S1lkWIVjCm",
"BJgYq03KCQ",
"HJeLFjOtA7",
"Bkl_SsOYRm",
"BkgxS6EKAm",
"rJglodNF0Q",
"Bye7ZuVKCX",
"r1xjw4VK0Q",
"B1xIPcoqh7",
"Syx1siQK37",
"S1lGsQAShm",
"HJe-iuo5qm",
"S1gNRnjt5m"
],
"note_type": [
"comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment",
"comment"
],
"note_created": [
1732650479049,
1544706732206,
1543619265406,
1543611683469,
1543608121222,
1543607507661,
1543606649178,
1543605855888,
1543605178038,
1543604082037,
1543393439893,
1543389127292,
1543353846595,
1543257745292,
1543240574440,
1543240511809,
1543224631757,
1543223448120,
1543223290893,
1543222371038,
1541220958111,
1541122966701,
1540903833775,
1539123352772,
1539058891825
],
"note_signatures": [
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1522/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1522/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1522/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1522/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1522/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1522/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1522/AnonReviewer2"
],
[
"(anonymous)"
],
[
"~quan_vuong1"
]
],
"structured_content_str": [
"{\"comment\": \"I came across this review today and found this comment to be quite interesting.\\n\\n\\\"Question answering is not a unified phenomenon. **There is no such thing as \\\"general question answering\\\", not even for humans.** Consider \\\"What is 2 + 3?\\\", \\\"What's the terminal velocity of a rain drop?\\\", and \\\"What is the meaning of life?\\\" **All of these questions require very different systems to answer**, and trying to pretend they are the same doesn't help anyone solve any problems.\\\" -- ICLR 2019 Conference Paper1522 AnonReviewer2\", \"title\": \"revisiting this review in 2024\"}",
"{\"metareview\": \"This paper presents a new multi-task training and evaluation set up called the Natural Language Decathlon, and evaluates models on it. While this AC is sympathetic to any work which introduces new datasets and evaluation tasks, the reviewers agreed amongst themselves that the paper is not quite ready for publication. The main concern is that multi-task learning should show benefits of transferring representations or other model components between tasks, demonstrating better generalisation and less task-specific overfitting, but that the results in the paper do not properly show this effect. A more thorough study of which tasks \\\"interact constructively\\\" and what model changes can properly exploit this needs to be done. With this further work, the AC has no doubt that this dataset and task suite, and associated models, will be very valuable to the NLP community.\\n\\nI should note that there were some issues during the review period which lead to AC-confidential communication between AC and authors, and AC and reviewers, to be leaked to the reviewers. It was due to an OpenReview bug, and no party is at fault. Through private discussion with the interested parties, we were able to resolve this matter, and through careful examination of the discussion, I am satisfied that the reviews and final recommendations of the reviewers were properly argued for and presented in good faith.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Some great contributions, but more work needed on cross-task transfer\"}",
"{\"title\": \"No need to escalate\", \"comment\": \"Thank you for your review, Reviewer 2. As AC, I should clarify two things.\\n\\n1. References to authors by first names are vague. It would be helpful for Reviewer 2 to clarify who they mean by \\\"Luheng and Omer\\\" (one would assume Luheng He and Omer Levy) and which paper they are referring to.\\n\\n2. Reserving judgement on whether or not the reviewer is right in suggesting that the paper tries to fit too much content in its main body, it is abiding by the formatting rules of the conference. Readers and reviewers are not required to read the supplementary materials, and thus the paper must stand on its own without them when you evaluate it. However, the length of the appendix is not grounds for desk rejecting it, and it should go through the full review process.\\n\\\"\\\"\\\"\\n\\nThat doesn't seem like it was supposed to be sent to us, so I don't know what's going on with the email system. I'm disappointed that R2 got such an email, knew it was meant to be a private post from the authors, but posted it publicly anyways instead of following up privately. That felt like a betrayal, but I assume they had some positive intent that I can't see. There shouldn't be any more escalation here, and I apologize if I contributed to a negative or adversarial review environment even if unintentionally. I appreciate that you weighed in publicly, and I'll follow up in an attempt to convey my genuine respect for R2's constructive feedback while also responding to the requests in R2's better attempt at point 1 (which I found helpful).\\n\\nThanks again.\", \"comment_title\": \"Some comments\"}",
"{\"title\": \"response to AC\", \"comment\": \"Thank you for replying. I understand the point you are making. I have updated my scoring because, after re-reading the author responses, I think my rating needed to be updated to reflect their clarifications. Thanks, again, for pointing this out.\"}",
"{\"title\": \"response\", \"comment\": \"Thank you for giving further details of your concerns.\\n\\nI do not wish for you to think that my comment was compelling you to change your score, although you are welcome to do so if you think it right. \\n\\nUltimately, you should give a score which you think reflects the strength and suitability of the paper for the conference. The only thing that matters to me (and, I suspect, to the authors) is that if you are going to recommend rejection, it be for clear reasons and with sufficient detail to permit the authors to properly revise their paper for further resubmission.\\n\\nAC\"}",
"{\"title\": \"response to AC\", \"comment\": \"Thanks for commenting! I'm sorry if my review was unclear.\\n\\nI agree that it seems that most of what they moved to the appendix was not done in bad faith. I'm sorry if my review suggested that was the case. However, I did think that there were some interesting details (aside from the related work) mentioned briefly in the appendix that I would have appreciated more analysis on, such as the experiments on curriculum learning strategies or the comparison of which tasks were more similar to each other (and therefore more beneficial for multi-task learning). But, again, I don't think that was done in bad faith. I really just wanted to provide feedback for the authors on that point. I was not recommending rejection on those grounds.\\n\\nRather, I had some difficulty determining the original contributions of this paper. One problem for me was that some of the experiments were unclear (which I asked the authors about in my review), which made it difficult to understand what we can conclude from them. This was particularly the case in the transfer-learning experiment which seemed to suggest that the benefits in transfer learning were coming from the multi-task set-up directly, without showing a single-task transfer-learning baseline (which the authors responded to in their review). \\n\\nAnother question I had was about what the advantages of multi-task/single-task set-ups were. In the paper's tables, it is clear that multi-task set-ups are outperformed by the single-task set-up in nearly all tasks (as well as overall by a nontrivial margin). This goes against the main point of the paper (which seems to be that multi-task setups are beneficial), but it isn't discussed much in the running text. I was hoping the authors would clarify a bit more about why we should use multi-task set-ups if single-task set-ups typically outperform them. Because of the discrepancy in performance, I would have appreciated a more detailed discussion/analysis of the advantages of multi-task learning (this is brought up by Reviewer 3 as well).\\n\\nI appreciate the response/clarifications of the authors to many of my comments and questions. I'm not sure that I could recommend a strong acceptance, but I would probably be inclined to raise my initial rating slightly based on their clarifications.\"}",
"{\"title\": \"Review\", \"comment\": \"Dear Reviewer 2 and Authors\\n\\nThe aim of the peer review process is to ensure that the work presented at conferences is of a sufficient scientific standard. To this end, while not necessarily so, it can end up being an adversarial process: results must be examined, comparisons must be called for, assumptions must be questioned, and so on. We must not let these moments where constructive criticism, and even rejection, is called for poison the well of communication, community, and collaboration which permits our field to grow. To this end, it is extremely important that the authors of negative reviews be especially mindful of their language and of how criticism is framed.\\n\\nUpon examining Reviewer 2's initial comments, I agree with the authors that\\u2014while not explicitly insulting\\u2014the tone is unpleasant. Reviewer 2 perhaps did not intend this, and has apologized for any offense caused. The content of the review is detailed and objective enough that I am not worried about the authors being treated unfairly, when it comes to the assessment of the paper. I encourage the reviewer to consider one last time their score in view of the discussion that has been had, and other reviews, and consider whether they wish to keep it (if so, why) or adjust it. I also encourage the reviewer to consider, in future reviews, how their well-meaning and expert counsel might be perceived by authors\\u2014who may perhaps be students or otherwise fairly new to our field\\u2014if improperly presented.\\n\\nAC\"}",
"{\"title\": \"Notes\", \"comment\": \"[I believe this response will only be seen by the authors, AC and above.]\\n\\nSorry for only coming onto this now. I agree the wording of the review and title is not ideal. The reviewer states that they did not intend to come across as a rude, and has apologized. I will make a public comment to weigh in.\\n\\nIt is still my opinion that the review makes some substantive arguments, and that while its author has failed to be pleasant in a way we would all like even critical reviews to be, they have not failed to be reasonably objective in the case they make for their score.\\n\\nIf the authors wish for me to further escalate this issue, it can be done, but in the absence of such a request or further notice I will assume they accept R2's apology. I commend the authors for their calm and professional response.\\n\\nAC\"}",
"{\"title\": \"Related work sections\", \"comment\": \"I am uncomfortable with this assessment. The reviewer is right that the related work section should not be in the appendix. The reviewer is also correct that the paper should stand on its own without the supplementary material. The role of the paper is to advertise and motivate the work, describing the key experiments, and the supplementary exists to provide enough detail for, say, reproduction or further analysis. Authors should not take advantage of supplementary materials to compensate for a poorly written or organized paper, or to bypass page limits while preserving large swathes of material in overall their submission.\\n\\nFrom looking over this paper, and without prejudice to whatever faults it may or may not otherwise have, it is clear that while the authors made a mistake in moving the contents of Appendices B and C out of the main paper, it was clearly not done in bad faith. The paper is under 8 pages, and the content of these appendices could clearly be moved into the main paper with minor and workable changes while remaining under 10 pages. It seems unfair to me to strongly argue the paper is worthy of rejection on these grounds.\\n\\nPlease, could Reviewer 1 explain in further detail why they are recommending clear rejection: other than the relevant work sections, are there any sections currently in the appendix for which the paper suffers by not having them in the main body? Are there any other strong grounds for rejection? I must admit it is not clear to me from reading the review, in its current form.\\n\\nYou are welcome to discuss these issues with the authors and other reviewers, as there is about a week left before I must provide preliminary decisions.\\n\\nAC\"}",
"{\"title\": \"Response for authors\", \"comment\": \"Thank you for clarifying! I agree with several of the points you make above, and I appreciate your argument about the potential of the multi-task set-ups for transferability and compression. I hope that you are able to revise future iterations of the paper to reflect some of the strong points you've made in the comments section here.\"}",
"{\"title\": \"Thanks for the feedback!\", \"comment\": \"We\\u2019ll keep working on the gaps and make sure to provide additional analysis of task relatedness in future work.\"}",
"{\"title\": \"comments to authors\", \"comment\": \"I agreed on the point that the paper raises an interesting challenge and a potentially interesting research direction. I also agree that not any set of tasks can be combined together for the multi-task learning. More analysis and study should be done to decide which tasks can benefit each other. I am interested in seeing that authors give more study in this direction and/or narrow the gaps (as mentioned in the response) in the future work.\"}",
"{\"title\": \"A better attempt at point 1\", \"comment\": \"It seems my idiolect has a different connotation for \\\"misguided\\\" than yours, and I apologize for using a term that was offensive. What I meant was essentially, \\\"fundamentally the wrong way of thinking about the problem.\\\" If I'm not supposed to comment on the framing of a research problem in a review, I'm not sure what the point of the review is. You called my paragraphs in point 1 \\\"pontificating\\\" - I read them as arguments explaining _why_ I think this is the wrong way of thinking about the problem. I have seen no counter-arguments from you, either in the paper or in your response to my review.\\n\\nSo, some constructive criticism: provide me some arguments for why we should be thinking about \\\"question answering\\\" as a general phenomenon, or show empirically that we can get some benefit from thinking about things this way. I see no empirical results that demonstrate that this is worthwhile, in fact I see quite the opposite. While ELMo and BERT improve performance through multi-task learning, treating everything as QA and training them jointly hurts performance in almost all cases.\\n\\nYou've mentioned SOTA on WikiSQL, but recall that those results were from _single task_ performance of MQAN and have nothing to do with transfer. Performance unsurprisingly drops, quite a lot, when you try to jointly train WikiSQL with other \\\"QA\\\" tasks.\\n\\nIf you're able to show that some gains can be had for translation or classification by thinking of them as QA (more than you can get by doing the same kind of label replacement but without QA), then I will be quite happy to give your paper a positive review. Until then, this really feels like it's going down the wrong path and will give people the wrong impression about QA research. I have had conversations with senior researchers who do not take QA research seriously because of papers saying that \\\"everything is QA\\\" - this is not theoretical harm that I am talking about.\"}",
"{\"title\": \"Clarifications and thanks for your helpful feedback\", \"comment\": \"Yes, I understand that your intent was probably not rudeness. I didn't think it was my place to publicly comment on your writing compared to say, R1, who makes nearly all the same criticisms and gives an equally low score without using terms that are condescending like 'misguided'. I did not post this publicly because I am both an author and a reviewer, and I understand that I am biased towards reading this review as more negative than it should be read. That is why I posted this to ACs and Higher so that they could evaluate. For some reason, the system must have some unintuitive behavior (too me at least) that sends you an email for comments on your posts regardless of the chosen visibility. Not sure what happened since the original post is still visible to me and was not deleted. Now that you've posted it to Everyone, I might as well clarify.\\n\\nAs a reviewer, my criticism of this review has nothing to do with QA or the paper itself. Title and 1) seem to be written too combatively (perhaps to use this platform to balance out \\\"very prominent, public voice[s] advocating for it\\\"?). I don't think this is the place for that. On my view, authors submit for review to get valuable criticism. The reviewer's ultimate goal should be to tell authors how to improve their research; it should not be to combat the research agenda. The paper has problems, especially in total content and organization. As mentioned in the post to ACs and Higher, you raise important criticisms in points 2) and 3). But, I think Title and 1) deviated from what I see as a reviewer's goal too much. I just think you could have done without 1), and you should also avoid using words like 'misguided' unless you intend to up the rudeness factor by a few notches. We might just have to agree to disagree here about tone and word choice, maybe even about the goal of reviewing. \\n\\nNow switching back into author mode. \\n\\n2) You're right that the transfer learning experiments for any one task are not new results. What we find interesting here is that the multi-task model retains transferability to all of the tasks it has been trained on. In this sense, these experiments verify that the representations of the multi-task model are somehow compressing the transferable utility of ten single-task models into a single model (10x smaller).\\n\\nRegarding your point about the gap between single- and multi-task performance, I'll point you to our response to R3 so that you don't have to do redundant reading. \\n\\nRegarding switching classification labels. Yes, this is a rough approximation for something we were trying to study -- whether the model could adapt to new, but related kinds of questions and adapt its output space. Certainly this experimental design has some problems, but we do think it demonstrates the more general capacity of the model to switch output spaces based on the question because the model must realize that even though the context is the same, it must use different output labels based on different questions.\\n\\n3) No objections here. Organizing all this information into a reasonable order is tough, and clearly one big take-away from this reviewing process is to break things down into more conference-sized chunks rather than cram everything into appendices. 
Definitely don't put related works in an appendix -- it is disrespectful even if the intentions were good (more space to expand on it all).\\n\\nFinal paragraph) A lot of additional valuable feedback here. This gives a good sense of how we might restructure and support claims with new experiments. Very much appreciated.\\n\\nOverall, thanks for the discussion. Even though I disagreed with your reviewing style for 1), I think you make really good points in the remainder of your review. Thanks for offering so much of your time.\"}",
"{\"title\": \"Response\", \"comment\": \"I apologize that my review came off to you as rude. That was not my intent. I knew that my review was quite negative, and I read it several times trying to make sure the criticisms were of the ideas in the paper, not the people who did the work. I apparently did not do a good enough job of that, and I am sorry. When I read it again, even having seen your response, I still have a hard time finding ad hominem attacks (and even you have to say that it's \\\"in disguise\\\"). I can imagine that when it's your work it feels more ad hominem than it was intended.\\n \\nI stand by my criticisms of the paper, however. I strongly feel that this framing of translation and classification as QA harms QA research, and you have a very prominent, public voice advocating for it. You say that my claim is \\\"baseless\\\" and you \\\"can find any number of people to disagree with\\\" it. Citation please (or better yet, just give the arguments themselves instead of appealing to a nameless authoritative crowd). I provided evidence in my second point - treating everything as QA makes performance on most tasks drop, except in cases where the task was designed to be QA and makes sense as QA.\"}",
"{\"title\": \"Reply deleted\", \"comment\": \"I received an email with a response; I'm assuming the authors posted the response and then deleted it, so that it only showed up to reviewers. At the risk of escalating things further, I'm posting the reply here so I can respond to it.\", \"response_title\": \"Red Flag\", \"response_comment\": \"I'll respond to this review's points 2) and 3) eventually in a way that is visible to everyone without discussing this particular aspect of the review, but I have to say that the title and complaint 1) come off as condescending and, frankly, just plain mean.\\n\\nAs a fellow ICLR reviewer, I can't imagine titling a review as \\\"Misguided and Overcrowded\\\". How about \\\"Concerns with QA as a general framework; too much material in appendices\\\"? That took me about a second to rephrase in a way that is more informative and avoids conveying an intention to humiliate and demean.\\n\\nSimilarly, there are plenty of ways to politely raise concerns about multi-task learning and framing multi-task learning as question answering, but the reviewer chooses an alternative approach. Take for example this excerpt:\\n\\n\\\"this paper does more harm than good, because it perpetuates a misguided view of question answering... Question answering is not a unified phenomenon.... There is no such thing as general question answering... All of these questions require very different systems to answer, and trying to pretend they are the same doesn't help anyone solve any problems.\\\" \\n\\nThe above starts out by repeating the same baseless claim that I can find any number of people to disagree with. The way that paragraph ends makes it read as if the whole thing is really an ad hominem attack in disguise. In my opinion as a fellow reviewer, I do not think we should be entering into ideological arguments. Rather, the reviewer should be basing their claims in the empirical results of the paper and prior published literature. This reviewer is not doing that; they are just stating their opinion despite the fact that the idea of \\\"general QA\\\" has been used as an idea in this paper to get SOTA on WikiSQL and make significant progress on two crucial multi-task learning problems (see response to R3).\\n\\nI'm quite shocked that anyone that considers themselves part of a professional community would talk to someone else in that community so rudely. I'm not writing this so much as an author because the review eventually does raise some good concerns. I acknowledge that the paper has issues with the amount of information in the appendices. But -- as a reviewer I find myself asking, \\\"Why did they have to have all the condescending meanness before getting to helpful, critical feedback? How does all that pontificating help the authors improve their research?\\\" It is clear to me that it did not need to be there because such pontificating does not help. For this reason, I think this kind of review should be discouraged by ACs and Higher.\"}",
"{\"title\": \"Related work\", \"comment\": \"Thanks for taking the time to suggest how we could prioritize all of this information more effectively.\\n\\nYou're right that even though we cite the work you mention (Collobert and Weston 2008) along with the follow up (Collobert et al. 2011) in our original submission, we only do so in the related works, which are currently placed in the appendices. \\n\\nI assure you that we meant no disrespect to these related works by placing them in an appendix. Our thinking at the time was that we could only do justice to the significant literature in both multi-task learning and single-task learning for all these tasks by moving such discussions to sections that had no page limit. \\n\\nWe have a lot of information in the appendices that we view as just as important as the information in the main body. We just didn't have the same interpretation of appendices (as lesser material) going into this submission. We simply ordered on what we thought would need to be read first in order to understand the benchmark and the progress so far. For example, the details on anti-curriculum pre-training are actually quite important to us as a contribution, but they didn't seem as essential as understanding the nature of all the tasks. Our motivations for the tasks, the related works, and the task-specific related works are all important. The fact that they are in appendices is only because of the total amount of information in the submission.\\n\\nThat being said, feedback from multiple sources has indicated that at least some of these related works need to be in the main body, and we will reorganize the paper accordingly.\"}",
"{\"title\": \"50k most common words across all tasks in decaNLP\", \"comment\": \"Exciting! Glad you're finding decaNLP to be a good resource for further research!\\n\\nIn our experiments, the generative vocabulary in Eq. 11 contains the most frequent 50000 words in the\\ncombined training sets for all tasks in decaNLP. A lot of these training details are way down in Appendix D on Preprocessing and Training Details. They aren't necessarily optimal if you have a bigger memory budget than we did or have a more clever motivation for how these kinds of decsision should be based on individual tasks.\"}",
"{\"title\": \"Response to R1: Thanks for your review, and some clarificaitons\", \"comment\": \"Regarding your point about the gap between single- and multi-task performance, I'll point you to our response to R3 so that you don't have redundant reading.\\n\\nRegarding the transfer learning experiments. The performance gain does not come from the multi-task objective, as single-task models would exhibit similar behavior for their respective task. What we find interesting here is that the multi-task model retains transferability to all of the tasks it has been trained on. In this sense, these experiments verify that the representations of the multi-task model are somehow compressing the transferable utility of ten single task models into a single model (10x smaller!).\\n\\nFor the label replacement on the SST dataset, the empirical results show a minor degradation in performance (~1%, so ~86 vs ~87 according to Table 2 and subsection 4.3). This was a naive replacement mapping all answers that were 'positive' to 'happy' and all answers that were 'negative' to 'angry'. This shows how the model is learning to capitalize on the common output space (all of English in GloVe) to adapt to new labels without any additional training. This is advantageous over models that do not actually generate answer sequences because it allows them to be more robust in intuitive ways.\\n\\nYou're certainly right that the appendix carries a lot of useful information and some of the details about contributions. We had moved the related works to the appendix because that was the only way we found we could sufficiently do justice to the long line of literature in multi-task learning as well as all of the literature for each task, but it does seem we will need to include at least a part of our full related works in the main body. There is quite a bit of material overall, and we thank you for your suggestions about where to cut/condense and how to prioritize information.\\n\\nThank you again for your questions and your feedback about organization.\"}",
"{\"title\": \"Respone to R3: Thank you for your review; more on single- vs. multi-task performance\", \"comment\": \"First of all, thank you for your review. You touch upon a crucial point that does require clarification: the gap between the single- and multi-task performance.\\n\\nAs you mentioned, the multi-task learning literature has taught us at least one thing: related tasks tend to help each other, and unrelated tasks tend to interfere with each other. The latter is an interesting phenomenon, and it is what we see as the primary multi-task learning problem of concern in this paper, and we are proposing decaNLP as a benchmark for measuring progress on this problem.\\n\\nThere are two ways in which unrelated tasks tend to interfere. The first is during the modeling phase where some tasks prevent us from using priors (like span prediction for QA or a German-only output vocabulary) that would be useful for some tasks. The second is during the training phase where some tasks tend to interfere with representation learning.\\n\\nThese two kinds of interference lead to two kinds of gaps that we measure with this benchmark. The first is the gap between the current best decaNLP model (in the single- and multi-task settings) and a combination of state-of-the-art models for each task. The second is the gap between the best decaNLP model in the multi-task setting and a combination of ten of those best decaNLP models each trained for a single task. \\n\\nThe concrete contributions of this paper are 1) the preparation of benchmark along with reasonable sequence-to-sequence baselines, 2) progress on the first kind of gap by switching from seq2seq to multi-sequence-to-sequence with MQAN (by transforming problems into QA triplets), and 3) progress on the second kind of gap by demonstrating the superiority of anti-curriculum learning (or pre-training on harder tasks) over the baseline fully joint training. 3) actually ties multi-task learning back to transfer learning as an effective means of representation learning.\\n\\nBut yes, we have not yet entirely closed these gaps; as you mentioned, that is a key part of the challenge to the community. We have chosen to introduce this challenge now because we believe solutions to this problems are within reach in the near future if the community focuses on them.\\n\\nAnd yes, though this approach will likely be successful whenever tasks are related (just based on what we know from the rest of the multi-task learning literature), it is sometimes not yet the best way to optimize for single-task performance. Keep in mind though that it did lead to new state-of-the-art results on WikiSQL despite no direct modeling or tuning for that task. \\n\\nThanks again for your time.\"}",
"{\"title\": \"A good example to treat different NLP problems as Q&A and trained together. Results for some problems are worse than their state-of-the-art.\", \"review\": \"The paper formulates several different NLP problems as Q&A problem and proposed a general deep learning architecture. All these tasks are trained together.\\n\\nIf the goal is to achieve general AI, the paper gives a good starting point. One technical novelty is the deep learning architecture for this general Q&A problem including the multi-pointer-generator. The paper presents an example of how to do a multi-task learning for 10 different tasks. It raises a very challenging problem or in some way release a new dataset.\\n\\nIf our goal is to optimize a single task, the usefulness of the method proposed by the paper is questionable. \\nAs we know, multi-task learning works well if some important knowledge shared by different tasks can be learned and leveraged. From table 2, we see for many problems, the results of the single task training are better than the multi-task training, meaning that other tasks can't really help at least under this framework. This makes me doubt if this multi-task learning is useful if our goal is to optimize the performance of a single task. This general model also sacrifices some important prior knowledge of an individual task. For example, for the Squad, the prior that the answer is a continuous span. Ideally, the prior knowledge should be leveraged.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"New framework has a lot of potential, but the experiments, motivations, and related work are missing details\", \"review\": \"Update: I've updated my score based on the clarifications from the authors to some of my questions/concerns about the experimental set-up and multi-task/single-task differences.\", \"original_review\": \"This paper provides a new framework for multitask learning in nlp by taking advantage of the similarities in 10 common NLP tasks. The modeling is building on pre-existing qa models but has some original aspects that were augmented to accommodate the various tasks. The decaNLP framework could be a useful benchmark for other nlp researchers. \\n\\nExperiments indicate that the multi-task set-up does worse on average than the single-task set-up. I wish there was more analysis on why multi-task setups are helpful in some tasks and not others. With a bit more fine-grained analysis, the experiments and framework in this paper could be very beneficial towards other researchers who want to experiment with multi-task learning or who want to use the decaNLP framework as a benchmark.\", \"i_also_found_the_adaptation_to_new_tasks_and_zero_shot_experiments_very_interesting_but_the_set_up_was_not_described_very_concretely\": \"-in the transfer learning section, I hope the writers will elaborate on whether the performance gain is coming from the model being pretrained on a multi-task objective or if there would still be performance gain by pretraining a model on only one of those tasks. For example, would a model pre-trained solely on IWSLT see the same performance gain when transferred to English->Czech as in Figure 4? Or is it actually the multi-task training that is causing the improvement in transfer learning? \\n -Can you please add more detail about the setup for replacing +/- with happy/angry or supportive/unsupportive? What were the (empirical) results of that experiment?\\n\\nI think the paper doesn\\u2019t quite stand on its own without the appendix, which is a major weakness in terms of clarity. The related work, for example, should really be included in the main body of the paper. I also recommend that more of the original insights (such as the experimentation with curriculum learning) should be included in the body of the text to count towards original contributions. \\n\\nAs a suggestion, the authors may be able to condense the discussion of the 10 tasks in order to make more room in the main text for a related work section plus more of their motivations and experimental results. If necessary, the main paper *can* exceed 8 pages and still fit ICLR guidelines.\", \"very_minor_detail\": \"I noticed some inconsistency in the bibliography regarding full names vs. first initials only.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Misguided and overcrowded\", \"review\": \"I appreciate the work that went into creating this paper, but I'm afraid I see little justification for accepting it. I have three major complaints with this paper:\\n \\n1. I think the framing of decaNLP presented in this paper does more harm than good, because it perpetuates a misguided view of question answering.\\n \\nQuestion answering is not a unified phenomenon. There is no such thing as \\\"general question answering\\\", not even for humans. Consider \\\"What is 2 + 3?\\\", \\\"What's the terminal velocity of a rain drop?\\\", and \\\"What is the meaning of life?\\\" All of these questions require very different systems to answer, and trying to pretend they are the same doesn't help anyone solve any problems.\\n \\nQuestion answering is a _format_ for studying particular phenomena. Sometimes it is useful to pose a task as QA, and sometimes it is not. QA is not a useful format for studying problems when you only have a single question (like \\\"what is the sentiment?\\\" or \\\"what is the translation?\\\"), and there is no hope of transfer from a related task. Posing translation or classification as QA serves no useful purpose and gives people the wrong impression about question answering as a format for studying problems.\\n\\nWe have plenty of work that studies multiple datasets at a time (including in the context of semi-supervised / transfer learning), without doing this misguided framing of all of them as QA (see, e.g., the ELMo and BERT papers, which evaluated on many separate tasks). I don't see any compelling justification for setting things up this way.\\n \\n2. One of the main claims of this paper is transfer from one task to another by posing them all as question answering. There is nothing new in the transfer results that were presented here, however. For QA-SRL / QA-ZRE, transfer from SQuAD / other QA tasks has already been shown by Luheng He (http://aclweb.org/anthology/N18-2089) and Omer Levy (that was the whole point of the QA-ZRE paper), so this is merely reproducing that result (without mentioning that they did it first). For all other tasks, performance drops when you try to train all tasks together, sometimes significantly (as in translation, unsurprisingly). For the Czech task, fine tuning a pre-trained model has already been shown to help. Transfer from MNLI to SNLI is known already and not surprising - one of the main points of MNLI was domain transfer, so obviously this has been studied before. The claims about transfer to new classification tasks are misleading, as you really have the _same_ classification task, you've just arbitrarily changed how you're encoding the class label. It _might_ be the case that you still get transfer if you actually switch to a related classification task, but you haven't examined that case.\\n \\n3. This paper tries to put three separate ideas into a single conference paper, and all three ideas suffer as a result, because there is not enough space to do any of them justice. Giving 15 pages of appendix for an 8 page paper, where some of the main content of the paper is pushed to the appendix, is egregious. Putting your work in the context of related work is not something that should be pushed into an appendix, and we should not encourage this behavior.\\n \\nThe three ideas here seem to me to be (1) decaNLP, (2) the model architecture of MQAN, (3) transfer results. Any of these three could have been a single conference paper, had it been done well. 
As it stands, decaNLP isn't described or motivated well enough, and there isn't any space left in the paper to address my severe criticisms of it in my first point. Perhaps if you had dedicated the paper to decaNLP, you could have given arguments that the framing is worthwhile, and described the tasks and their setup as QA sufficiently (as it is, I don't see any description anywhere of how the context is constructed for WikiSQL; did I miss it somewhere?). For MQAN, there's more than a page of the core new architecture that's pushed into the appendix. And for the transfer results, there is very little comparison to other transfer methods (e.g., ELMo, CoVe), or any deep analysis of what's going on - as I mentioned above, basically all of the results presented are just confirming what has already been done elsewhere.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"comment\": \"The related work section should not be buried in Appendix B on page 17.\\n\\nFrom the text of the main paper, a reader would have no indication that multi-task NLP has been around for 10+ years, and that the main novelty here is the particular selection of tasks and aggregating performance across those tasks as a benchmark. The authors should be more clear and honest about what their contribution is.\\n\\nAs an example, I'll point to [1], a well known paper (2.7k cites) from ICML 2008, titled \\\"A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning\\\". \\n\\n[1] https://ronan.collobert.com/pub/matos/2008_nlp_icml.pdf\", \"title\": \"Please respect prior work\"}",
"{\"comment\": \"Thanks for the paper and the collections of datasets!\\n\\nI'm using decaNLP in my research and would like to ask a clarification question. Section 3 mentions \\\"We gives it access to v additional vocabulary tokens\\\". What are the v additional tokens and how were they chosen?\", \"title\": \"Clarifications\"}"
]
} |
|
HyxGB2AcY7 | Contingency-Aware Exploration in Reinforcement Learning | [
"Jongwook Choi",
"Yijie Guo",
"Marcin Moczulski",
"Junhyuk Oh",
"Neal Wu",
"Mohammad Norouzi",
"Honglak Lee"
] | This paper investigates whether learning contingency-awareness and controllable aspects of an environment can lead to better exploration in reinforcement learning. To investigate this question, we consider an instantiation of this hypothesis evaluated on the Arcade Learning Environment (ALE). In this study, we develop an attentive dynamics model (ADM) that discovers controllable elements of the observations, which are often associated with the location of the character in Atari games. The ADM is trained in a self-supervised fashion to predict the actions taken by the agent. The learned contingency information is used as a part of the state representation for exploration purposes. We demonstrate that combining an actor-critic algorithm with count-based exploration using our representation achieves impressive results on a set of Atari games that are notoriously challenging due to sparse rewards. For example, we report a state-of-the-art score of >11,000 points on Montezuma's Revenge without using expert demonstrations, explicit high-level information (e.g., RAM states), or supervisory data. Our experiments confirm that contingency-awareness is indeed an extremely powerful concept for tackling exploration problems in reinforcement learning and opens up interesting research questions for further investigations. | [
"Reinforcement Learning",
"Exploration",
"Contingency-Awareness"
] | https://openreview.net/pdf?id=HyxGB2AcY7 | https://openreview.net/forum?id=HyxGB2AcY7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"ryeCqv8egV",
"SygtznNglE",
"HkgYEAahJN",
"H1eXJyTACX",
"rJlI1eLA0Q",
"BkxNyUCqR7",
"rJecqfd9Cm",
"Hkxsdfd5Cm",
"HyelEz_qC7",
"SyeF5cv5AQ",
"BkeVt5D9Am",
"SygATEK7pQ",
"HJljoJPp27",
"H1ejPD2tnm"
],
"note_type": [
"meta_review",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544738710241,
1544731665228,
1544506928611,
1543585498722,
1543557085962,
1543329243548,
1543303825904,
1543303795420,
1543303719998,
1543301777038,
1543301756059,
1541801157576,
1541398434779,
1541158754705
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1520/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1520/Authors"
],
[
"(anonymous)"
],
[
"ICLR.cc/2019/Conference/Paper1520/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1520/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1520/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1520/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1520/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1520/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper addresses the challenging and important problem of exploration in sparse-rewards settings. The authors propose a novel use of contingency awareness, i.e., the agent's understanding of the environment features that are under its direct control, in combination with a count-based approach to exploration. The model is trained using an inverse dynamics model and attention mechanism and is shown to be able to identify the controllable character. The resulting exploration approach achieves strong empirical results compared to alternative count-based exploration techniques. The reviewers note that the novel approach has potential for opening up potential fruitful directions for follow-up research. The obtained strong empirical results are another strong indication of the value of the proposed idea.\\n\\n\\nThe reviewers mention several potential weaknesses. First, while the proposed idea is general, the specific implementation seems targetted specifically towards Atari games. While Atari is a popular benchmark domain, this raises questions as to whether insights can be more generally applied. Second, several questions were raised regarding the motivation for some of the presented modeling choices (e.g., loss terms) as well as their impact on the empirical results. Ablation studies were recommended as a step to resolving these questions Reviewer 3 questioned whether the learned state representation could be directly used as an additional input to the agent, and if it would improve performance. Finally, several related works were suggested that should be included in the discussion of related work.\\n\\nThe authors carefully addressed the issues raised by the reviewers, running additional comparisons and adding to the original empirical insights. Several issues of clarity were resolved in the paper and in the discussion. Reviewer 3 engaged with the authors and confirmed that they are satisfied with the resulting submission. The AC judges that the suggestions of reviewer 1 have been addressed to a satisfactory level. A remaining issue regarding results reporting was raised anonymously towards the end of the review period, and the AC encourages the authors to address this issue in their camera ready version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Novel approach to exploration with strong empirical validation\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you very much for your comment. You are correct that the reported performance of DDQN+ is achieved at 25M steps rather than at 50M steps. We will update the table in the final version of the paper. To the best of our knowledge, DDQN+ code is not publicly available and in our experience it was not trivial to replicate the results. On Montezuma\\u2019s revenge, very often many methods can reach the score of 2500 quite easily but afterwards they struggle to achieve higher scores (so running the algorithm longer usually doesn\\u2019t guarantee further improvement in scores). If the authors can share their code or report their results with more steps on Montezuma\\u2019s revenge, we are happy to include it in the table.\\n\\nConsidering that all of the baselines in Table 2 use frameskip of 4, reporting the number of frames (instead of number of steps) does not make a difference in the comparison. However, we will consider reporting the number of frames in the final version.\"}",
"{\"comment\": \"Just a heads-up that you've overstated the training time for Bellemare et al.'s DDQN+ agent by a factor of 2. If you check their Figure 2 and the surrounding text, you'll see that it was only trained for 100m frames, or 25m \\\"environment timesteps\\\" in your terminology. In Table 2, you've stated that it was trained for 50m environment timesteps. With this in mind, if you compare the first quarter of your Figure 2 to theirs, it seems pretty dubious whether your agent is actually ahead.\", \"side_note\": \"I think it would be better if you quoted training times with the multiplier of 4 throughout, as this is by-and-large the more common time scale used in the literature.\", \"title\": \"Training time for DDQN+ agent is overstated by a factor of 2\"}",
"{\"title\": \"Re: Re: Response to Reviewer 3\", \"comment\": \"Thank you for the clarifications\"}",
"{\"title\": \"Re: Re: Response to Reviewer 3\", \"comment\": \"Dear Reviewer 3,\\n\\nThank you very much for quickly and carefully going through our response and the updated draft.\\n\\nRegarding the description of \\\\tau, we will update it in the final version of paper. The caption of Table 4 shall now read: \\u201cFor the four games where there is no change of high-level visual context (FREEWAY, FROSTBITE, QBERT and SEAQUEST), we do not include c in the state representation \\u03c8(s), hence there is no \\\\tau.\\u201d\\n\\nRegarding Table 5, we note that A2C+CoEX(c) slightly differs from the vanilla A2C even on those games, as it has a decaying exploration bonus at each time step, whereas the vanilla A2C has no bonus reward at all. It can affect the agent\\u2019s behavior; for instance, a positive reward at every time step is known to incentivize the agent to survive longer.\", \"regarding_your_questions_about_the_new_ablation_study\": \"1) This is our small mistake (sorry, Seaquest was added later), thank you for pointing this out. We have fixed this, which will appear in the final version.\\n\\n2) The mean score of 94 and 77 happens as a spike in the early stage of training, but the agent failed to retain the score, yielding almost zero mean reward afterwards (as shown in the plot). We will fix wordings accordingly.\\n\\nThe goal in the game of Venture is basically to navigate the world visiting many different rooms and destroy the enemy, and there is not much benefit going back to a previously explored room (more precisely, after clearing the room: i.e., killing enemies and picking up the score-items). Therefore, exploration with cumulative reward as extra state may not be beneficial. \\n\\n* More detailed answer: we would like to refer to our previous response on why the cumulative rewards may be useful as extra state information as it can potentially serve as important contextual change (e.g., picking up a key in Montezuma\\u2019s Revenge) that may incentivize the agent to revisit previously explored states (e.g., going to the door even if the corresponding state was previously explored without the key). However, in Venture, such revisiting behavior based on the change of cumulative rewards does not yield benefit due to the nature of the game.\\n\\n3) We have added the number of seeds in Figure 9 and Table 5, which will appear in the final version. Thanks for the suggestion!\"}",
"{\"title\": \"Re: Response to Reviewer 3\", \"comment\": \"Sorry, I was not aware of the rebuttal period extension. Thank you for the detailed response and updated revision, I will update my review rating accordingly.\\n\\nRegarding the (lack of) generality of the proposed method, I do agree that at high level similar ideas could probably be used in different settings, however this remains hypothetical until actually verified empirically (that's what I meant by \\\"another example of application of these ideas to a different domain could have strengthened the submission\\\").\\n\\nAs far as the last point is concerned (tau), after quickly browsing through the changes in the new revision I didn't see mentioned in the text that some games were not using the clustering scheme. Please make sure it's clear (it should probably be at least in the caption of Table 4). If I understand correctly this also means that for these 4 games, methods A2C and A2C+CoEX(c) in Table 5 are actually the same and the differences only come from re-running the experiments (in that case maybe using the same numbers, e.g. those from A2C, could avoid some confusion).\", \"about_the_new_content_in_the_revised_version\": \"1) On p.16 (last paragraph), \\\"on these two games\\\" should be \\\"on these three games\\\". You also claim that \\\"full ADM worked best\\\", but that is not the case on Seaquest.\\n\\n2) On p.17, you claim that \\\"the variants without contingent regions (...) [gave] almost no improvement over the A2C baseline\\\", mentioning \\\"Montezuma's Revenge and Venture\\\" as examples: however in Venture both variants (scores 94 & 77) improve on A2C (score 0). It's also interesting to see how removing the reward from psi in Venture helps reach a much better score, do you have any idea why? (maybe it somehow has to do with how scoring works in this game?)\\n\\n3) Please mention the number of seeds in Table 5's caption.\"}",
"{\"title\": \"Response to Reviewer 1 (Part 1/2)\", \"comment\": \"Dear Reviewer 1, \\n\\nThank you for the constructive and positive feedback. Please have a look at the revised draft for ablation studies and other improvements. We are happy to provide additional information upon request.\\n\\n\\n[Extra Loss Terms of ADM]\\n\\n>> Why not include an entropy regularization loss for policy?\\nWe agree on the importance of entropy regularization for policy optimization. In fact, in our submission, the standard entropy regularization term H(pi(a|s)) was already included in policy training (we used the default regularization weight 0.01) --- please see Appendix A for details. We have revised the description to make it clearer.\\n\\n>> How is the second issue (= distribution shift & non i.i.d. training data) mitigated?\\nOur goal is to make the ADM model generalize to unseen trajectories. However, if the model is trained only on the trajectories obtained by the current policy, there is a significant risk of overfitting. To prevent this we incorporate different forms of regularization, including attention entropy regularization and policy entropy regularization. We empirically find that this helps the model generalize better. In Appendix E we have included a concrete example on Freeway illustrating the positive impact of additional regularization terms in preventing overfitting.\\n\\nHowever, to address this issue more directly, we believe one can potentially incorporate a replay buffer of previous trajectories to optimize the ADM model on off-policy data, or one can train the ADM based on random exploration. We leave this to future work. That being said, we did not observe serious issues with on-line training of the ADM model in our experiments. \\n\\n>> Ablation Study of ADM.\\nWe first note that the proposed ADM loss function worked very well on the 8 Atari games considered. That said, there might be other combinations of training objectives that can also work well. Upon your suggestion we have included ablation experiments in Appendix E to study the effect of ADM loss terms. Additional loss terms help to attain better performance and stability of ADM. In environments where the consequence of actions is easily predictable (e.g., Seaquest) the additional regularization may not be necessary. In more difficult games the additional loss terms improve the stability and the generalization of ADM.\\n\\n[Cell Loss Confusion]\\nThere was a typo on the cell-wise cross-entropy loss. It was fixed to p(\\\\hat{a} | e) in the revision. Thank you for pointing it out.\\n\\n[State Representation]\\nWe have added a small comment on what \\\\psi(s) consists of. We assumed that the construction of \\\\psi(s) can be thought of as an implementation detail in a more general perspective, to simply keep Section 3.2 as concise as possible.\\n\\n[Plots]\\nThe x-axis denotes the environment step (100M steps = 400M frames due to the frameskip of 4), and the y-axis denotes the mean reward over recent 40 episodes for each individual run (shown in light curves). The learning curve (shown in dark) is obtained by averaging over 3 random seeds.\\n\\n(To be continued in part 2)\"}",
"{\"title\": \"Response to Reviewer 1 (Part 2/2)\", \"comment\": \"(Continued from part 1)\\n\\n\\n[Results]\\nWe conjecture that the performance drop on Montezuma\\u2019s Revenge is mainly due to the instability of the A2C algorithm when it encounters large nonstationary exploration bonus rewards. However in our preliminary experiments, when a stronger and more stable base RL algorithm is used (e.g., PPO), we observe very stable results without such a performance drop. More specifically, using PPO+CoEX on Montezuma\\u2019s Revenge we achieve a score >11,000 averaged over 3 runs at 250M environment steps. The performance seems to keep improving as the number of steps increases, whereas the vanilla PPO achieves a score of <100. This suggests that such a high performance is not due to the use of PPO alone. We report the trend (score vs #steps) below:\\n\\nTest score, # of environmental steps\\n-------------------------------------------\\n5,066 at 100M steps (= 0.4B frames)\\n8,015 at 150M steps (= 0.6B frames)\\n10,108 at 200M steps (= 0.8B frames)\\n11,108 at 250M steps (= 1B frames)\\n(Plot) The corresponding learning curve is available at the supplementary web page: http://goo.gl/sNM3ir \\n\\nTo the best of our knowledge this result is above (or equal to) the state-of-the-art performance in Montezuma\\u2019s Revenge without using any explicit high-level information such as RAM states (as in SmartHash [Tang et al., NIPS 2018] or any expert demonstrations (e.g. DQfD [Hester et al., 2017]), when compared with work published to date. We will incorporate more comprehensive experiments with PPO and revise the paper for the final version.\\n\\nIn PrivateEye, we observe the instability of performance mainly due to the trick of clipping reward within the range [-1, 1], which is a standard used in DQN and A2C to deal with different scales of environment rewards. Specifically, PrivateEye has a negative raw reward (e.g. -1 at each time step) but the scale of positive and negative rewards are different (i.e., the scale of positive rewards is often much bigger than that of negative rewards). As a result, the agent actually increases the cumulative sum of \\u201cclipped\\u201d extrinsic rewards (which increases from around -500 to 0, which correspond to raw reward of approximately 3000 and 0 respectively), but the raw episode return drops as shown in Figure 2. Similar behaviors are also observed in (Bellemare et al. 2016).\\n\\n\\n[Appendix (Algorithm 1&2)]\\nWe have extended the description of loss functions and fixed notation issues as suggested by the reviewer. Regarding the question about the Algorithm 2 (clustering) it also makes sense to assign a frame to the closest cluster [Kulis & Jordan, 2012]. However, based on our experience we observe that there is no significant difference in terms of the agent\\u2019s end performance when we use the closest cluster. This is likely because we have chosen \\\\tau so that such a cluster is mostly unique and there would be only very little difference in room assignment. We will update the paper with the results with the algorithm assigning frames to the closest cluster in the final version.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Dear Reviewer 2,\\nThank you very much for your feedback. We are glad to hear that you find our work insightful and interesting. We have updated the draft to correct small errors and make the exposition of the paper clearer. Please let us know if you have additional comments. We are happy to provide additional information upon request.\"}",
"{\"title\": \"Response to Reviewer 3 (Part 2/2)\", \"comment\": \"[Ablative Studies]\\nWe conducted an ablation study on the state representation by exploring variants of A2C+CoEX without the predicted location information. We have added it in the Appendix F. To briefly summarize the result: as expected the variants without contingency-region information (especially the (c,R) baseline) perform much worse than the one with contingent region information. It is common for these variants to achieve almost no reward on Montezuma\\u2019s Revenge and Venture, where the reward is extremely sparse. This demonstrates that the contingent region information indeed plays an important role in count-based exploration.\\n\\nMethod | Freeway | Frostbite | Hero | Montezuma | PrivateEye | Qbert | Seaquest | Venture\\nA2C | 7.2 | 1099 | 34352 | 12.5 | 574 | 19620 | 2401 | 0\\nA2C+CoEX (c) | 10.7 | 1313 | 34269 | 14.7 | 2692 | 20942 | 1810 | 94\\nA2C+CoEX (c; R) | 34.0 | 941 | 34046 | 9.2 | 5458 | 21587 | 2056 | 77\\nA2C+CoEX (x; y; c) | 33.7 | 5066 | 36934 | 6558 | 5377 | 21130 | 1978 | 1429\\nA2C+CoEX (x; y; c; R) | 34.0 | 4260 | 36827 | 6635 | 5316 |23962 | 5169 | 204\", \"table_5\": \"Summary of results for the ablation study: the maximum mean scores (averaged over 40 recent episodes) achieved over 100M environment steps of training.\\n\\n\\n\\n[Providing the policy with learned representation]\\nWe first note that one can obtain a better function approximation by using this representation as an additional input, which is already claimed in (Bellemare et al., 2012). One easy way of providing learned contingency region information is to use it as an additional input to the policy and the value network. In our preliminary experiments this improved the performance only by a small margin, therefore we did not include those results for the clarity of the paper. We believe that taking advantage of contingent regions for policy learning could be more useful in a hierarchical RL setting or in combination with planning, which we plan to explore as a future work.\\n\\n\\n[Long-term prediction of ADM]\\nWe agree that one could improve ADM by taking multi-step transitions into consideration as suggested. We can consider extending an inverse-dynamics model to provide a window of state sequences that is a few steps wider and predict the action taken in the middle of the transition (e.g. given x_{t-3:t+2} predict a_t). This might be helpful on more complex environments, but it turns out that 1-step prediction works relatively well for the environments we experimented with. We plan to investigate the extension to multi-step prediction in a future work when dealing with more challenging environments.\\n\\n[Writing & Other Remarks]\\nThanks for pointing out several typos and other suggestions on writing. We have fixed all of them as well as missing references, related work, etc. Regarding Table 2, there were unnecessary star and cross symbols used for denoting different steps which are now removed.\\n\\n[Choice of \\\\tau in clustering]\\nThe games with no tau in Table 4 do not have c in the state representation because there is no change of high-level visual context (objects, layouts, etc.) in these games. We did not use RAM to tune the hyperparameter but chose a reasonable value of \\\\tau in the range [0.5, 0.8] based on visual inspection, such that it would give a sensible clustering result of observation samples collected across different visual contexts. 
One can tune this hyperparameter more extensively if given enough time/computational resources to find the best \\\\tau to reach the highest score in the game; however tuning of \\\\tau was not our primary concern.\"}",
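For concreteness, a minimal sketch of the count-based bonus computed on the representation psi(s) = (x, y, c, R) that the ablation above varies; the 1/sqrt(N) form and the scale beta are standard choices assumed here rather than quoted from the paper.

```python
from collections import Counter

counts = Counter()

def exploration_bonus(x, y, c, R, beta=1.0):
    key = (x, y, c, R)                 # discretized location, context, cum. reward
    counts[key] += 1
    return beta / counts[key] ** 0.5   # bonus proportional to 1/sqrt(N(psi(s)))
```

Dropping components from the key (e.g., using only (c, R)) reproduces the ablated variants in Table 5.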
"{\"title\": \"Response to Reviewer 3 (Part 1/2)\", \"comment\": \"Dear Reviewer 3,\\n\\nWe appreciate your positive, constructive, and detailed feedback. Our impression was that the rebuttal deadline is extended until November 26 per emails sent from the PCs. We apologize for not submitting the response earlier, as we have been using the extra time from the three-day extension of the revision period to prepare the best version of our response. Below we answer questions and address the concerns mentioned in the review. Please take a look at the revised draft for minor corrections and more related work. Please let us know if this addresses your points; we are happy to provide additional responses/information upon request.\\n\\n[Specificity of domain]\\nOur experiments focus on 2D Atari games as they are popular in the RL community; however, the proposed high-level ideas are more general. We also briefly describe how our method can be extended to address your points.\\n\\n > Regarding applicability to different (e.g., non-Atari) environments: The idea of contingency awareness is applicable to continuous control problems as well, e.g., environments with continuous actions and image observations (e.g. rendering of 3D physics-based fully-observable environments from camera, such as AntMaze [Frans et al., ICLR 2018 / Nachum et al., NeurIPS 2018]). In such domains we can still discover controllable aspects out of observations via an attention mechanism by exploiting the correlation between actions and pixels, and then apply a similar exploration technique for the agent.\\n\\n > Regarding the assumption that a single region of the screen is being controlled by the agent: To deal with multiple controllable entities in the environment, one can extend our ADM with multiple attention heads, which could identify and track multiple controllable entities. In this case we could enrich the state representation for exploration to include information about multiple objects.\\n\\n > Regarding the clustering assumption: we used clustering to identify the context information (e.g., \\u201crooms\\u201d), but one can alternatively use different methods to obtain such information, e.g., autoencoder-based distributed representation, and concatenate with the contingent-region information for improving exploration in sparse-reward problems.\\n\\n > Regarding using the total score as a proxy to important state information: In environments with sparse rewards it may be natural to assume that collecting a non-zero reward may indicate an important change of context or environmental information (e.g., obtaining a key in Montezuma\\u2019s Revenge). The addition of total score as extra state information improved the performance for Montezuma\\u2019s revenge. However, for other games, our method was still able to achieve high performance without such total score information. (Please see our ablative studies for details.) We will deemphasize the importance of this component in the final version. Thank you for your insightful comments.\"}",
"{\"title\": \"Novel idea for exploration in RL, good empirical results, can benefit from more clarity and evidence\", \"review\": \"Summary:\\n\\nThe paper proposes the novel idea of using contingency awareness (i.e. the agent\\u2019s understanding of the environment dynamics, its perception that some aspects of the environment are under its control and ability to locate itself within the state space) to aid exploration in sparse-reward reinforcement learning tasks. They obtain great results on hard exploration Atari games and a new SOTA on Montezuma\\u2019s Revenge (compared to methods which are also not using any external data). They use an inverse dynamics model with attention, (trained with self-supervision) to predict the agent\\u2019s actions between consecutive states. This allows them to approximate the agent\\u2019s position in 2D environments, which is then used as part of the state representation to encourage efficient exploration. One of the main strengths of this method is the fact that it achieves good performance on challenging tasks without the expert demonstrations or environment simulators. I also liked the discussion part of the paper and the fact that it emphasizes some of the limitations and avenues for future work.\", \"pros\": \"Good empirical results on challenging Atari tasks (including SOTA on Montezuma\\u2019s Revenge without extra supervision or information)\", \"tackles_a_long_standing_problem_in_rl\": \"efficient exploration in sparse reward environments\\nNovel idea, which opens up new research directions\\nComparison experiments with competitive baselines\", \"cons\": \"The choice of extra loss functions is not very well motivated \\nSome parts of the paper are not very clear\", \"main_comments\": \"\", \"motivation_of_extra_loss_terms\": \"It is not very clear how each of the losses (eq 5) will help mitigate all the issues mentioned in the paragraph above. I suggest providing more detailed explanations to motivate these choices. In particular, why are you not including an entropy regularization loss for the policy to mitigate the third problem identified? This has been previously shown to aid exploration. I also did not see how the second issue mentioned is mitigated by any of the proposed extra loss terms.\", \"request_for_ablation_studies\": \"It would be useful to gain a better understanding of how important is each of the losses used in equation 5, so I suggest doing some ablation studies.\", \"cell_loss_confusion\": \"Last paragraph of section 3.1: is there a typo in the formulation of the per cell cross-entropy losses? Is alpha supposed to be the action a? Otherwise, this part is confusing, so please explain the reasoning and what supervision signal you used.\", \"state_representation\": \"Section 3.2 can be improved by adding more details. For example, it is not explained at all what the function psi(s) contains and how it makes use of the estimated agent location. I would suggest moving some of the details in section 4.2 (such as the context representation and what psi contains) earlier in the text (perhaps in section 3.2).\", \"minor_comments\": \"\", \"plots\": \"It would be helpful to give more details about the plots. I suggest labeling the axes. Is the x-axis number of frames, steps or episodes? How many runs are used to compute the mean? What do the light and dark colors represent? What smoothing process did you use to obtain these curves if any? Figure 2, why is there such a large drop in performance on Montezuma\\u2019s Revenge after 80M? 
Something similar seems to happen in PrivateEye, but much earlier in training and the agent never recovers.\", \"tables\": \"I would suggest reporting results in the tables for more than 3 seeds given that these algorithms tend to have rather high variance. Or at least, provide the values for the variance.\\nAppendix A, Algorithm 1: I believe this can be written more clearly. In particular, it would be good to specify the loss functions that you are optimizing. There seems to be some mismatch between the notation of the losses in the algorithm and the paper. It would also help to define alpha, c, psi etc.\", \"footnote_on_page_4\": \"you may consider using a different variable instead of c_t to avoid confusion with c (used to refer to the context representation).\\nAppendix D, Algorithm 2: is there a reason for which you aren\\u2019t assigning the embeddings to the closest cluster instead of any cluster that is within some range?\", \"references\": \"\", \"the_related_work_section_on_exploration_and_intrinsic_motivation_could_be_improved_by_adding_more_references_such_as\": \"Gregor et al. 2016, Variational Intrinsic Control\\nAchiam et al. 2018, Variational Option Discovery Algorithms\\nFu et al. 2017, EX2: Exploration with Exemplar Models for Deep Reinforcement Learning\\nSukhbaatar et al. 2018, Intrinsic Motivation and Automatic Curricula via Asymmetric Self-Play\\nEysenbach et al. 2018, Diversity is all you need: learning skills without a reward function\", \"final_decision\": \"This paper presents a novel way for efficiently exploring environments with sparse rewards. \\nHowever, the authors use additional loss terms (to obtain these results) that are not very well motivated. I believe the paper can be improved by including some ablation experiments and making some parts of the paper more clear, so I would like to see these additions in next iterations of the paper. \\n\\nGiven the novelty, empirical results, and comparisons with competitive baselines, I am inclined to recommend it for acceptance.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An Important Step Towards Self Awareness for RL Agents\", \"review\": \"This paper introduces contingency-aware exploration by employing attentive dynamics model (ADM). ADM is learned in self supervised manner in an online fashion and only using pure observations as the agents policy is updated. This approach has clear advantages to earlier proposed count based techniques where agent's curiosity is incentivized for exploration. Proposed technique provides an important insight into how to approach such challenging tasks where the rewards are very sparse. Not only it achieves state of the art results with convincing empirical evidence but also authors make a good job of providing details of their specific modelling techniques for training challenges. They make a good job of comparing and contrasting the contingency-awareness by ADM to earlier proposed methods such as intrinsic motivation and self-supervised dynamics model. Overall exposition is clear with well explained results. The proposed idea raises interesting questions for future work.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"An interesting - but somewhat limited - exploration technique for 2D arcade games\", \"review\": \"This paper investigates the problem of extracting a meaningful state representation to help with exploration in RL, when confronted to a sparse reward task. The core idea consists in identifying controllable (learned) features of the state, which in an Atari game for instance typically corresponds to the position of the player-controlled character / vehicle on the screen. Once this position is known (as x, y coordinates on a custom low-resolution grid), one can use existing count-based exploration mechanisms to encourage the agent to visit new positions (NB: in addition to the x, y coordinates, extra information is also used to disambiguate the states for counting purpose, namely the current score and the state\\u2019s cluster index obtained with a basic clustering scheme). To find the position, the algorithm trains one inverse dynamics model per x, y cell on the grid: each model tries to predict the action taken by the agent given two consecutive states, both represented by their feature map (at coordinate x, y) learned by a convolutional network applied to the pixel representation. The outputs of these inverse dynamics models are combined through an attention mechanism to output the final prediction for the action: the intuition is that the attention model will learn to focus on the grid cell with best predictive power (for a given state), which should correspond to where the controllable parts of the state are. Experiments on several Atari games (including Montezuma\\u2019s Revenge) indeed show that this mechanism is able to track the true agent\\u2019s coordinates (obtained from the RAM state) reasonably well. Using these coordinates for count-based exploration (in A2C) also yields significantly better results compared to vanilla A2C, and beats several previously proposed related techniques for exploration in sparse reward settings.\\n\\nThe topic being investigated here (hard-exploration tasks) is definitely very relevant to current RL research, and the proposed technique introduces some novel ideas to address it, notably the usage of an attention model combined with multiple inverse dynamics models so as to identify controllable features in the environment. The approach seems sound to me and is clearly explained. Combined with pretty good results on well known hard Atari games, I am leaning toward recommending acceptance at ICLR.\\n\\nI have a few significant concerns though, the first one being that the end result seems quite tailored to the specific Atari games of interest: trying to apply it to other tasks (or even just Atari games with different characteristics) may require significant changes (ex: the assumption that a single region of the screen is being controlled by the agent, the clustering to identify the various \\u201crooms\\u201d of a game, and using the total score as a proxy to important state information). I do believe that some components are more general though (in particular the main new ideas in the paper), so this is not necessarily a major issue, but another example of application of these ideas to a different domain could have strengthened the submission.\\n\\nIn addition, even if experiments definitely investigate relevant aspects of the algorithm, I wish there had been an ablation study on the three components of the state representation used for counting (coordinates, cluster and reward). 
In particular it would be disappointing if similar results could be obtained with just the cluster and reward... even if I do not expect it to be the case, an empirical validation would have been welcome to be 100% sure.\\n\\nThe good results obtained here from exploration alone also beg the question whether this state representation could be useful to train the agent, by plugging it directly as input to the policy network (which by the way may not be trivial due to the co-training, but you get the idea). I realize that the focus of the paper is on exploration, and this is fine, but it seems to me a bit of a waste to build such a powerful state abstraction mechanism and not give the agent access to it. I was surprised that it was not at least mentioned in the discussion or conclusion. Note by the way that the conclusion says the agent \\u201cbenefits from a compact, informative representation of the world\\u201d, which can be misinterpreted as using it in its policy.\\n\\nRegarding the algorithm itself, one potential limitation is the fact that the inverse dynamics models rely on a single time step to identify the action that was taken. This means that they can only identify controllable state features that change immediately after taking a given action. But if an action has \\u201ccascading\\u201d effects (the immediate state change causing further changes down the road), there may be other important state features that could be controlled (across longer timesteps), but the algorithm will ignore them (also, in a POMDP one may need to wait for more than one timestep to even observe a single change in the state). I suspect that a more generic variant of this idea, better accounting for long term effects of actions, may thus be needed in order to work optimally in more varied settings.\\n\\nFinally, I believe more papers deserve to be cited in the \\u201cRelated Work\\u201d section. In particular, the idea of controlling features of the environment, (even if not specifically for exploration), has also been explored in (at least) the following papers:\\n- \\u201cReinforcement Learning with Unsupervised Auxiliary tasks\\u201d (Jaderberg et al, 2017)\\n- \\u201cFeature Control as Intrinsic Motivation for Hierarchical Reinforcement Learning\\u201d (Dilokthanakul et al, 2017)\\n- \\u201cIndependently Controllable Factors\\u201d (Thomas et al, 2017)\\n- \\u201cDisentangling Controllable and Uncontrollable Factors of Variation by Interacting with the World\\u201d (Sawada, 2018)\", \"relying_on_the_position_of_the_agent_on_the_screen_to_drive_exploration_in_atari_games_has_also_been_used_in\": \"\\u201cDeep Curiosity Search: Intra-Life Exploration Improves Performance on Challenging Deep Reinforcement Learning Problems\\u201d (Stanton & Clune, 2018)\", \"other_remarks\": [\"Please share the code if possible\", \"In the Introduction, the sentence \\u201cit is still an open question on how to construct an optimal representation for exploration\\u201d seems to repeat \\u201cthere is an ongoing open question about the most effective way of using neural network representations for exploration\\u201d => I wonder if one was supposed to replace the other?\", \"On p.2, last line containing citations: Pathak et al should be in the parentheses\", \"Please explicitly refer to Fig. 1 (Right) in 3.1\", \"On p.4, three lines above eq. 5, there is a hat{alpha} that should probably be hat{a}\", \"Is the left hand side L in eq. 5 the same as L^inv in Alg. 1? 
If so please use the same notations\", \"\\u201cprivious\\u201d work in 3.2\", \"In 3.2 please briefly explain what psi is going to be. It is a bit confusing to have it appear \\u201cout of nowhere\\u201c, with no details on how it is constructed.\", \"Please explain what the different shades mean in Fig. 2-3\", \"In Table 2\\u2019s caption please add a reference for DQN-PixelCNN. Also what do the star and cross symbols mean next to the algorithms\\u2019 names?\", \"\\u201ccoule\\u201d at end of 4.6\", \"The \\u201cWatson\\u201d citation is duplicated in references\", \"Why are there games with no tau in Table 4? Is it because there was no such clustering on these games? (if yes, that was not clear in the paper). And how was tau chosen for other games? (in particular I want to make sure the RAM state was not used to optimize it)\"], \"update_2018_11_23\": \"I am reducing my rating to 5 (from 6) due to the absence of author response regarding a potential revision addressing my comments/questions as well as those from other reviewers\", \"update_2018_11_27\": \"I am increasing my rating to 7 (from 5) after the authors responded to reviewers' comments and uploaded a revised version of the paper\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
H1eMBn09Km | Using GANs for Generation of Realistic City-Scale Ride Sharing/Hailing Data Sets | [
"Abhinav Jauhri",
"Brad Stocks",
"Jian Hui Li",
"Koichi Yamada",
"John Paul Shen"
] | This paper focuses on the synthetic generation of human mobility data in urban areas. We present a novel and scalable application of Generative Adversarial Networks (GANs) for modeling and generating human mobility data. We leverage actual ride requests from ride sharing/hailing services from four major cities in the US to train our GANs model. Our model captures the spatial and temporal variability of the ride-request patterns observed for all four cities on any typical day and over any typical week. Previous works have succinctly characterized the spatial and temporal properties of human mobility data sets using the fractal dimensionality and the densification power law, respectively, which we utilize to validate our GANs-generated synthetic data sets. Such synthetic data sets can avoid privacy concerns and be extremely useful for researchers and policy makers on urban mobility and intelligent transportation. | [
"ride-sharing",
"generative modeling",
"parallelization",
"application"
] | https://openreview.net/pdf?id=H1eMBn09Km | https://openreview.net/forum?id=H1eMBn09Km | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SkeGUd02yN",
"BylCid5VyV",
"B1l1garq0X",
"HyxzQNH9CX",
"SklwxdMcAm",
"S1log2v6nQ",
"SyezMqrt37",
"SyeIpvuunX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544509514186,
1543968934401,
1543294183348,
1543291929859,
1543280623490,
1541401587044,
1541130762447,
1541076925779
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1519/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1519/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1519/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1519/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1519/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1519/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1519/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1519/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"While the reviewers all agree that this paper proposes an interesting application of GANs, they would like to see clearer explanations of the technical details, more convincing evaluations, and better justifications of the assumptions and practical values of the proposed algorithms.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting application of GANs but weak evaluations\"}",
"{\"title\": \"Thanks for the clarification\", \"comment\": \"Thanks for clearing up some of the issues in the paper. I appreciate that you want to keep this paper 'simple' in terms of model and do other work in the future. However, I feel that you would need to do some of these other items to get this paper to a higher level for a top conference such as this one.\"}",
"{\"title\": \"Rebuttal\", \"comment\": \"[Thanks a lot for your valuable feedback and time.]\\n- A GAN approach is normally ... all the weeks available? \\n>>> Yes, using data from multiple weeks would have been better. However our initial analysis of the data, there is a very strong correlation of the ride request patterns from week to week for each city. We made the decision to use one week of real data for this paper since the actual amount of ride request data for each city is already quite sizeable. Future work can certainly examine data from multiple weeks.\\n\\n- It is not clear how the heat-maps ... this is not clearly defined. \\n>>> We have added text to the first paragraph under section 2.1. Essentially, after generating locations (pickup and drop-off) using GANs, we use graph generators to pair GANs generated pickup locations and drop-off locations to obtain synthetic data sets with complete ride request information. \\n\\n- The conversion of data to ... to this specific problem. \\n>>> Also, metrics (DPL and fractal dimension) to understand such a dynamic phenomena has not been studied before. (see responses to Reviewers 1 and 2)\\n\\n- \\\"Our real ride request data sets ... one model? \\n>>> We use the real data from each city to train the model for that city. From previous publications, we know that each city can exhibit different ride request patterns. Hence we adopted the city-specific approach. For each city, we do partition the city into blocks and perform the training process for the blocks in parallel and then stitched the end results from all the blocks of a city into one data set for that city. \\n\\n- \\\"Hence the week-long data should be quite representative.\\\" ... one of these? \\n>>> We have looked at weeks with holidays, rainy days, and other anomalies. Such events have an impact on the ride request pattern. Our goal in this work is to highlight how training GANs using a representative week of data can be used to generate realistic ride requests that capture the temporal and spatial properties for a typical week. To be able to account for (and potentially predict) specific anomalies, such as holidays, weather condition, special events, etc., is a very interesting topic for future research.\\n\\n- \\\"Hence we believe the ride request ... evidence to back it up.\\n>>> Yes, this belief is based on our optimistic assumption. We have carefully selected the four US cities where ride sharing/hailing services have very high penetration rates. Also these penetration rates have continued to increase in these cities. Based on previously published papers in related areas such as traffic flow modelling, the necessary sampling rate for sampled data set to be representative for overall traffic flow is usually lower than the penetration rates of ride sharing services in these cities.\\n\\n- \\\"and lump together all the ... of 5 minutes? \\n>>> Yes. \\n\\n- \\\"Each image of that block ... more difficult? \\n>>> Variability does make it difficult but due to paucity of data from multiple weeks we went with this approach. Although, in the footnote on page 4 we do highlight that the labels can easily be modified if data from multiple weeks is available. (see response to other reviewers)\\n\\n- \\\"We find that small networks ... to support this. \\n>>> We did experiment with increasing the number of hidden layers for MNIST and our data set. In both cases, the difference in the images generated was negligible. 
Also, work on a better network with potentially convolutional layers and/or RNNs is planned for future. \\n\\n- \\\"This network is ... are you referring to? \\n>>> Clarified in the revised paper. We are referring to the images with pixel values as the ridership and label as the time snapshot at which the image is captured. \\n\\n- \\\"This is found to ... - evidence? \\n>>> By experiments using MNIST and ride request data; and prior work by Lee, \\u201cControllable generative adversarial network.\\u201d\\n\\n- \\\"In this work we set ... value arrived at? \\n>>> We wanted to keep a simple model with image size similar to MNIST which has been studied extensively using GANs. (see response to Reviewer 2)\\n\\n- You state that GPUs ... analysis of this. \\n>>> We added our best result for training on GPUs to the revised version. Frankly the experimental results using GPUs did not give us better performance. We suspect that in the AWS infrastructure the efficiency of using large clusters of GPUs is not as scalable as clusters of CPUs, especially enhanced by Intel\\u2019s MKL and Berkeley\\u2019s Ray. It is also possible that the virtualization of GPUs comes with a higher overhead than that for the CPUs. This thread of work specifically for our application and other similar ones require more investigation. \\n\\n- \\\"and other useful functions\\\" - such as? \\n>>> Added reference to Intel\\u2019s MKL primitives. \\n\\n- \\\"Running times for sampling ... idea of scale should be provided.\\n>>> Sampling and stitching images takes a couple of minutes for any city on a laptop with a reasonable hardware configuration.\"}",
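For concreteness, a minimal sketch of estimating the Densification Power Law exponent used as a temporal validation metric in this exchange: e(t) proportional to n(t)^alpha, fit by least squares in log-log space. The input arrays of node and edge counts over time are assumptions for illustration.

```python
import numpy as np

# Hypothetical sketch: DPL exponent as the slope of log(edges) vs log(nodes).
def dpl_exponent(node_counts, edge_counts):
    n = np.log(np.asarray(node_counts, dtype=float))
    e = np.log(np.asarray(edge_counts, dtype=float))
    alpha, _ = np.polyfit(n, e, 1)  # slope of the log-log fit
    return alpha
```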
"{\"title\": \"Rebuttal\", \"comment\": \"[Thanks a lot for your valuable feedback and time.]\\n\\n- However, it can only generate the pickup location, which makes me a little disappointed. \\n>>> This has now been clarified in the paper. We did generate pickup points and drop-off points using separate GANs models. (see our response to Reviewer 1)\\n\\n- In real world applications, in riding hailing industry, the demand/supply estimation have been wide investigated.\\n>>> Having generated both pickup and drop-off points, we can generate an entire ride request (pickup location and time + drop-off location and time) as opposed to just predict how many requests are going to be generated from a particular area. This type of fine-grained (both temporally and spatially) generation of ride request has not been found in prior literature. We do this at very fine granular time intervals (5 min) and spatial areas (50mx50m) for very large geographical regions covering entire greater metro area of a large city.\\n \\n- Technically the contribution is not significant. \\n>>> Converting ride requests into images is a novel idea. Importantly, our application provides a very relevant and real-world application of GANs. Also, metrics (DPL and fractal dimension) to understand such dynamic phenomena like real-time on-demand ride requests have not been studied before. \\n\\n- The model is only trained for each small area in the city. The training set is very limited, which may make the model overfit. \\n>>> Our model is actually trained for the entire city and we use training data set for the entire city. The reason for partitioning the entire city into blocks that are trained separately is to exploit parallelism available in the AWS training infrastructure so as to reduce the overall training time. The final GANs generated data set covers the entire city by stitching together results from all the blocks of a city. The actual data set in terms of the total number of ride requests is quite sizable and much larger than most other research data sets.\\n\\n- Thus the experiments and the results are not convincing. \\n>>> We have added additional data in the revised version of the paper highlighting close similarity, based on the fractal dimension metric, for both the pickup and dropoff points. (see our response to Reviewer 1)\\n\\n- In current GIS or transportation area, usually we would use a unique model for the whole city. At least the authors should discuss their algorithm for the scale issue. \\n>>> Ideally we want to use one single model for the entire city, i.e. not partition into blocks. However, this will make the total training time unacceptable because of the inability to leverage parallelism of the training infrastructure. By partitioning the entire city into blocks, we create spatial parallelism that can take advantage of infrastructure parallelism to reduce the training time. Yes, by increasing the block size we can potentially capture larger patterns that span multiple blocks. However, this can significantly increase training time. So, it is a trade off. Based on our experimental results, our selected block size of 1200mx1200m seems to perform quite well in capturing the ride request patterns for SF, NY, & Chicago. For these three cities the fractal range that captures the spatial distribution patterns all start at much lower values. (see Table 2 and Table 3). Increasing the block size for these cities is not necessary. 
Only LA has a fractal range starting at a much higher value, with the lowest mean fractal dimensionality. For LA, increasing the block size can potentially improve these results by capturing larger (spatially) patterns.\"}",
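A minimal sketch of the block partitioning described in this response: tiling a city's bounding box into 1200m x 1200m blocks so each block can be trained in parallel. The metres-per-degree conversion is a flat-earth approximation, and the function is an assumption, not the authors' code.

```python
import math

# Hypothetical sketch: partition a lat/lon bounding box into square blocks.
def make_blocks(lat_min, lat_max, lon_min, lon_max, block_m=1200.0):
    m_per_deg_lat = 111_320.0
    m_per_deg_lon = m_per_deg_lat * math.cos(math.radians((lat_min + lat_max) / 2))
    dlat, dlon = block_m / m_per_deg_lat, block_m / m_per_deg_lon
    blocks, lat = [], lat_min
    while lat < lat_max:
        lon = lon_min
        while lon < lon_max:
            blocks.append((lat, lon, lat + dlat, lon + dlon))  # block corners
            lon += dlon
        lat += dlat
    return blocks
```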
"{\"title\": \"Rebuttal\", \"comment\": \"[Thanks a lot for your valuable feedback and time.]\\n- How is the data represented? It says that pixel represents the number of ride requests ... is not well explained. \\n>>> Pixel intensity is set to zero if there is no ride request in that 50mx50m square and to positive integers when present with values reflecting the number of ride requests in that 50mx50m square (in the time interval captured by that image). This has been clarified in Section 2.1 of the revised version. \\n\\n- y-axis in Figure 2 is not explained. \\n>>> y-axis is the total ridership in a city (for each time interval).\\n\\n- Metrics should be better explained. How are edges defined, when you only model requests, not destinations?\\n>>> Each edge is directed from source (pickup) node to destination (drop-off) node. Edge weight represents the number of requests originating and ending from the same pickup to destination nodes (a node represents a 50mx50m square). More details from prior work about the construction of the ride request graph have been added to section 2.2.\\n\\n - In addition, how is D2 defined? Do we compute one for each time, or how? What exactly is \\\"side e\\\", what is a \\\"side\\\" here? \\n>>> Yes, D2 is computed for each time snapshot. Side e (\\\\epsilon) is the side length of the bounding square area, e.g 50mx50m square. Footnotes (on page 3) and related sections have been modified. \\n\\n - \\\"We can claim strong similarity ...\\\", what is this justified by? \\n>>> It is justified in figure 2 in terms of temporal fluctuation of total ridership, and also by the two summary metrics -- DPL exponent, and D2 which we leverage from previous publications. For each week the deviation in these metrics is negligible which highlights a pattern. In general, human commuting properties if quantified by metrics wouldn\\u2019t differ a lot since people residing in urban cities tend to have a consistent pattern like visiting offices in the morning; visiting restaurants and bars in the evening. Anomalies are bound to happen but on average such anomalies, generally short lived in time and constrained in space, will not have much impact on the overall average of the two metrics we are computing. Essentially these two metrics serve as very useful first order characterizations of temporal and spatial variations in the mobility patterns within a city.\\n\\n- Second paragraph in Section 3.1 is not clear, reads very strangely. \\n>>> Corrected in revised version.\\n\\n- It is clear from Figure 2 ... chose to ignore that fact during modeling. \\n>>> This is our initial attempt at this problem. We train our models using data from weekdays and weekends without separating them. If we had access to data from multiple weeks, we would have trained weekdays and weekends separately leading to much better generators. Actually we can go even further to train a different model for each day of the week based on potential variations even within weekend and weekday groups.\\n\\n- The authors mention that cells as 1.2km ... resolution. \\n>>> The figure is not drawn to scale is only for illustration; this is now clarified in the paper.\\n\\n- For the classifier, it says that \\\"time sequence of the data\\\" is a label, what does this mean? ... simply the 5-min label was used. \\n>>> Yes, it is simply the label for each 5-min interval.\\n\\n- Could we add the metrics to the loss, to enforce them as the authors say that that would result in strong similarity? 
\\n>>> Not sure how helpful that would be. We would prefer the GANs to understand the pattern of requests without explicitly highlighting the metrics specific to the domain which capture the dynamism of the phenomena.\\n\\n- One of the major flaws of the paper is missing baseline. \\n>>> To the best of our knowledge, prior work has focused on demand and supply generation i.e. the number of people who will request at a particular location or the number of vehicles which will be required at some geographical location. We are generating the entire ride request (source to destination) and then proposing metrics which capture the spatial and temporal variations of such ride requests. This paper focuses on a novel way to generate synthetic data set and the validation of the data set using the two metrics for temporal and spatial attributes. The relevant comparison with any baseline would need to involve implementing some application on top of and using the two different data sets (real and synthetic). The results from the real data set then become the baseline. We are pursuing a number of such applications.\\n\\n- Again, I am not sure how results in Section 5.2 ... requests are modeled. \\n>>> This has been corrected. We did study both pickup points and drop-off points using separate GANs models. In the initial submitted version we only show the data for the pickup points. We have added D2 metric table to highlight the similarity between real and synthetic datasets for both pickup and drop-off locations. \\n\\n- Footnote 2 in the conclusion ... in the paper.\\n>>> Clarified in the revised paper.\"}",
"{\"title\": \"Interesting idea, weak eval\", \"review\": [\"The authors propose an interesting idea of generating synthetic data sets for ride sharing. In particular, they split the space/time into small spatial/temporal cells (50mx50m and 5min) each containing number of requests (or a scaled version of it), and train a conditional GAN to output these cell values given an input 5-min time label. They validate the results using metrics from graph and fractals theory.\", \"While the idea is interesting, the execution of the paper is lacking. Some details are missing and especially key things such as metrics should be explained better.\", \"How is the data represented? It says that pixel represents the number of ride requests, how exactly? Then, in the next paragraph it is said that pixel represents presence/absence of ride requests, so which one is it. This is a critical part of the proposal and is not well explained.\", \"y-axis in Figure 2 is not explained.\", \"Metrics should be better explained. How are edges defined, when you only model requests, not destinations? This is far from being clear.\", \"In addition, how is D2 defined? Do we compute one for each time, or how? What exactly is \\\"side e\\\", what is a \\\"side\\\" here? Basically both metrics are not well defined.\", \"\\\"We can claim strong similarity ...\\\", what is this justified by?\", \"Second paragraph in Section 3.1 is not clear, reads very strangely.\", \"Labels are being mentioned before being defined, adding to confusion.\", \"It is clear from Figure 2 that workdays and weekends are very different, yet the authors chose to ignore that fact during modeling. They do mention that we can choose any labeling we want, but still strange that for the experiments this was not taken into account.\", \"The authors mention that cells as 1.2km x 1.2km, but Figure 3 shows much different resolution. Seems that the figure is just given as an example, but reading the text one gets an impression that the figure was actually used in the paper. This needs to be clarified.\", \"For the classifier, it says that \\\"time sequence of the data\\\" is a label, what does this mean? You mean the actual label, or some time sequence? This is confusing, although it seems that simply the 5-min label was used.\", \"Could we add the metrics to the loss, to enforce them as the authors say that that would result in strong similarity?\", \"One of the major flaws of the paper is missing baseline. It is very difficult to appreciate the results without any reference result.\", \"Again, I am not sure how results in Section 5.2 are computed when only requests are modeled.\", \"Footnote 2 in the conclusion mentions baselines, yet there are none mentioned in the paper.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting application of GAN\", \"review\": \"The paper works on a very interesting problem: generating ride hailing demand map using deep learning technique. The idea is novel and interesting. The paper adopts two metric to evaluate the performance of the algorithm and shows that the performance is good. However, the problem is a little far from real world cases, which limits the contribution of the paper.\\n\\nThe title of the paper is very attractive. Before reading the paper, I was very excited and wanted to see the algorithm could generate the driving trajectory by using GAN. However, it can only generate the pickup location, which makes me a little disappointed. In real world applications, in riding hailing industry, the demand/supply estimation have been wide investigated. And it can be very accurate. It is not clear why we need to generate it. On the other hand, the paper only adopts conventional GAN to this application. Technically the contribution is not significant. The paper only considers time slot for generating the new data. In this area, much more information has been used. The authors are suggested to survey the smart transportation or riding sharing research area. The training solution is not satisfied. The model is only trained for each small area in the city. The training set is very limited, which may make the model overfit. Thus the experiments and the results are not convincing. In current GIS or transportation area, usually we would use a unique model for the whole city. At least the authors should discuss their algorithm for the scale issue.\\n\\nOverall, the paper works on a very interesting problem. However, the current solution should be improved.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Application paper of GAN based on known techniques\", \"review\": \"The paper produces a heat-map of ride-share requests in four cities in the USA. For each city 'block' they produce a time-sequence of 2016 images representing a week-long run from combining each 5-minute interval. This is used with a GAN to produce new data. The techniques applied, although not commonly used in the context of ride sharing / hailing, have been used extensively in other literature.\", \"some_major_points_on_the_paper\": \"1) A GAN approach is normally used to generate more data when enough real data is not obtainable. However, here you only use one week of data from a much larger set. Surely, it would be better to make use of all the weeks available?\\n\\n2) It is not clear how the heat-maps once produced could be used in the future. There is a hint in the results section about how they can be converted back to ride requests, but this is not clearly defined.\\n\\n3) There are a number of cases where you state that some approach has been found to be better. However, no evidence is presented for how you determined this to be true.\\n\\n4) The conversion of data to heat-maps has been used extensively in prior research. Although I'm not directly aware of the use in machine learning I am aware of the use in transport - \\\"Interactive, graphical processing unit- based evaluation of evacuation scenarios at the state scale\\\". The novelty here seems to be the application to this specific problem.\", \"more_specific_points\": [\"\\\"Our real ride request data sets consist of all the ride requests for an entire week for the four cities.\\\" - it's not clear - are all four cities used to train one model?\", \"\\\"Hence the week-long data should be quite representative.\\\" - This fails to take into account such things as national holidays or other major events such as sports. 
Did your chosen week contain one of these?\", \"\\\"Hence we believe the ride request data sets also reflect the overall urban mobility patterns for these cities.\\\" - This is a huge assumption, which would seem to need evidence to back it up.\", \"\\\"and lump together all the ride requests within each interval.\\\" - Presumably you mean that all time values are to the granularity of 5 minutes?\", \"\\\"We arbitrarily sized each block to represent an image of 24x24 pixels\\\" - this seems particularly small.\", \"\\\"Each image of that block is labeled with a time interval (for our experiments, the hour in a day).\\\" - Can the variability within an hour not make this more difficult?\", \"\\\"We find that small networks are appropriate for the training data\\\" - evidence to support this.\", \"\\\"This network is pre-trained on the training data\\\" - which training data are you referring to?\", \"\\\"This is found to increase the efficiency of the training process\\\" - evidence?\", \"\\\"In this work we set the block size for each of the four cities to be 1200 x 1200 meters\\\" - how was this value arrived at?\", \"You state that GPUs were no more efficient, it would be good to see more analysis of this.\", \"\\\"To help enhancing the scalability\\\" -> \\\"To help enhance the scalability\\\"\", \"\\\"and other useful functions\\\" - such as?\", \"Figure 4 would probably work better as a speedup graph.\", \"\\\"Running times for sampling ride requests from the trained models and stitching the images of all the blocks together are significantly less than the training times, and are not included in these results.\\\" - at least some figures to give an idea of scale should be provided.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
BkgWHnR5tm | Neural Graph Evolution: Towards Efficient Automatic Robot Design | [
"Tingwu Wang",
"Yuhao Zhou",
"Sanja Fidler",
"Jimmy Ba"
] | Despite the recent successes in robotic locomotion control, the design of robots relies heavily on human engineering. Automatic robot design has been a long studied subject, but recent progress has been slowed by the large combinatorial search space and the difficulty in evaluating the found candidates. To address the two challenges, we formulate automatic robot design as a graph search problem and perform evolution search in graph space. We propose Neural Graph Evolution (NGE), which performs selection on current candidates and evolves new ones iteratively. Different from previous approaches, NGE uses graph neural networks to parameterize the control policies, which reduces evaluation cost on new candidates with the help of skill transfer from previously evaluated designs. In addition, NGE applies Graph Mutation with Uncertainty (GM-UC) by incorporating model uncertainty, which reduces the search space by balancing exploration and exploitation. We show that NGE significantly outperforms previous methods by an order of magnitude. As shown in experiments, NGE is the first algorithm that can automatically discover kinematically preferred robotic graph structures, such as a fish with two symmetrical flat side-fins and a tail, or a cheetah with athletic front and back legs. Instead of using thousands of cores for weeks, NGE efficiently solves the search problem within a day on a single 64 CPU-core Amazon EC2
machine.
| [
"Reinforcement learning",
"graph neural networks",
"robotics",
"deep learning",
"transfer learning"
] | https://openreview.net/pdf?id=BkgWHnR5tm | https://openreview.net/forum?id=BkgWHnR5tm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Syx06uJmxV",
"Bkl1xB-3JE",
"B1xqGNaoyV",
"SJgfTweYk4",
"r1l6OwV-AQ",
"r1lAJc6xRm",
"rkeEvt6xA7",
"S1x7SdplAX",
"BkeBMdpg07",
"SygmVm7XpQ",
"HJg1qgpZTm",
"rJxUPOqhhX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544906949784,
1544455398607,
1544438802280,
1544255417785,
1542698869305,
1542670821950,
1542670684150,
1542670394979,
1542670349440,
1541776171249,
1541685383109,
1541347421872
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1518/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1518/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1518/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1518/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1518/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1518/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1518/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1518/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1518/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1518/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1518/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1518/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Lean in favor\", \"strengths\": \"The paper tackles the difficult problem of automatic robot design. The approach uses graph neural\\nnetworks to parameterize the control policies, which allows for weight sharing / transfer to new policies even\\nas the topology changes. Understanding how to efficiently explore through non-differentiable changes to the body\\nis an important problem (AC). The authors will release the code and environments, which will be useful in an area where there are \\ncurrently no good baselines (AC).\", \"weaknesses\": \"There are concerns (particularly R2, R1) over the lack of a strong baseline, and with the results\\nbeing demonstrated on a limited number of environments (R1) (fish, 2D walker). In response, the authors clarified the nomenclature and\\ndescription of a number of the baselines, and added others. AC: there is no submitted video (searches for \\\"video\\\" on the PDF text\\nproduces no hits); this is seen by the AC as being a real limitation from the perspective of evaluation. \\nAC agrees with some of the reviewer remarks that some of the original stated claims are too strong.\", \"ac\": \"the simplified fluid model of Mujoco (http://mujoco.org/book/computation.html#gePassive) is\\nunable to model the fluid state, in particular the induced fluid vortices that are responsible for a\\ngood portion of fish locomotion, i.e., \\\"Passive and active flow control by swimming fishes and\\nmammals\\\" and other papers. Acknowledging this kind of limitation will make the paper stronger, not weaker;\\nthe ML community can learn from much existing work at the interface of biology and fluid mechancis.\\n\\nThere remain points of contention, i.e., the sufficiency of the baselines. However, the reviewers R2 and R3 have\\nnot responded to the detailed replies from the authors, including additional baselines (totaling 5 at present) \\nand pointing out that baselines such as CMA-ES (R2) in a continuous space and therefore do not translate in any obvious way\\nto the given problem at hand. \\n\\nOn balance, with the additional baselines and related clarifications, the AC feels that this paper makes a \\nuseful and valid contribution to the field, and will help establish a benchmark in an important area.\\nThe authors are strongly encouraged to further state caveats and limitations, and to emphasize why some\\ncandidate baseline methods are not readily applicable.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"borderline, but lean in favor\"}",
"{\"title\": \"We thank the reviewer for the responses\", \"comment\": \"We respect the reviewer's opinion and thanks again for the response. But still, we disagree with the claim that the experiment part is weak.\\n\\nIn terms of the quality of baselines, we already include 5 comparing baselines including previous state-of-the-art. And NGE has the best performance and efficiency by a large margin (2x of previous state-of-the-art).\\n\\nThe problem is novel / under-explored and there is no existing benchmark.\\nWe put in significant efforts to design 2 structure search environments and 3 fine-tuning environments, which requires weeks (even months) of engineering (robotics xml parser, graph xml generator, forward-kinematics, states mapping, etc.). We will release the code and environments after the reviewing period.\\n\\nWe argue the evaluation of research should not be constrained by the number of experiments. And more focus can be paid on the novelty of algorithms and the inspiration that can be brought to the community.\\n\\nWe would like to emphasize that our experiments show that, in the high-fidelity simulation like MuJoCo (previous research is conducted either in 2D environments or with simplified self-made engine), no previous approach can efficiently search for athletic walker or swimmer structures.\\nUnlike the previous approaches that optimize the graph and the controllers separately, our proposed method jointly optimize discrete graph structure and the continuous controller parameters at the same time. Our joint optimization is a novel formulation, and effective approach that outperforms all the other baseline methods. \\n\\nThis paper lies in the intersection of graph learning, reinforcement learning, robotics and structure search. Although it is a small step towards automatic robot structure search, we believe it will inspire following work in robotics, graph generation and neural architecture search.\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"The response makes the paper clearer. The added comparisons are interesting, although they could be more in depth. I keep my response as it was, due to the interesting proposed approach, and the obtained results.\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for updating the paper with correct axis labels. Overall, I still feel the experiment section is very weak and the results are only shown in a few selected environments. Hence, I keep my review to be same, i.e., 6.\"}",
"{\"title\": \"Revised version available -- any updated opinions?\", \"comment\": \"Thanks to all for the detailed reviews and corresponding responses.\\n\\nA revised version has been posted. There is also a useful \\\"Compare Revisions\\\" choice when you get to the Revisions page.\\n\\nIt would be good to hear from the reviewers if their concerns have been addressed, and if they are going to make any score revisions. There is still some disparity, mainly surrounding the experimental evaluation.\\n\\nmany thanks (area chair)\"}",
"{\"title\": \"General Response to the Reviewers\", \"comment\": \"We thank the reviewers for their response and suggestions. We have updated the paper and summarized the modifications here based on their feedback.\\n\\n1. The abbreviation for \\u201cevolutionary structure search\\u201d is now changed from \\u201cES\\u201d to \\u201cESS\\u201d to reduce ambiguity. \\u201cES\\u201d is abbreviated for \\u201cevolutionary search\\u201d and \\u201cevolutionary structure search\\u201d simultaneously in our original submission. \\n\\n2. We rename \\u201cGraph Mutation (GM)\\u201d into \\u201cGraph Mutation with Uncertainty (GM-UC)\\u201d.\\n\\n3. We added additional baselines from previous literature to benchmark the performance of our algorithm, and show that our proposed algorithm has significant improvement both quantitatively and qualitatively.\\n\\nIn particular, we added the following baselines:\\n\\na. ESS-Sims (Sims, 1994)\\nThis method was proposed in (Sims, 1994), and applied in (Cheney, 2014), (Taylor, 2017), which has been the most classical and successful algorithm in automatic robotic design.\\nIn the original paper, the author used evolutionary strategy to train a human-engineered one-layer neural network. With the recent progress of the robotics and reinforcement learning, we replaced the network with a 3-layer MLP and trained it with PPO instead of evolutionary strategy.\\n\\nb. ESS-Sims-AF\\nIn the original (Sims, 1994), amortized fitness is not used.\\nAlthough amortized fitness could not be applied in ESS since the shape of network parameters is changing, amortized fitness could be applied among agents with the same topology. We named this variant of ESS-Sims as \\u201cESS-Sims-AF\\u201d.\\nThis algorithm is essentially the old \\u201cES\\u201d baseline in the earliest revision of the paper.\\n\\nc. ESS-GM-UC\\n\\u201cESS-GM-UC\\u201d is a variant of \\u201cESS-Sims-AF\\u201d combined with Graph Mutation with Uncertainty. We would also want to explore how will GM-UC affect the performance without the use of structured model like GNN.\\n\\nd. ESS-BodyShare\\nWe would also want to answer the question of whether the graph neural network is needed.\\nAs suggested by Reviewer 3, besides unstructured models like fully-connected network, we designed a structured model by removing the message propagation mode and named it \\u201cESS-BodyShare\\u201d\\n\\ne. RGS (random graph search)\\nThe same baseline as described in the earlier revision.\\n\\nThe final performance the NGE and baselines are now shown in Figure 2 in the latest revision, which we summarize as the following table. \\n\\n\\t | NGE | ESS-Sims | ESS-Sims-AF | ESS-GM-UC | ESS-BodyShare | RGS\\nFish | ** 70.21 ** | 38.32 | 51.24 | 54.40 | 54.97 | 20.96\\nWalker | ** 4157.9 ** | 1804.4 | 2486.9 | 2458.19 | 2185.1 | 1777.3\\n\\nThe results show that NGE is significantly better than previous approaches and baselines. \\n\\n4. We improved the writing of the paper.\\nIn particular, we added more literature review on related work as requested by the reviewers.\\nAnd we re-organized the writing of section 3.1, 3.2, 3.4, so that it is easier to understand and cause less confusion.\\n\\nSims, 1994, \\\"Evolving virtual creatures.\\\" Proceedings of the 21st annual conference on Computer graphics and interactive techniques. ACM, 1994.\\n\\nCheney, 2014, et al. \\\"Unshackling evolution: Evolving soft robots with multiple materials and a powerful generative encoding.\\\" ACM SIGEVOlution 7.1 (2014): 11-23.\\n\\nTaylor, 2017. 
\\\"Evolution in virtual worlds.\\\" arXiv preprint arXiv:1710.06055 (2017).\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We are afraid that there seems to be some confusion regarding our paper. We apologize if this is caused by the lack of clarity in the use of abbreviation \\u201cES\\u201d (see general response). In the latest revision, \\u201cEvolutionary structure search\\u201d is abbreviated as \\u201cESS\\u201d for clarity. We emphasize that in the paper, NO \\u201cevolutionary strategy\\u201d but rather PPO is used to train the policy (see Section 2.1 and 3.2).\\n\\nWe hope the reviewer can take time to revisit the paper in the light of this inconsistency. Also, we now have 5 baselines from previous research and modern variants, which we believe further showcases our contributions.\", \"q1\": \"The experiments do not include any strong baseline\\n\\nWe added more baselines to further strengthen the significance of our work with respect to the previous approaches.\\n\\nThe baselines now include (a)\\u201cESS-Sims\\u201d (Sims, 1994), (Cheney, 2014), (Taylor, 2017), (b) ESS-Sims-AF, (c) ESS-GM-UC, (d) ESS-BodyShare and (5) Random graph search. We refer to the details of each baseline in the general response.\\n\\n\\t | NGE | ESS-Sims | ESS-Sims-AF | ESS-GM-UC | ESS-BodyShare | RGS\\nfish | **70.21** | 38.32 | 51.24 | 54.40 | 54.97 | 20.96\\nWalker | **4157.9** | 1804.4 | 2486.9 | 2458.19 | 2185.1 | 1777.3\\n\\nThe results show that NGE is significantly better than previous approaches and baselines. We did an ablation study by sequentially adding each sub-module of NGE separately. The table shows that submodules are effective and increase the performance of graph search.\", \"q2\": \"a) Optimizing both the controller and the hardware has been previously studied in the literature. Is it worth using a neural graph? b) All algorithms should optimize both G and theta for a fair comparison.\\n\\nBy \\u201coptimizing both G and theta\\u201d, we meant to indicate that the learned controllers can be transferred to the next generation even if the topologies are changed (instead of throwing away old controllers). We note that only NGE among all the baselines has the ability to do that. Graph neural network formulation is KEY here, enabling it to perform this efficient policy transfer.\\nTo the best of our knowledge, the traditional methods require re-optimizing theta from scratch for each different topology, which is computationally demanding and breaks the joint-optimization. \\nNGE approximately doubles the performance of previous approach (Sims, 1994) as shown in Q1.\\n\\nPlease refer to Section 3.1 and Section 3.4 for more details.\", \"q3\": \"You should use an existing ES implementation (e.g., from some well-known package) instead of a naive implementation, and as additional baseline also CMA-ES.\\n\\nAgain, we apologize for the confusing use of \\u201cES\\u201d abbreviation. Evolutionary strategy is not used in the paper. We invite the reviewer to re-read our paper, since it seems to have led to a major misunderstanding.\\nCMA-ES updates and utilize the covariance matrix of sampling distribution, which is not directly applicable to discrete structure optimization. 
We believe it will be a valuable future research direction.\", \"q4\": \"Providing the same computational budget seems rather arbitrary and depends on implementation.\\n\\nWe are unsure what the reviewer is indicating, and would appreciate additional clarification.\\nIn terms of the computational budget for each experiment, we compared the different algorithms under different computational budget metrics, more specifically \\u201cwall-clock time\\u201d, \\u201cnumber of updates\\u201d, and \\u201cfinal converged performance\\u201d. NGE performs best among all algorithms.\\nWe emphasize that wall-clock time is a more common and realistic metric for comparing structure search methods in practice. \\n\\nWe agree that the computational budget depends on the implementation; the curves in the paper are plotted against the number of iterations/parameter updates, which is independent of the implementation.\", \"q5\": \"The writing of the paper\\n\\nWe sincerely thank the reviewer for the suggestions. We have updated the paper in the latest version accordingly.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the reading and suggestions of our paper.\", \"q1\": \"The exact difference between the proposed method and the ES baseline is not as clear as it could be.\\n\\nWe agree and apologize for the lack of clarity in some parts of our paper. We renamed all the models based on the original papers and their properties. We refer the reviewer to general response for further details of each baseline algorithms.\\nWe also improved clarity in the revised version.\", \"q2\": \"The second point is that the proposed approach seems to modify a few things from the ES baseline.\\n\\nWe thank the reviewer for the insightful suggestion. In the latest version, to test the efficacy of each submodule of NGE, the baselines now include the algorithm with the inclusion of the pruning step, and the algorithms with AF and without AF using MLP.\\n\\nMore specifically, the baselines are named:\\n\\n1. ESS-Sims\\nIt is the baseline algorithm without the use of AF, as use by (Sims, 1994), (Cheney, 2014) and (Taylor, 2017).\\n2. ESS-Sims-AF\\nThe modern variant of ESS-Sims with the inclusion of AF.\\n3. ESS-GM-UC\\nThe modern variant of ESS-Sims with the inclusion of AF and graph mutation with uncertainty (pruning).\\nFor this baseline, we included the pruning module on top of ESS-Sims-AF. Similar to the original baselines available, we performed a grid search of hyperparameters and plot the average performance of the best set of hyperparameters.\\n\\n\\t | NGE | ESS-Sims | ESS-Sims-AF | ESS-GM-UC | ESS-BodyShare | RGS\\nfish | **70.21** | 38.32 | 51.24 | 54.40 | 54.97 | 20.96\\nWalker | **4157.9** | 1804.4 | 2486.9 | 2458.19 | 2185.1 | 1777.3\\n\\nNotice that GM-UC has a lower performance gain with the fully-connected network (ESS-Sims) than with GNN. We speculate that this happens in ESS-Sims because the controller is less dependent on the graph structure, and thus the fitness does not well capture the information about the topology. Thus, GM-UC is not able to extract as much information as with GNN.\\n\\nOn the other hand, the use of AF can greatly affect the performance. The previous approach ESS-Sims can only get 38.32 / 1804 average final reward for fish and walker, respectively. The performance of walker is even very close to random graph search with no evolution. With the help of AF, the performance increases from 38.32 to 51.24 and 1804.4 to 2486.9, respectively.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the suggestions.\", \"q1\": \"Robot design were explored in (Sims, 1994) etc. The novelty of the paper is fairly incremental.\\n\\nWe respectfully disagree and believe our contributions are significant. We note that only NGE among all the baselines has the ability to optimize both the graph G and the controller parameters. Graph neural network formulation is KEY here, enabling it to perform this efficient policy transfer. To the best of our knowledge, the traditional methods (such as (Sims, 1994)) require re-optimizing parameters of the controllers from scratch for each different topologies, which is computationally demanding and breaks the joint-optimization. \\n\\nTo further showcase our work with respect to prior art, we added (Sims, 1994) as an additional baseline in the latest revision. We refer the reviewer to the general response for details. NGE has about 2x performance of (Sims, 1994) in both fish and walker environments. Moreover, we argue the videos of (Sims, 1994) might be confusing as it mixes the results of policy evolution from human-designed robots and structure evolution.\", \"q2\": \"Can it be applied to more complex morphologies? Humanoid etc. maybe?\\nNGE can be applied to evolve humanoids, however, there are two major difficulties in doing that in practice.\\n1. Training humanoid controllers is of orders of magnitude more difficult than training cheetah (Schulman, 2017).\\n2. To evolve realistic humanoid structure (e.g. hands, symmetrical limbs), one would need to have more realistic environments that better reflect tasks and complexity in the real world.\\nHowever, we agree that this is a very interesting direction for the future.\", \"q3\": \"Comparison to more baseline, for example models with no message passing.\\n\\nWe thank the reviewer for pointing out the baseline of no message passing in GNN, which we named as ESS-BodyShare. \\n\\nIn the latest revision, we have 5 baselines from previous research and modern variants, which further showcases the significance of our work. In general, NGE has significant improvement both quantitatively and qualitatively. We refer the reviewer to the general response for further information.\", \"specifically_for_ess_bodyshare_baseline\": \"| NGE | ESS-BodyShare\\nfish | 70.21 | 54.97 (78.3% of NGE) \\nWalker | 4157.9 | 2185.1 (52.5% of NGE)\\n\\nIn environment where global information is needed (for example, walker with multiple rigid body contact), the performance is jeopardized. But in an easier environment, message passing is less needed.\", \"q4\": \"Clarification of Figure-4 (Section-4.2)\\n\\nOur aim was to show that in the case where the human-engineered topology needs to be preserved, it is better to co-evolve the attributes and controllers with NGE rather than only training the controllers (controllers are trained from scratch for both NGE and baselines).\\n\\nThe x-axis was scaled according to the number of updates. We apologize for the lack of clarity. We revised the x-axis from \\u201cgenerations\\u201d to parameter \\u201cupdates\\u201d in the latest revision.\\n\\nIn the latest revision, we also included the curve where the topologies are allowed to be changed, which leads to better performance, but does not necessarily preserve the initial structure.\\n\\nSchulman, 2017. \\\"Proximal policy optimization algorithms.\\\" arXiv preprint arXiv:1707.06347 (2017).\"}",
"{\"title\": \"Interesting approach, inconclusive experiments\", \"review\": \"This paper proposes an approach for automatic robot design based on Neural graph evolution.\\nThe overall approach has a flavor of genetical algorithms, as it also performs evolutionary operations on the graph, but it also allows for a better mechanism for policy sharing across the different topologies, which is nice.\\n\\nMy main concern about the paper is that, currently, the experiments do not include any strong baseline (the ES currently is not a strong baseline, see comments below). \\nThe experiments currently demonstrate that optimizing both controller and hardware is better than optimizing just the controller, which is not surprising and is a phenomenon which has been previously studied in the literature.\", \"what_instead_is_missing_is_an_answer_to_the_question\": \"Is it worth using a neural graph? what are the advantages and disadvantages compared to previous approaches?\\nI would like to see additional experiments to answer this questions.\\n\\nIn particular, I believe that any algorithms you compare against, you should optimize both G and theta, since optimizing purely the hardware is unfair.\\nYou should use an existing ES implementation (e.g., from some well-known package) instead of a naive implementation, and as additional baseline also CMA-ES. \\nIf you can also compare against one or two algorithms of your choice from the recent literature it would also give more value to the comparison.\", \"detailed_comments\": [\"in the abstract you say that \\\"NGE is the first algorithm that can automatically discover complex robotic graph structures\\\". This statement is ambiguous and potentially unsupported by evidence. how do you define complex? that can or that did discover?\", \"in the introduction you mention that automatic robot design had limited success. This is rather subject, and I would tend to disagree. Moreover, the same limitations that apply to other algorithms to make them successful, in my opinion, apply to your proposed algorithm (e.g., difficulty to move from simulated to real-world).\", \"The digression at the bottom of the first page about neural architecture search seem out of context and interrupts the flow of the introduction. What is the point that you are trying to make? Also, note that some of the algorithms that you are citing there have indeed applied beyond architecture search, eg. Bayesian optimization is used for gait optimization in robotics, and Genetic algorithms have been used for automatic robot design.\", \"The stated contributions number 3 and 5 are not truly contributions. #3 is so generic that a large part of the previous literature on the topic fall under this category -- not new. #5 is weak, and tell us more about the limitations of random search and naive ES than necessarily a merit of your approach.\", \"Sec 2.2: \\\"(GNNs) are very effective\\\" effective at what? what is the metric that you consider?\", \"Sec 3 \\\"(PS), where weights are reused\\\" can you already go into more details or refer to later sections?\", \"First line page 4 you mention AF, without introducing the acronym ever before.\", \"Sec 3.1: the statements about MB and MF algorithms are inaccurate. Model-based RL algorithms can work in real-time (e.g. http://proceedings.mlr.press/v78/drews17a/drews17a.pdf) and have been shown to have same asymptotic performance of MB controllers for simple robot control (e.g. 
https://arxiv.org/abs/1805.12114)\", \"\\\"to speed up and trade off between evaluating fitness and evolving new species\\\" Unclear sentence. Speed up what? Why is this a trade-off?\", \"Sec 3.4 can you recap all the parameters after eq. 11? Going through Sec 3.2 and 2.2 to find them is quite annoying.\", \"Sec 4.1: I would argue that computational cost is rarely a concern among evolutionary algorithms. The cost of evaluating the function is typically more pressing, and as a result it is important to have algorithms that can converge within a small number of iterations/generations.\", \"Providing the same computational budget seems rather arbitrary at the moment, and it heavily depends on the implementation. How many evaluations do you perform for each method? Why not have the same budget of experiments?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper on co-optimizing robot structure and control\", \"review\": \"This paper discusses the optimization of robot structures, combined with their controllers. The authors propose a scheme\\nbased on a graph representation of the robot structure, and a graph-neural-network as controllers. The experiments show\\nthat the proposed scheme is able to produce walking and swimming robots in simulation. The results in this paper are impressive, and the paper seems free of technical errors. \\n\\nThe main criticism I have is that I found the paper harder to read. In particular, the exact difference between the proposed method and the ES baseline is not as clear as it could be. This makes the contribution of this paper in terms of the method\\nhard to judge. Please include further description of the ES cost function and algorithm in the main body of the paper.\\n\\nThe second point is that the proposed approach seems to modify a few things from the ES baseline. The efficacy of the separate modifications should be tested. Therefore I would like to see experiments with the ES cost function, but with\\ninclusion of the pruning step, and experiments with the AF-function but without the pruning step.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Direct application of ES with NerveNet for fitness evaluation\", \"review\": \"[Summary]:\\nThis paper tackles the problem of automatic robot design. The most popular approach to doing this has been evolutionary methods which work by evolving morphology of agents in a feed-forward manner using a propagation and mutation rules. This is a non-differentiable process and relies on maintaining a large pool of candidates out of which best ones are chosen with the highest fitness. In robot design for a given task using rewards, training each robot design using RL with rewards is an expensive process and not scalable. This paper uses graph network to train each morphology using RL. Thereby, allowing the controller to share parameters and reuse information across generations. This expedites the score function evaluation improving the time complexity of the evolutionary process.\\n\\n[Strengths]:\\nThis paper shows some promise when graph network-based controllers augmented with evolutionary algorithms. Paper is quite easy to follow.\\n\\n[Weaknesses and Clarifications]:\\n=> Robot design area has been explored extensively in classical work of Sims (1994) etc. using ES. Given that, the novelty of the paper is fairly incremental as it uses NerveNet to evaluate fitness and ES for the main design search.\\n=> Environment: The experimental section of the paper can be further improved. The approach is evaluated only in three cases: fish, walker, cheetah. Can it be applied to more complex morphologies? Humanoid etc. maybe?\\n=> Baselines: The comparison provided in the paper is weak. At first, it compares to random graph search and ES. But there are better baselines possible. One such example would be to have a network for each body part and share parameters across each body part. This network takes some identifying information (ID, shape etc.) about body part as input. As more body parts are added, more such network modules can be added. How would the given graph network compare to this? This baseline can be thought of a shared parameter graph with no message passing.\\n=> The results shown in Figure-4 (Section-4.2) seems unclear to me. As far as I understand, the model starts with hand-engineered design and then finetuned using evolutionary process. However, the original performance of the hand-engineered design is surprisingly bad (see first data point in any plot in Figure-4). Does the controller also start from scratch? If so, why? Also, it is not clear what is the meaning of generations if the graph is fixed, can't it be learned altogether at once?\\n\\n[Recommendation]:\\nI request the authors to address the comments raised above. Overall, this is a reasonable paper but experimental section needs much more attention.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
Bkxbrn0cYX | Selfless Sequential Learning | [
"Rahaf Aljundi",
"Marcus Rohrbach",
"Tinne Tuytelaars"
] | Sequential learning, also called lifelong learning, studies the problem of learning tasks in a sequence with access restricted to only the data of the current task. In this paper we look at a scenario with fixed model capacity, and postulate that the learning process should not be selfish, i.e. it should account for future tasks to be added and thus leave enough capacity for them. To achieve Selfless Sequential Learning we study different regularization strategies and activation functions. We find that
imposing sparsity at the level of the representation (i.e. neuron activations) is more beneficial for sequential learning than encouraging parameter sparsity. In particular, we propose a novel regularizer that encourages representation sparsity by means of neural inhibition. It results in few active neurons, which in turn leaves more free neurons to be utilized by upcoming tasks. As neural inhibition over an entire layer can be too drastic, especially for complex tasks requiring strong representations,
our regularizer only inhibits other neurons in a local neighbourhood, inspired by lateral inhibition processes in the brain. We combine our novel regularizer with state-of-the-art lifelong learning methods that penalize changes to important previously learned parts of the network. We show that our new regularizer leads to increased sparsity, which translates into a consistent performance improvement on diverse datasets. | [
"Lifelong learning",
"Continual Learning",
"Sequential learning",
"Regularization"
] | https://openreview.net/pdf?id=Bkxbrn0cYX | https://openreview.net/forum?id=Bkxbrn0cYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hkl9GIxXe4",
"Bkgrm1dmkV",
"SJgEC5nlkV",
"SygxmvXCAQ",
"Hkl7uQV9C7",
"rke9cE2NAX",
"B1xX3QhE0m",
"SklPsoiEAm",
"rJeWYmDAnm",
"SJeKGqFth7",
"rJeJwe0d3m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544910353771,
1543892764875,
1543715532233,
1543546647556,
1543287658758,
1542927506261,
1542927275240,
1542925214820,
1541464953183,
1541147152884,
1541099606916
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1517/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1517/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1517/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1517/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1517/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1517/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1517/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1517/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1517/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1517/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1517/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"Two of the reviewers raised their scores during the discussion phase noting that the revised version was clearer and addressed some of their concerns. As a result, all the reviewers ultimately recommended acceptance. They particularly enjoyed the insights that the authors shared from their experiments and appreciated that the experiments were quite thorough. All the reviewers mentioned that the work seemed somewhat incremental, but given the results, insights and empirical evaluation decided that it would still be a valuable contribution to the conference. One reviewer added feedback about how to improve the writing and clarity of the paper for the camera ready version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"incremental but interesting contribution to life long learning for neural networks.\"}",
"{\"title\": \"Thanks & comment on neuron importance\", \"comment\": \"Thank you for checking the figures, kindly, we would like to draw your attention to the neuron importance experiment we newly added in Table 3. Accounting for neuron importance played a crucial role in reducing interference and even when not using a LLL regularizer.\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thank you so much, we will try to shorten and move the figures in a final version.\"}",
"{\"title\": \"Performance of Finetuning in Figure 2\", \"comment\": \"To confirm the behavior of a network trained without LLL method (MAS) , we run the ReLU baseline without MAS on the sequence of 5 permuted mnist tasks and obtained the following accuracies at the end of the sequence:\\nFinetuning (ReLU NoMAS)= [40.79, 49.16, 72.5, 86.56, 97.08]\", \"compared_to\": \"ReLU (with MAS)=[95.8, 93.66, 93.32, 89.95, 89.89]\\nSLNID( Ours)=[96.46, 96.25, 95.86, 95.81, 94.77]\\n\\nAs the reviewer suggested Finetuning (ReLU without MAS) has a better performance on the last task while forgetting severely the first task. However, as we mentioned earlier all our baselines in Figures 2 &3 run with MAS as a LLL method.\"}",
"{\"title\": \"Figure 4,2\", \"comment\": \"On figure 4: I knew histograms are given in figure 4 (I said figure 5 mistakenly, but I meant figure 4). But showing overlap patterns across tasks (at different layers for instance) might be more informative.\\n\\nIn figure 8&9&10, we have shown the important neurons in the first layer after learning each task, color coded with the task id. First task important neurons are in Blue, second task important neurons in orange and 3rd task important neurons in green. Figure 11 & 12&13 show the important neurons ordered with respect to their importance estimated at the first task. It can be seen from these figures how neurons are re-reused (overlapped) and other are newly activated for each new task. If the reviewer has other suggestions we will be eager to add.\\n\\n- On figure 2: It looks weird to me because last task has the lowest accuracy even for ReLU (sequential learning w/o regularization); tuning for task 5 will lead catastrophic forgetting for previous tasks, meaning acc for task 1 be the lowest?\\n\\n All the baselines and compared methods in Figure 2 and 3 have importance weight regularizer (MAS), hence forgetting is minimized in all compared methods, accuracy in task 1 is preserved while scarifying accuracy in task 5. ReLU (sequential learning w/o regularization): we main no additional sparsity regularizer. We thank the reviewer for pointing this out, we have clarified it in the revised version. Note that our regularizer improves 4-8% over No-Reg that uses already MAS as LLL method.\\nWe will be happy to clarify any other points.\"}",
"{\"title\": \"No Task Boundaries exp and some clarificatoins\", \"comment\": \"We thank AnonReviewer1 for their suggestions and comments.\\nNote that we revised the paper, and renamed our full model to SLNID. Below are our comments to the main points:\\n\\n1) Task boundaries are still used, which limits applicability; in many scenarios which do have a continual learning problem there are no clear task boundaries, such as data distribution drift in both supervised and reinforcement learning.\\n\\n We agree with the reviewer on the importance of the mentioned setting where there are no clear task boundaries and distribution gradually drifts. Although this is orthogonal to the contribution of this work, we tested a setup where the data distribution drifts between tasks. When evaluating in this setting, we find, interestingly, that our proposed SNLID again works well in this setting compared to the LLL approach MAS (Aljundi et al.,2017), which benefits from hard task boundaries. Details can be found in the revised paper in Section 4.2 and Table 3. We believe this is an interesting setting to study further in future work.\\n\\n2) Since models used in the work are very different from SOTA models on those particular tasks, it is hard to determine from the paper how the proposed method influences these models. In particular, it is not clear whether these changes to the loss would still allow top performance on regular classification tasks, e.g. CIFAR-10 or MNIST even without sequential learning, or in multitask learning settings. \\n Improving the state of the art results on non sequential scenarios is not the aim of this proposed regularizer. Further, the studied setup of LLL where data from previous or future task is not available during the training of a given task is much harder and challenging than joint training or multi task training where all data is available at training time. In sec 4.2 Table 2 we compare against and outperform state of the art LLL methods under the same setting and models used in those methods.\"}",
"{\"title\": \"Updated version, Neuron Importance exp., Figures showing overlap and clarifications\", \"comment\": \"We thank AnonReviewer2 for their constructive comments, below is our reply to the main points. Note that we revised the paper, and renamed our full model to SLNID.\\n\\n1) Reasoning about why representation based regularization is more effective for life-long learning setting. \\n\\nPlease check our updated version.\\n\\n2) Importance of neurons in equation (6)\\n\\nThe importance of the neurons for a previous task is computed based on that previous task data right after training that task. This is in line with estimating the importance weight in LLL methods. While on permuted mnist, the neurons importance doesn\\u2019t seem crucial to achieve the best performance, it improves the performance on Cifar, Tiny Imagenet, and the 8 Object recognition sequence. In fact, permuted mnist is a simple scenario we use to compare all the studied methods in a setting where the differences between tasks are easily identified. The full permutation requires the network to instantiate a new representation for the new task that associates new collections of pixels to digits patterns. In such a simple case, the importance of the neuron doesn\\u2019t seem a crucial factor while in more complicated sequence where tasks overlap and relatedness is much higher the neuron importance term is a key component. In Sec2.4 Table3 we again compare our regularizer with and without neurons importance, \\nboth when evaluating the average performance using each task model and when using the last trained model. While both SLNI with and without neurons importance improve the individual models accuracy (73.03 and 72.14 respectively), the performance at the end of the sequence (using the lastly trained model) significantly drops for SLNI without neurons importance (72.14 to 63.54) compared to SLNI with the neuron importance (73.03 to 70.75). This is a clear indication of the role of neurons importance in the sequential learning scenario in excluding previously important neurons from the penalty and hence avoiding interference between tasks.\\n\\n 3) It would be great if authors can show some actual overlaps of activations across tasks (not just simple histogram as in Figure 5).\\n\\nFigure 4, bottom, shows the histogram of the mean activation on the first task achieved by each method. Figures 8 & 9 & 10 in the Appendix show the neurons importance after each task. It can be seen how new activations are initiated while reusing previous neurons. Also Figures 11&12&13, newly added, show the importance of the neurons sorted by the importance computed at the first task. It can be clearly seen which neurons are re-used and which are getting activated for the new task.\\n\\n4) And isn't g_i(x_m) a scalar? Explain why we need the norm when you get alpha.\\n\\nIn case of neurons in fully connected layers, g_i(x_m) is indeed a scalar. In the convolutional layers, importance of neurons is the norm of the gradient vector. While we only consider fully connected layers in this work, this was for sake of generality. \\nFurther, while estimating the importance, gradients are accumulated from different samples. We want to estimate how much a change in the previous task could happen when changing this neuron\\u2019s output. We are not interested in the sign of the change itself, hence we accumulate the absolute value of the gradients from different samples. 
\\nFor the sake of clarity, we replaced the norm with the absolute value sign in the new version.\\n\\n5) It would be nice to clarify what the task sequence looks like in Figure 2. It is hard to understand that task 5, which is the most recent learning task, has the lowest performance in all tasks.\\n\\nIn Figure 2, the first task is MNIST and the remaining tasks are permuted MNIST with different permutations. Training individual models results in similar accuracy on each task. In the sequential setting, the last task is the most recent task: the model has to learn this task without forgetting any of the previous tasks. As such, little capacity is left for the very last task. This is a known phenomenon in lifelong learning, explained in Section 2, second paragraph. For this reason, our regularizer always achieves the best performance on the last task in the sequence, as it aims at leaving capacity for later tasks. Also, as mentioned in Section 4.1, Experimental Setup, we have used a high value of \\\\lambda_{\\\\Omega} that ensures the least forgetting, which allows us to test the effect on the later tasks' performance. For example, in the experiments of Section 4.6, the accuracy on the last task for Finetuning is 90.0% (as it completely forgets the previous tasks and only cares about the last task), while for MAS it is 68.2%. Our regularizer improves the accuracy on the last task to 77%, as more capacity is left for the last task. In the paper we only report the average accuracy at the end of the sequence due to space limits.\"}",
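To make the regularizer discussed in these responses concrete, below is a numpy sketch of a local-inhibition penalty in the spirit of SLNID. The Gaussian locality window and the soft exponential discount on previously important neurons are illustrative stand-ins (the rebuttal describes excluding important neurons from the penalty outright), and `importance` would be the accumulated absolute gradient of the network output w.r.t. each neuron's activation, as explained above; this is not the authors' exact implementation.

```python
import numpy as np

def slnid_penalty(h, importance, sigma=3.0, lam=1e-4):
    """Sketch of a local neural-inhibition penalty in the spirit of SLNID.

    h:          (batch, n) activations of one layer.
    importance: (n,) accumulated |d f / d h_i| from previous tasks.
    Neurons inhibit each other with a Gaussian weight on their index
    distance, so only a local neighbourhood competes, and neurons that were
    important for earlier tasks are discounted so they are not inhibited.
    """
    n = h.shape[1]
    idx = np.arange(n)
    locality = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma ** 2))
    np.fill_diagonal(locality, 0.0)                    # no self-inhibition
    discount = np.exp(-importance)                     # spare important neurons
    corr = (h.T @ h) / h.shape[0]                      # E[h_i h_j] over the batch
    return lam * np.sum(np.outer(discount, discount) * locality * corr)

# Toy usage on ReLU-like activations of a 128-unit layer.
rng = np.random.default_rng(0)
acts = np.abs(rng.normal(size=(32, 128)))
imp = np.abs(rng.normal(size=128))
print(slnid_penalty(acts, imp))
```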
"{\"title\": \"We have updated our paper taking into account the suggested edits and here are some clarifications\", \"comment\": \"We thank AnonReviewer3 for their constructive comments, below is our reply to the main points.\\nNote that we revised the paper, and renamed our full model to SLNID.\\n\\n1) Changing the hat notation:\\n\\nFollowing the suggestion, we adapted the naming as follows an used them throughout the paper. Note that SLNID now corresponds to the complete version of our regularizer:\\n- Sparse coding through Neural Inhibition (SNI)\\n- Sparse coding through Local Neural Inhibition (SLNI)\\n- Sparse coding through Local Neural Inhibition and Discounting (SLNID)\\n\\n2) Results are presented and discussed in the introduction, and overall the intro is a bit long, resulting in parts later being repetitive.\\n\\nPlease check our updated version.\\n\\n3) Worth discussing sparsity vs. distributed representations in the intro, and how/where we want sparsity while still having a distributed representation.\\n\\nPlease check our updated version.\\n\\n4) Figure captions could use a lot more experimental insight and explanation - e.g. Figure 10 (in appendix B4)\\nWe have updated the figures captions accordingly.\\n\\nFrom Figure 8 & 9 & 10 we can deduce two main points:\\n- The important neurons are sparse, SLNID tolerates more active neurons than SNID.\\n- With each new task, new neurons are getting used and become important (Figure 9 & 10) .\\nFigures 11&12&13, newly added, where neurons are sorted w.r.t. their importance for the first task, show how new neurons are becoming important for the new tasks.\\nPrevious important neurons are also reused for the new tasks. This is not against our regularizer. Our regularizer avoids inhibiting neurons from previous tasks by excluding them, so they can be used freely (Equation 7, section 3.3). The LLL regularizer (Equation 1) ensures that their connections are not being changed drastically and hence performance preserved in previous tasks. So, both, achieving sparsity to leave space for future tasks and sharing important neurons, whenever possible, allowing forward transfer, are actually goals of our regularizer.\\n\\n5) How does multi-task joint training differ from \\\"normal\\\" classification? The accuracies especially for CIFAR seem very low.\\n\\nThe shown performance of joint training represents the average accuracy achieved on each task by masking out classifier scores of the other tasks when computing the arg max. However, the training was done using a shared 100-dimensional classification layer. We use a small network with only 128 or 256 neurons in the hidden layer, training it for 50 epochs with SGD optimizer and a learning rate of 0.01. No dropout was used, no batch normalization and no data augmentation. Our aim was to set a fair comparison between different regularizers without the interference of other mechanisms. We did not aim for state of the art results on learning jointly a dataset.\"}",
"{\"title\": \"Interesting work addressing an interesting class of problems, novel regularizer and good experiments, but writing and paper organization need work\", \"review\": \"REVISION AFTER REBUTTAL\\nWhile the revision does not address all of my concerns about clarity, it is much better. I still think that the introduction is overly long and the subsequent sections repeat information; if this were shortened there could be room for some of the figures that are currently in appendix. I appreciate the new figures; I think that it would be great if especially figure 10 were included in the main paper. \\nI agree with the other two reviewers that the work is somewhat incremental, but the differences are well explained, the experimental results are interesting (particularly the differences of parameter vs representation-based sparsity, and the plots in appendix showing neuron importance over tasks), and the progression from SNI to SLNID is well-presented. I think overall that this paper is a good contribution and I recommend acceptance. I have updated my review to a 7. \\n===============\\n\\\"Activations\\\" \\\"Representation\\\" and \\\"Outputs\\\" are used somewhat interchangably throughout the work; for anyone not familiar it might be worth mentioning something about this in the intro.\\n \\nProblem setting is similar to open set learning (classification); could be worth mentioning algorithms for this in the related work which attempt to set aside capacity for later tasks.\\n\\nResults are presented and discussed in the introduction, and overall the intro is a bit long, resulting in parts later being repetitive.\\n\\nWorth discussing sparsity vs. distributed representations in the intro, and how/where we want sparsity while still having a distributed representation.\\n\\nShould be made clear that this is inspired by one kind of inhibition, and there are many others (i.e. inhibition in the brain is not always about penalizing neurons which are active at the same time, as far as I know)\\n\\nChanges in verb tense throughout the paper make it hard to follow sometimes. Be consistent about explaining equations before or after presenting them, and make sure all terms in the equation are defined (e.g. SNI with a hat is used before definition). Improper or useless \\\"However\\\" or \\\"On the other hand\\\" to start a lot of sentences.\\n\\nFigure captions could use a lot more experimental insight and explanation - e.g. what am I supposed to take away from Figure 10 (in appendix B4), other than that the importance seems pretty sparse? It looks to me like there is a lot of overlap in which neurons are important or which tasks, which seems like the opposite of what the regularizer was trying to achieve. This is a somewhat important point to me; I think this interesting and I'm glad you show it, but it seems to contradict the aim of the regularizer.\\n\\nHow does multi-task joint training differ from \\\"normal\\\" classification? The accuracies especially for CIFAR seem very low.\", \"quality\": \"7/10 interesting and thoughtful proposed regularizer and experiments; I would be happy to increase this rating if the insights from experiments, especially in the appendix, are a bit better explained\", \"clarity\": \"6/10 things are mostly clearly explained although frequently repetitive, making them seem more confusing than they are. 
If the paper is reorganized and the writing cleaned up, I would be happy to increase my rating because I think the work is good.\", \"originality\": \"8/10 to my knowledge the proposed regularizer is novel, and I think identifying the approach of \\\"selfless\\\" sequential learning is valuable (although I don't like the name)\", \"significance\": \"7/10 I am biased because I'm interested in LLL, but I think these problems should receive more attention.\", \"pros\": [\"proposed regularizer is well-explained and seems to work well, ablation study is helpful\"], \"cons\": \"- the intro section is almost completely repetitive of section 3 and could be significantly shortened to make more room for some of the experimental results to be moved from the appendix to the main text\\n - some wording choices and wordiness make some sentences unclear, and overall the organization and writing could use some work\\n\\nSpecific comments / nits: (in reading order)\\n1. I think the name \\\"selfless sequential learning\\\" is a bit misleading and sounds like something to do with multiagent cooperative RL; I think \\\"forethinking\\\" or something like that which is an actual word would be better, but I can't think of a good word... maybe frugal? \\n2. Mention continual/lifelong learning in the abstract\\n3. \\\"penalize changes\\\" maybe \\\"reduce changes\\\" would be better?\\n4. \\\"in analogy to parameter importance\\\" cite and explain parameter importance\\n5. \\\"advocate to focus on selfless SL\\\" focus what? For everyone doing continual learning to focus on methods which achieve that through leaving capacity for later tasks? This seems like one potentially good approach, but I can imagine other good ones (e.g. having a task model)\\n6. LLL for lifelong learning is defined near the end of the intro, should be at the beginning when first mentioned\\n7. \\\"lies at the heart of lifelong learning\\\" I would say it is an \\\"approach to lifelong learning\\\"\\n8. \\\"fixed model capacity\\\" worth being specific that you mean (I assume) fixed architecture and number of parameters\\n9. \\\"those parameters by new tasks\\\" cite this at the end of the sentence, otherwise it is unclear what explanation goes with which citation\\n10. \\\"hard attention masks, and stored in an embedding\\\" unclear what is stored in the embedding. It would be more helpful to explain how this method relates to yours rather than just describing what they do.\\n11. I find the hat notation unclear; I think it would be better just to have acronyms for each setting and write out the acronyms in the caption\\n12. \\\"richer representation is needed and few active neurons can be tolerated\\\" should this be \\\"more active neurons\\\"?\\n13. The comparison with state of the art section is repetitive of the results sections\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review for Selfless Sequential Learning\", \"review\": \"This paper deals with the problem of catastrophic forgetting in lifelong learning, which has recently attracted much attention from researchers. In particular, authors propose the regularized learning strategies where we are given a fixed network structure (without requiring additional memory increases in the event of new task arriving) in the sequential learning framework, without the access to datasets of previous tasks. Performance comparisons were performed experimentally against diverse regularization methods including ones based on representation, based on parameter itself, and the superiority of representation-based regularization techniques was verified experimentally. Based on this, authors propose a regularization scheme utilizing the correlation between hidden nodes called SNI and its local version based on Gaussian weighting. Both regularizers are even extended to consider the importance of hidden nodes. Through MNIST, CIFAR, and tiny Imagenet datasets, it has been experimentally demonstrated that the proposed regularization technique outperforms state-of-the-art in sequential learning.\\n\\nIt is easy to follow (and I enjoyed the way of showing their final method, starting from SNI to SLNI and importance weighting). Also it is interesting that authors obtained meaningful results on several datasets beating state-of-the-arts based on very simple ideas.\\n\\nHowever, given Cogswell et al. (2015) or Xiong et al. (2016), it seems novelty is somehow incremental (I could recognize that this work is different in the sense that it considers local/importance based weighting as well as penalizing correlation based on L1 norm). Moreover, there is a lack of reasoning about why representation based regularization is more effective for life-long learning setting. Figure 1 is not that intuitive and it does not seem clearly describe the reasons. \\n\\nMy biggest concern with the proposed regularization technique is the importance of neurons in equation (6). It is doubtful whether the importance of activation of neurons based on \\\"current data\\\" is sufficiently verified in sequential learning (in the experimental section, avg performance for importance weight sometimes appears to come with performance improvements but not always). It would be great if authors can show some actual overlaps of activations across tasks (not just simple histogram as in Figure 5). And isn't g_i(x_m) a scalar? Explain why we need the norm when you get alpha.\\n\\nIt would be nice to clarify what the task sequence looks like in Figure 2. It is hard to understand that task 5, which is the most recent learning task, has the lowest performance in all tasks.\\n\\n-----------------------------------------------------------------------------------------------------\\n- On figure 4: I knew histograms are given in figure 4 (I said figure 5 mistakenly, but I meant figure 4). But showing overlap patterns across tasks (at different layers for instance) might be more informative. 
\\n- On figure 2: It looks weird to me because the last task has the lowest accuracy even for ReLU (sequential learning w/o regularization); tuning for task 5 will lead to catastrophic forgetting of previous tasks, meaning the acc for task 1 should be the lowest?\\n\\n-----------------------------------------------------------------------------------------------------\\n- My concerns about the figures are resolved; I want to thank the authors for their efforts.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thorough continual learning work but limited to task-based case\", \"review\": \"[REVISION]\\nThe work is thorough and some of my minor concerns have been addressed, so I am increasing my score to 6. I cannot go beyond because of the incremental nature of the work, and the very limited applicability of the used continual learning setup from this paper.\\n\\n[OLD REVIEW]\\nThe paper proposes a novel, regularization based, approach to the sequential learning problem using a fixed size model. The main idea is to add extra terms to the loss encouraging representation sparsity and combating catastrophic forgetting. The approach fairs well compared to other regularization based approaches on MNIST and CIFAR-100 sequential learning variants.\", \"pros\": \"Thorough experiments, competitive baselines and informative ablation study.\\nGood performance on par or superior to baselines.\\nClear paper, well written.\", \"cons\": \"The approach, while competitive in performance, does not seem to fix any significant issues with baseline methods. For example, task boundaries are still used, which limits applicability; in many scenarios which do have a continual learning problem there are no clear task boundaries, such as data distribution drift in both supervised and reinforcement learning.\\nSince models used in the work are very different from SOTA models on those particular tasks, it is hard to determine from the paper how the proposed method influences these models. In particular, it is not clear whether these changes to the loss would still allow top performance on regular classification tasks, e.g. CIFAR-10 or MNIST even without sequential learning, or in multitask learning settings.\", \"summary\": \"Although the work is substantial and experiments are thorough, I have reservations about extrapolating from the results to settings which do have a continual learning problem. Although I am convinced results are slightly superior to baselines, and I appreciate the lengthy amount of work which went into proving that, the paper does not go sufficiently beyond previous work.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
BJe-Sn0ctm | Ain't Nobody Got Time for Coding: Structure-Aware Program Synthesis from Natural Language | [
"Jakub Bednarek",
"Karol Piaskowski",
"Krzysztof Krawiec"
] | Program synthesis from natural language (NL) is practical for humans and, once technically feasible, would significantly facilitate software development and revolutionize end-user programming. We present SAPS, an end-to-end neural network capable of mapping relatively complex, multi-sentence NL specifications to snippets of executable code. The proposed architecture relies exclusively on neural components, and is built upon a tree2tree autoencoder trained on abstract syntax trees, combined with a pretrained word embedding and a bi-directional multi-layer LSTM for NL processing. The decoder features a doubly-recurrent LSTM with a novel signal propagation scheme and soft attention mechanism. When applied to a large dataset of problems proposed in a previous study, SAPS performs on par with or better than the method proposed there, producing correct programs in over 90% of cases. In contrast to other methods, it does not involve any non-neural components to post-process the resulting programs, and uses a fixed-dimensional latent representation as the only link between the NL analyzer and source code generator. | [
"Program synthesis",
"tree2tree autoencoders",
"soft attention",
"doubly-recurrent neural networks",
"LSTM",
"nlp2tree"
] | https://openreview.net/pdf?id=BJe-Sn0ctm | https://openreview.net/forum?id=BJe-Sn0ctm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rylSGFEleE",
"HkgckHxE1N",
"BJx-ObjWkN",
"Sye37OZ1kV",
"SkeFWOWy1V",
"Hyea0s69C7",
"Sye_GBnt07",
"B1gekS2KR7",
"rkg3o43tA7",
"ryefdNhKA7",
"r1xvolu9nX",
"BJgAOA8q2Q",
"B1lYrsiB3X"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544730893130,
1543927010367,
1543774569084,
1543604260293,
1543604225421,
1543326677162,
1543255311959,
1543255256230,
1543255204434,
1543255146112,
1541206175493,
1541201525667,
1540893504731
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1516/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1516/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1516/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1516/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1516/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1516/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1516/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1516/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1516/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1516/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1516/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1516/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1516/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper presents a new neural program synthesis architecture, SAPS, which seems to produce accuracy improvements in some synthesis tasks. The reviewer consensus, even after discussion with the authors, was that the paper is not acceptable at the conference. Two concerns emerge during discussion, even considering the authors efforts to improve the paper. First, the system seems to have many \\\"moving parts\\\", but there is a lack of rigorous ablation studies to demonstrate which components of the system (or combination thereof) make significant contributions to the results. I agree with this assessment: it is not sufficient to demonstrate increased scores, even if the experimental protocol and clear and sound (more on this later), but there must be some evidence as to why this increase happens, both in the discussion and in the empirical segment of the paper, by conducting a thorough ablation study. Second, all reviewers had issues with proper and fair comparison with prior work, with the consensus being that the model is not adequately compared to convincing benchmarks in the paper.\\n\\nThe results of the paper sound like there is something promising going on, but the need for a clear presentation of what is the driving factor behind any improvement is not only a superficial stylistic requirement, but a key tenet of proper scholarship. This is one front on which the paper fails to make a successful case for the work and methods it describes, and unfortunately is not ready for publication at this time (despite having a cool title).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"More experimental rigour is needed\"}",
"{\"title\": \"Thanks for the changes, but the key contributions are still not clear.\", \"comment\": \"Thanks for the revision and the additional ablation experiments, which help quite a bit.\\n\\nThe key contributions for the paper are still not that clear to me. One of the main contributions of using the auto-encoder for pre-training the decoder does not seem to help much (resulting in a slightly lower accuracy on the test set). I also think evaluating the Polosukhin & Skidanov (2018) model with pre-trained GloVe embeddings for embedding NL specifications would be important as well to make cleaner comparisons with the previous approaches. My score, therefore, unfortunately remains unchanged.\"}",
"{\"title\": \"Not updating score\", \"comment\": \"Thanks to the authors for their rebuttal and changes to the paper. The changes address most of the issues I had with clarity and reporting of experimental results.\\n\\nHowever, I don't think these changes address the main shortcoming, which is that the paper fails to make a clear contribution. The architectural modifications and training procedure (autoencoder pretraining) are not very strongly justified given the experimental results. On the other hand, the observation that a fixed size representation can work quite well does not seem very surprising (the same is true for models in, e.g., neural machine translation). And the final results on a relatively synthetic dataset don't suggest that this model achieves any large improvements compared to existing models.\"}",
"{\"title\": \"Paging R3\", \"comment\": \"Could R3 kindly please take a look at the rebuttal and changes to the paper the authors have made and consider whether they would like to revise their assessment of the paper, or if sticking to their position, give a quick explanation as to why their score would remain unchanged.\"}",
"{\"title\": \"Paging R1\", \"comment\": \"Could R1 kindly please take a look at the rebuttal and changes to the paper the authors have made and consider whether they would like to revise their assessment of the paper, or if sticking to their position, give a quick explanation as to why their score would remain unchanged.\"}",
"{\"title\": \"Updated comments after revision\", \"comment\": \"Thank you for the changes to the paper. I think they have already greatly improved the quality of your submission. First, let me directly answer two points that you've made:\\n\\nRe (1): The results in the new revision show that pre-training the decoder in the autoencoder setting has no noticeable effect on the effectiveness of the proposed model, meaning that there is no empirical evidence that the autoencoder has practical value. Without experiments on additional data used for this pretraining (1(b) in my original review), it thus remains unclear why this modelling choice was made.\\n\\nRe (2): The results in Table 2 are extremely surprising: Tree decoders with top-down/vertical propagation schemes (e.g., Rabinovich et al. 2017, Chen at al. 2018) are known to have results that are comparable or better than sequence-based baselines in the generation of programs. It is unclear why these do not work at all here, and the authors do not explain this disagreement with the literature.\", \"minor_note\": \"Encoding a NL spec to a latent representation and then decoding a program from that is not an original contribution; it is a well-known, central idea in many deep learning-based semantic parsing works of the last few years.\", \"the_combination_of_these_issues_is_reason_enough_for_me_keep_my_rating_at_4\": \"Of the contributions claimed by the authors (in the revised version), (i) is not original, (ii) is shown by their experiments to have no effect, (iii) seems to be not evaluated properly (and not compared to any non-trivial existing baselines in the space). This leaves contribution (iv) (attention over dimensions of latent spec), which seems to have a small effect in the experiments. For me, this is not enough to recommend acceptance of the paper, especially one that is dominated by other content that is less novel or interesting.\"}",
"{\"title\": \"Comment to AnonReviewer2\", \"comment\": \"Dear Reviewer, please pardon our brevity, shorting your comments, and skipping minor comments - all due to 5000 char limit. Our responses start with '>' character. -- The authors.\\n\\nThe authors could improve their paper by providing an explicit contribution list. \\n\\n> Thank you for your account of claimed contributions - we admit that the latter was too cursory in the original paper. The list you've provided above summarizes them very aptly, and the revised paper now features a very similar summary at the end of Introduction. \\n\\n\\nRe (1):\\n (a) Does the pre-training procedure help? Did you evaluate joint end-to-end training of the NL spec encoder and the tree decoder?\\n (b) You could imagine training the autoencoder on an additional corpus of programs without NL specs. Did you attempt this?\\n\\n> Suggestion (1a) has been voiced also by another reviewer, and we conducted now an additional experiment to verify this hypothesis. Suggestion (1b) is very interesting, and was indeed one of conceptual arguments in favor of engaging autoassociative learning. We will definintely try ot give it a go in future, however at the moment we don't have an auxiliary dataset that could be used for that purpose (or an appropriate program generator). Also, we present a range of other results in the revised version, and squeezing yet another experiment would be problematic due to page limit. Nevertheless, that's a very interesint hint for follow-ups, thank you. \\n\\n\\nRe (2):\\n (a) The tree decoder is unusual in that (one) part of the recurrence essentially enforces a breadth-first expansion order, whereas almost all other approaches use a depth-first technique (with the only exception of R3NN, as far as I remember). You cite the works of Yin & Neubig and Rabinovich et al.; did you evaluate how your decoder compares to their techniques? (or alternatively, you could compare to the absurdly complex graph approach of Brockschmidt et al. (arxiv 1805.08490)))\\n (b) Ablations on this model would be nice: How does the model perform if you set the horizontal (resp. the vertical) input to 0 at each step? (i.e., ablations to standard tree decoder / to pure BFS)\\n\\n> Thank you for this suggestion. Lack of ablation studies was a recurring theme in virtually all reviewes we've received, so we conducted a few of them and report them now in the revised version. Enabling and disabling the state propagation mechanism separately for vertical and horizontal propagation is now one of the main dimensions of the experimental part of the paper. \\n\\n\\nIf you run an experiment on end-to-training (without the autoencoder objective), you could use a standard attention mechanism that attends over the memories of the NL encoder. \\n\\n> This is indeed possible, once the decoder has been extracted from the autoencoder. Nevertheless, we decided not to experiment with this specific extension, as page limit was already problematic with the other new results we now include. \\n\\n\\nThis (and the rest of the paper) is completely ignoring the old and active field of semantic parsing. \\n\\n> Thank you for this very valuable pointer. We now cite a related work on semantic parsing. \\n\\n\\n- page2par3 / page6par4 contradict each other. \\n\\n> What we meant in page2par3 is that the dataset includes hardly any out-of-vocabulary terms. But we did not phrase that correctly indeed and the contradiction is rather evident - now fixed. 
\\n\\n\\n- page4par3: You state \\\"a reference to a previously used variable may require 'climbing up' the tree and then descending\\\" - something that your model, unlike e.g. the work of Yin & Neubig, does not support. How important is this really? Can you support your statement by data?\\n\\n> Admittedly not. This was indeed largely speculative. However, we did not mean to suggest that we can explicitly climb the tree, but that non-resetting of the network's state provides the decoder with more context. We replaced that sentence with a more accurate statement. \\n\\n\\n- page5, (14) (and (13), probably): To avoid infinite recursion, the $h_i^{(pred)}$ on the right-hand-side should probably be $h_{i-1}^{(pred)}$\\n\\n> Formula 14 is used to modify h_i^{(pred)}, computed earlier in Formula 8. It could indeed be advantageous to emphasise this; however, in the case of SAPS, $h_{i-1}^{(pred)}$ would denote the hidden state of the previous node rather than the hidden state belonging to the i-th node but from the previous timestep. Due to that, we decided to leave this formula unchanged. \\n\\n> Thank you for these very precise and insightful remarks. We took them all into account.\"}",
"{\"title\": \"Comment to AnonReviewer1\", \"comment\": \"Dear Reviewer, please pardon our brevity, shorting your comments, and skipping minor comments - all due to the 5000 char limit. Our responses start with '>' character. -- The authors.\\n\\nThe results make it hard to compare the proposed model with other models, or with other training procedures.\\n\\n> In the original contribution, we made our best effort to make the approach as comparable as possible to Polosukhin & Skidanov (2018). In the attached revision, we sustain this perspective, but enrich the paper with additional results that will hopefully meet your expectations. \\n\\nFor example, the Seq2Tree model that is shown in table 2 was not necessarily intended to be used without a search algorithm. \\n\\n> Agreed, though we don't think we ever claimed so in the original paper. Arguably, any seq2seq and tree2tree model can be used both with or without a search algorithm, and we think that testing them in both these settings is valuable. Polosukhin & Skidanov must have been of the same opinion here, given that they reported results in both settings. \\n\\nIt is also not mentioned how many parameters both models have which makes it hard to judge how fair of a comparison it is. \\n\\n> Indeed. We include now the sizes of layers rather. The total number of parameters is 5.73 mln (SAPS256 + HV + Att). \\n\\nNo results are shown for how the model performs without pretraining. \\n\\n> We conducted accordingly additional experiments and report them now in the revised version in Section 4.1. \\n\\nOr do you first learn the mapping from sentences to the latent space, and only then finetune the entire model?\\n\\n> In the original submission, we did exactly as you suggest above: trained the sentence-tree mapping end-to-end, with fixed Glove embedding and starting from the decoder taken from the autoencoder. In the revised version, we lay out these variants more clearly: there are models trained from scratch, pretrained, and those that use pretrained decoder that remains fixed in further training. \\n\\nThe attention mechanism is not actually an attention mechanism. \\n\\n> Thank you, we added now `gating function' as an alternative term to ease understanding. However, we'd still claim that this _is_ a form of attention, though in a _soft_ meaning of that word - as it is usually meant for instance in attention mechanisms used for image analysis in convolutional NNs (soft vs hard attention). \\n\\nThe writing in the paper is passable. It lacks a bit in structure. \\n\\n> We revised the text thoroughly. Concerning your specific remark on Table 3 (now Table 4), we rephrased the corresponding paragraph so that it is now trying to anticipate reader's expectations. \\n\\nPlease reduce and harmonize the terminology in the paper.\\n\\n> We made the terminology more consistent; most of the terms used in the paper reflect now the building blocks shown in Fig. 1. \\n\\nFormula 14 has h_i^{(pred)} on both sides of the equation, and is used in the definition of A as well. \\n\\n> Formula 14 can be seen as a development of Formula 8: We first compute the 'raw' version of h_i^{(pred)} in Formula 8, then slightly modify it using both h^{latent} and previously computed h_i^{(pred)}.\\n\\nWhy does figure 1 have boxes for \\\"NL specification\\\", \\\"NL spec.\\\", and \\\"NL query\\\"? \\n\\n> Fig. 1 has been redrawn and the names of building blocks are now consistent with the text. 
\\n\\nIt is said that regularization is applied to layer normalization, which I assume means that the regularization is applied to the gain parameters of the layer normalization.\\n\\n> Indeed there was an inconsistency: we meant that the regularization is applied to all parameters excluding biases and parameters corresponding to layer normalization. \\n\\nThe authors claim this means their model \\\"significantly diverges from [other] works\\\", which seems hyperbolical.\\n\\n> Please notice that that sentence originally read: \\\"However it significantly diverges from those works in using only the latent vector for passing the information between encoder and decoder.\\\" As you can see, our emphasis was on the fact that that process uses only the latent layer as its source of information. Nevertheless, we removed the adverb `significantly' from that sentence. \\n\\nHowever, the model fails to improve on the Seq2Tree-guided search approach, so its main claimed benefit is that it is trained end-to-end. \\n\\n> Competing with search-based methods was not our main point. We find it rather obvious that explicit search is a powerful mechanism and should in principle always lead to improvements when used in combination with some base method. Our argument for focusing on neural end-to-end learning was primarily of a philosophical nature: it seems very compelling to see how far we can go by expressing a program as a point in a fixed-dimensional latent space. Some other arguments (the possibility of training the autoencoder on unlabeled examples, elegance, etc.) are now discussed in the Discussion.\"}",
"{\"title\": \"Comment to AnonReviewer3\", \"comment\": \"Dear Reviewer, our responses start with '>' character. -- The authors.\\n\\n\\nWhile overall the SAPS architecture achieves impressive practical results, some of the key contributions to the design of the architecture are not evaluated and therefore it makes it difficult to attribute the usefulness and impact of the key contributions. For example, what happens if one were to use pre-trained GloVe embeddings for embedding NL specifications in Polosukhin & Skidanov (2018). Such and experiment would lead to a better understanding of how much gain in accuracy one can obtain just by using pre-trained embeddings.\\n\\n> We admit that the experiments reported in the original submission provided limited insight into the inner workings of the method, in particular about the impact of individual components. In the revised paper, we examine more configurations (primarily ablations of the complete architecture) and analyze them from multiple angles. Unfortunately, the particular experiment you mention above could not be conducted, as we did not reimplement Polosukhin & Skidanov's method - we only quote the results from their paper (that was not only easier, but also arguably the only way to go, given (i) the complexity of the reference method, (ii) the fact that it's description of the paper was not precise enough at places, and (iii) the code provided by Polosukhin & Skidanov (2018) is complex and written in a framework we do not have experience in. Therefore it could be troublesome for us to modify their code and make sure that the crucial parts remain unchanged. Nevertheless, we hope that the presence other results we conducted compensates that shortcoming. \\n\\nOne of the key ideas of the approach is to use a tree2tree autoencoder to train the latent space of program trees. The decoder weights are then initialized with the learnt weights and fine-tuned during the end-to-end training. What happens if one keeps the decoder weights fixed while training the architecture from NL descriptions to target ASTs? Alternatively, if one were to not perform auto-encoding based decoder pre-training and learn the decoder weights from scratch, how would the results look?\\n\\n> Thanks for these interesting proposals. We conducted additional experiments that address these questions and report their results in Section 4. \\n\\n\\nAnother key point of the paper is to use a soft attention mechanism based on only the h^latent. What happens if the attention is also perform on the NL description embeddings? Presumably, it might be difficult to encode all of the information in a single h^latent vector and the decoder might benefit from attending over the NL tokens.\\n\\n> That's indeed an interesting point, raised also by the other reviewers. Let us just emphasize here that strict separation of the encoding and decoding was an important conceptual assumption behind our approach. In particular, it allowed us to take the trained AST decoder as-is and combine it with an NL encoder. Nevertheless, you're right that there are no technical obstacles for linking the AST decoder to an attention mechanism on of the NL encoder. However, as we conducted also a range of other new experiments, squeezing in yet another one would be problematic due to the page limit. 
We would like to consider this extension in follow-up studies.\\n\\n\\nIt was also not clear if it might be possible to perform some form of a beam search over the decoded trees to possibly improve the results even more?\\n\\n> We are quite positive, if not certain, that such an extension would bring further improvements, by analogy to the experience of Polosukhin & Skidanov. However, the whole point of our attempt was to find out if effective synthesis from NL is possible with bare neural means. Therefore, we decided to maintain this perspective in the revised version. Even if we decided to also consider and present some beam-search results, doing so would be technically challenging given the page limit. The revised paper should now convey this viewpoint clearly enough. \\n\\n\\nThere are also other datasets such as WikiSQL and Spider for learning programs from natural language descriptions. It might be interesting to evaluate the SAPS architecture on those datasets as well to showcase the generality of the architecture.\\n\\n> Thank you for these interesting pointers, not all of which we were aware of. We would be very happy to do so in a longer-term perspective (like a follow-up journal paper), but, again, the page limit seems to preclude that. Also, we intentionally focused here on a one-to-one comparison with Polosukhin & Skidanov (2018), as we find it the most state-of-the-art and, at the time we started this project, actually the only study that used that valuable dataset.\"}",
"{\"title\": \"To all reviewers\", \"comment\": \"Dear Reviewers,\\n\\nWe'd like to thank you for thorough analysis of our submission and many insightful observations. We found them very useful for improving the paper. We made our best effort to address your requests, and hope that the revised version will be considered sufficiently valuable to be accepted for ICLR. \\n\\nMost importantly, in an attempt to address your most fundamental doubts, we conducted a range of new experiments and discuss them in the revised submission. Even though we do no not portray them as ablations in the paper, they may be seen as such w.r.t. to the most sophisticated SAPS configuration (pretraining, attention, and both directions of state propagation in decoder), so we hope they address the issues you pointed to in your reviews. \\n\\nIn connection to that, we revised the text, which at places required quite essential changes, in particular: \\n- shortening of Introduction, \\n- swapping of sections 2.2 and 2.3, \\n- reordering the presentation of experimental results and rewriting most of the comments to those results (note a new table and a different ordering of tables), \\n- revising some threads in Discussion. \\n\\nYou'll find our detailed responses in individual files. Please note that your comments are quoted as-is, while our responses start with the `>' mark. Excuse our brevity there - it was sometimes hard to meet the 5000 char limit, so in some cases we had to shorten or select your comments too. \\n\\nBest regards, \\nThe authors\"}",
"{\"title\": \"Interesting ideas, but no ablation experiments to attribute usefulness of the ideas\", \"review\": \"This paper proposes the Structure-Aware Program Synthesis (SAPS) system, which is an end-to-end neural approach to generate snippets of executable code from the corresponding natural language descriptions. Compared to a previous approach that used search in combination of a neural encoder-decoder architecture, SAPS relies exclusively on neural components. The architecture uses a pretrained GloVe embedding to embed tokens in the natural language description, which are then embedded into a vector representation using a bidirectional-LSTM. The decoder uses a doubly-recurrent neural network for generating tree structured output. One of the key ideas of the approach is to use a single vector point in the latent space to represent the program tree, where it uses a tree2tree autoencoder to pre-train the tree decoder. The results on the NAPS dataset show an impressive increase in accuracy of about 20% compared to neural-only baselines of the previous approach.\\n\\nWhile overall the SAPS architecture achieves impressive practical results, some of the key contributions to the design of the architecture are not evaluated and therefore it makes it difficult to attribute the usefulness and impact of the key contributions. For example, what happens if one were to use pre-trained GloVe embeddings for embedding NL specifications in Polosukhin & Skidanov (2018). Such and experiment would lead to a better understanding of how much gain in accuracy one can obtain just by using pre-trained embeddings.\\n\\nOne of the key ideas of the approach is to use a tree2tree autoencoder to train the latent space of program trees. The decoder weights are then initialized with the learnt weights and fine-tuned during the end-to-end training. What happens if one keeps the decoder weights fixed while training the architecture from NL descriptions to target ASTs? Alternatively, if one were to not perform auto-encoding based decoder pre-training and learn the decoder weights from scratch, how would the results look?\\n\\nAnother key point of the paper is to use a soft attention mechanism based on only the h^latent. What happens if the attention is also perform on the NL description embeddings? Presumably, it might be difficult to encode all of the information in a single h^latent vector and the decoder might benefit from attending over the NL tokens.\\n\\nIt was also not clear if it might be possible to perform some form of a beam search over the decoded trees to possibly improve the results even more? \\n\\nThere are also other datasets such as WikiSQL and Spider for learning programs from natural language descriptions. It might be interesting to evaluate the SAPS architecture on those datasets as well to showcase the generality of the architecture.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Reasonable model but unclear results\", \"review\": \"# Summary\\n\\nThis paper introduces a model called SAPS for the task of mapping natural language descriptions of programs to the AST tree of the corresponding program. The model consists of a variation of a double recurrent neural network (DRNN) which is pre-trained using an autoencoder. The natural language description is turned into a latent vector using pretrained word embeddings and a bidirectional stacked LSTM. The final model consists of training this sentence embedding model jointly with the decoder of the autoencoder.\\n\\n# Quality\\n\\nThe authors introduce a reasonable model which achieves good performance on a relevant task. The results section contains a fair amount of numbers which give some insight into the performance of the model. However, the results make it hard to compare the proposed model with other models, or with other training procedures.\\n\\nFor example, the Seq2Tree model that is shown in table 2 was not necessarily intended to be used without a search algorithm. It is also not mentioned how many parameters both models have which makes it hard to judge how fair of a comparison it is. (I couldn't find the dimensionality of the encoder in the text, and the decoder dimensionality is only shown in figure 2.)\\n\\nThe model proposed in this work uses decoder pretrained in an autoencoder setting. No results are shown for how the model performs without pretraining. Pretraining using autoencoders is a technique that fell out of favor years ago, so it seems worthwhile to investigate whether or not this pretraining is necessary, and if so, why and how it aids the final performance.\\n\\nIt is unclear to me what type of robustness the authors are trying to show in table 5. The use of robustness here is not clear (robustness is often used to refer to a network's susceptability to adversarial attacks or perturbations of the weights). It also seems that the type of \\\"simple replacements\\\" mentioned are very similar to the way the examples were generated in the first place (section 4 of Polosukhin). If the goal is to measure generalization, why do the authors believe that performance on the test set alone is not a sufficient measure of generalization?\", \"some_smaller_comments_and_questions\": \"* In section 4 you mention training the sentence-to-tree and the sentence-to-vector mappings. Isn't the sentence-to-vector model a subset of the sentence-to-tree model? Should I interpret this as saying that, given the pretrained decoder and the glove embeddings, you now train the entire model jointly? Or do you first learn the mapping from sentences to the latent space, and only then finetune the entire model?\\n* The attention mechanism is not actually an attention mechanism: Attention mechanisms are used to reduce a variable number of elements to a single element by learning a weighting function and taking a weighted sum. My understanding is that in this case, the input (the latent representation) is of fixed size. The term \\\"gating function\\\" would be more appropriate.\\n* You specify that the hidden states of the decoder are initialized to zero, but don't specify what the cell states are initialized to.\\n\\n# Clarity\\n\\nThe writing in the paper is passable. It lacks a bit in structure (e.g., I would introduce the problem and dataset before introducing the model) and sometimes fails to explain what insights the authors draw from certain results, or why certain results are reported. 
Take table 3 as an example: As a reader, I was at first confused about why I should care about the reconstruction performance of the autoencoder alone, considering its only purpose is pretraining. Then, when looking at the numbers, I was even more confused since it is counterintuitive that it is harder to copy a program than it is to infer it. At the end of the paragraph the authors propose an explanation (the encoder isn't as powerful as the decoder) but leave it unclear as to why these numbers were being reported in the first place.\\n\\nIn general, the paper would do well to restructure the text so that the reader is told what the goal of the different experiments is, and what insights should be drawn from them.\", \"a_variety_of_smaller_concerns_and_comments\": [\"Please reduce and harmonize the terminology in the paper: the terms latent-to-AST, NLP2Tree, NLP2Vec, tree2tree/tree-to-tree, sentence-to-tree, sentence-to-vector, NL-to-latent, and spec-to-latent all appear in the paper and several of them are redundant, making it significantly harder to follow along with the text.\", \"Avoid citing the same work multiple times within a paragraph; cite only the first use and use prose to make clear that future references are to the same work.\", \"Formula 14 has h_i^{(pred)} on both sides of the equation, and is used in the definition of A as well. I am assuming these two terms are actually the h_i^{(pred)} from equation 8, but this should be made clear in the notation.\", \"Why does figure 1 have boxes for \\\"NL specification\\\", \\\"NL spec.\\\", and \\\"NL query\\\"? In general, the boxes inconsistently seem to represent both values and operations.\", \"It is never explicitly stated that Seq2Tree is the model from the Polosukhin et al. paper, which is a bit confusing.\", \"Parameters is missing an -s in the first paragraph of section 4.\", \"It is said that regularization is applied to layer normalization, which I assume means that the regularization is applied to the gain parameters of the layer normalization.\", \"It says \\\"like a in the above example\\\" when table 1 is rendered at the bottom of the page by Latex.\", \"# Originality and significance\", \"The paper introduces model variations that the authors claim improve performance on this particular program synthesis problem. In particular, in the DRNN decoder the hidden state is never reset for the \\\"horizontal\\\" (breadth-first order) decoder, and each node is only allowed to attend over the latent representation of the program. The authors claim this means their model \\\"significantly diverges from [other] works\\\", which seems hyperbolical.\", \"The main contribution of this work is then the performance on the program synthesis task of Polosukhin and Skidanov. However, the model fails to improve on the Seq2Tree-guided search approach, so its main claimed benefit is that it is trained end-to-end. Although there is a strong trend in ML research to prefer end-to-end systems, it is worthwhile to ask when and why end-to-end systems are preferred. It is often clear that they are better than having separately learned components that are combined later. However, this does not apply to the model from Polosukhin et al., which consists of a single learned model being used in a search, which is a perfectly acceptable technique.
In comparison, translations in neural machine translation are also produced by performing a beam search, guided by the probabilities under the model.\", \"The paper would be significantly stronger if it could show that some alternative/existing method (e.g., a standard DRNN, or a Seq2Tree model with the same number of parameters, or a non-pretrained network) would fail to solve the problem where the authors' proposed method does not. However, the single comparison with the Seq2Tree model does not show this.\", \"# Summary\"], \"pros\": [\"Reasonable model, extensive results reported\", \"Decently written\"], \"cons\": [\"Unclear how the performance compares to other models\", \"Not well justified why end-to-end methods would be better than guided-search based methods\", \"Model architectural differences seem relatively minor compared to the original DRNN\", \"The pretraining using an autoencoder and the use of pretrained word embeddings seem arbitrary and are not critically evaluated\", \"Lack of a coherent story for several results (the autoencoder performance, robustness analysis)\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Autoencoder used for program synthesis\", \"review\": \"The submission proposes to combine a tree2tree autoencoder with a sequence encoder for natural language. It uses the autoencoding objective to appropriately shape the latent space and train the decoder, and then uses a second training step to align the output of a sequence encoder with the input for the tree decoder. Experiments on a recent dataset for the natural language-to-code task show that the proposed model is able to beat simple baselines.\\n\\nThere's much to like about this paper, but also many aspects that are confusing and make it hard to tease out the core contribution. I'm trying to reflect my understanding here, but the authors could improve their paper by providing an explicit contribution list. Overall, there seem to be three novel things presented in the paper:\\n(1) (Pre)training the (program) tree decoder using an autoencoder objective\\n(2) The doubly-recurrent tree decoder, which follows a different signal propagation strategy from most other approaches.\\n(3) An \\\"attention\\\" mechanism over the point in latent space (that essentially rescales parts of the decoder input)\\n\\nHowever, the experiments do not evaluate these contributions separately; and so their relative merits remain unclear. Primarily, I have the following questions (for the rebuttal, and to improve the paper):\\n\\nRe (1):\\n (a) Does the pre-training procedure help? Did you evaluate joint end-to-end training of the NL spec encoder and the tree decoder? \\n (b) The auto-encoder objective would allow you to train on a larger corpus of programs without natural language specifications. Arguably, the size of the dataset is insufficient for most high-capacity deep learning models, and as you use word embeddings trained on a much larger corpus...), you could imagine training the autoencoder on an additional corpus of programs without NL specs. Did you attempt this?\\n\\nRe (2): \\n (a) The tree decoder is unusual in that (one) part of the recurrence essentially enforces a breadth-first expansion order, whereas almost all other approaches use a depth-first technique (with the only exception of R3NN, as far as I remember). You cite the works of Yin & Neubig and Rabinovich et al.; did you evaluate how your decoder compares to their techniques? (or alternatively, you could compare to the absurdly complex graph approach of Brockschmidt et al. (arxiv 1805.08490)))\\n (b) Ablations on this model would be nice: How does the model perform if you set the horizontal (resp. the vertical) input to 0 at each step? (i.e., ablations to standard tree decoder / to pure BFS)\\n\\nRe (3): This is an unusual interpretation of the attention mechanism, and somewhat enforced by your choice (1). If you run an experiment on end-to-training (without the autoencoder objective), you could use a standard attention mechanism that attends over the memories of the NL encoder. I would be interested to see how this would change performance.\\n\\nAs the experimental evaluation seems to be insufficient for other researchers to judge the individual value of the paper's contribution, I feel that the paper is currently not in a state that should be accepted for publication at ICLR. 
However, I would be happy to raise my score if (some) of the questions above are answered; primarily, I just want to know if all of the contributions are equally important, or if some boost results more than others.\", \"minor_notes\": [\"There are many spelling mistakes (\\\"snipped\\\" for \\\"snippet\\\", \\\"isomorhpic\\\", ...) -- running a spell checker and doing a calm read-through would help with these details.\", \"page1par2: Writing specifications for programs is never harder than writing the program -- a program is a specification, after all. What you mean is the hardness of writing a /correct/ and exact spec, which can be substantially harder. However, it remains unclear how natural language would improve things here. Verification engineers will laugh at you if you propose to \\\"ease\\\" their life by using non-formal language...\", \"page1par3: This (and the rest of the paper) is completely ignoring the old and active field of semantic parsing. Extending the related work section to compare to some of these works, and maybe even the experiments, would be very helpful.\", \"page2par3 / page6par4 contradict each other. First, you claim that mostly normal English vocabulary is used, with only occasional programming-specific terms; later you state that \\\"NL vocabulary used in specifications is strongly related to programming\\\". The fact that there are only 281 (!!!) unique tokens makes it very doubtful that you gain anything from using the 1.9-million-element vocab of GloVe instead of direct end-to-end training...\", \"page4par3: You state \\\"a reference to a previously used variable may require 'climbing up' the tree and then descending\\\" - something that your model, unlike e.g. the work of Yin & Neubig, does not support. How important is this really? Can you support your statement by data?\", \"page5, (14) (and (13), probably): To avoid infinite recursion, the $h_i^{(pred)}$ on the right-hand-side should probably be $h_{i-1}^{(pred)}$\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
|
HkeWSnR5Y7 | Provable Defenses against Spatially Transformed Adversarial Inputs: Impossibility and Possibility Results | [
"Xinyang Zhang",
"Yifan Huang",
"Chanh Nguyen",
"Shouling Ji",
"Ting Wang"
] | One intriguing property of neural networks is their inherent vulnerability to adversarial inputs, which are maliciously crafted samples to trigger target networks to misbehave. The state-of-the-art attacks generate adversarial inputs using either pixel perturbation or spatial transformation. Thus far, several provable defenses have been proposed against pixel perturbation-based attacks; yet, little is known about whether such solutions exist for spatial transformation-based attacks. This paper bridges this striking gap by conducting the first systematic study on provable defenses against spatially transformed adversarial inputs. Our findings convey mixed messages. On the impossibility side, we show that such defenses may not exist in practice: for any given networks, it is possible to find legitimate inputs and imperceptible transformations to generate adversarial inputs that force arbitrarily large errors. On the possibility side, we show that it is still feasible to construct adversarial training methods to significantly improve the resilience of networks against adversarial inputs over empirical datasets. We believe our findings provide insights for designing more effective defenses against spatially transformed adversarial inputs. | [
"adversarial inputs",
"attacks",
"provable defenses",
"impossibility",
"findings",
"networks",
"possibility results",
"property",
"neural networks"
] | https://openreview.net/pdf?id=HkeWSnR5Y7 | https://openreview.net/forum?id=HkeWSnR5Y7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1g_ZvFXgE",
"S1lrfwa327",
"H1xG3oOtnX",
"r1xiO7ruhX"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544947456106,
1541359373467,
1541143465862,
1541063538837
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1515/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1515/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1515/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1515/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper conducts a study on provable defenses to spatially transformed adversarial examples. In general, the paper pursues an interesting direction, but reviewers had many concerns regarding the clarity of the presentation and the depth of the experimental results, which the authors did not address in a rebuttal.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"The clarity of presentation and evaluation is a concern\"}",
"{\"title\": \"Review comments\", \"review\": \"This paper proposed a defense against spatially transformed adversarial inputs and give the two main results on possibility (still possible to construct adversarial training methods to improve robustness) and impossibility (always exist spatially-transformed adversarial examples for any given networks and thus no certified defense)\\n\\nThe topic of studying certified defenses on adversarial examples is important, and I think the direction of dealing with spatially-transformed adversarial examples is interesting. However, this paper only analyze a simple one hidden layer neural network and the technique (e.g. sec 4, possibility result) does not seem to easily scale to deeper networks and networks with other types of layers (e.g pooling layers). Also, \\n\\nI also feel the clarity of the paper should be improved.\", \"here_are_some_questions\": \"1. Are there other metrics to measure spatial transformation? For the current setting as introduced in sec 2.1, it looks like there is no a uniform spatial transformation on the full image but rather different transformation applied on different local areas. Does it make more sense to say rotate the full image by some angle or shift it by some distance?\\n \\n2. What is the pi_infty and pi_2 in Theorem 1? Why is it called Lower bound attack in sec 3.1? \\n\\n3. What is the difference between f_fro, f_spe and f_sdp? \\n\\n4. In Figure 6 (b), is the classification accuracy the nominal test accuracy of a classifier? If so, then the accuracy is too low (<90% for mnist) and thus considering the corresponding attack rates (Fig 6(a)) on these models are not meaningful. Please explain.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Limited novelty compared to earlier work; poor presentation of results and conceptual differences between the proposed spatial transformation attack model vs. existing lp norm attack model\", \"review\": \"Summary: The paper studies a new attack model based on spatial transformations. The authors first formalize an attack model based on spatial transformation and then study attacks and defenses for this model.\", \"clarity\": \"While the paper studies an important problem -- it's important to move out of the norm ball based attack models and consider different attacks like spatial transformations, in the current version, the presentation lacks clarity in both the formulation of the attack model, attacks, defenses and explanation of the results. For example, the impossibility result isn't clear: the claim is that any classifier has adversarial spatial transformations that are successful in causing misclassifcation for some threshold on the size of transformation. There is no explanation of how large this threshold is in practice. Is it small enough to be called an \\\"impossibility result\\\"? What does this threshold intuitively depend on?\", \"originality\": \"The key contribution seems to be the formalization of some notion of spatial transformation. However, the final expression (Proposition 1) basically looks just like an l_p norm but after transforming it by some \\\"fixed\\\" matrix M. The expressions for this new attack model where || M r|| < \\\\eps for some perturbation \\\\eps look pretty similar to the case previously considered (where M was essentially identity). For example, Raghunathan et al. 2018 and Hein & Andriushchenko 2017. The paper is also missing discussion on the structure of this matrix M, and how it changes the attacks and defenses in practice\", \"significance\": \"I think the problem of spatial transformation based adversarial examples is important and the authors have the right goals. However, the current presentation makes it hard to understand the main results provided and hence I would rate that the contribution is not very significant.\", \"overall\": \"I highly recommend the authors to revise the presentation and clarify a) the main conceptual differences of the new attack model (matrix M of proposition 1) b) Formalize the impossibility and possibility results carefully with concrete theoretical/empirical results to back the claims\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Can the proposed defense is secure against pixel-based AEs?\", \"review\": \"The presented analysis well characterizes the behavior of the spatially transformed adversarial inputs and the proposed defense is empirically confirmed to achieve more accurate and robust classification under attacks.\\n\\nOne concern is that the defender cannot learn whether the adversary employs spatially transformed AEs or pixel-based AEs (or some others). What happens if the classifier trained with the proposed defense accept pixel-based AEs? I recommend the authors to associate spatially transformed AEs with pixel-based AEs to learn whether the proposed defense performs more robustly compared to existing defenses. If the proposed defense method performs well for spatially transformed AEs but is vulnerable to pixel-based AEs, it is useless.\\n\\nIt should be better to discuss more on computational efficiency of the proposed defense since it contains SDP solving. Is the proposed deense works with larger datasets such as CIFAR100 or ImageNet?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
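
An illustrative aside on the attack model debated in the record above: a spatially transformed adversarial input perturbs where pixels are sampled from rather than their values. The sketch below is a minimal, assumption-laden illustration of that idea, optimizing a smooth flow field with a total-variation penalty standing in for the paper's transformation-size constraint; `model`, the input shape, and all hyperparameters are hypothetical, and this is not the paper's exact formulation.

```python
# Minimal sketch (not the paper's formulation): craft a spatially
# transformed adversarial input by optimizing a per-pixel flow field.
# Assumes `model` maps a 1xCxHxW image to logits and `label` is a
# 1-element LongTensor holding the true class.
import torch
import torch.nn.functional as F

def spatial_attack(model, x, label, steps=100, lam=0.05, lr=0.01):
    _, _, h, w = x.shape
    # Identity sampling grid in [-1, 1]^2, the convention grid_sample expects.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base_grid = torch.stack([xs, ys], dim=-1).unsqueeze(0)  # 1 x H x W x 2
    flow = torch.zeros_like(base_grid, requires_grad=True)  # displacements
    opt = torch.optim.Adam([flow], lr=lr)
    for _ in range(steps):
        x_adv = F.grid_sample(x, base_grid + flow, align_corners=True)
        # Untargeted objective: push the true class down while keeping the
        # flow smooth (total-variation penalty as a stand-in size constraint).
        tv = (flow[:, 1:] - flow[:, :-1]).abs().mean() + \
             (flow[:, :, 1:] - flow[:, :, :-1]).abs().mean()
        loss = -F.cross_entropy(model(x_adv), label) + lam * tv
        opt.zero_grad(); loss.backward(); opt.step()
    return F.grid_sample(x, base_grid + flow, align_corners=True).detach()
```

AnonReviewer1's observation about the matrix M can be read off this sketch: for small flows, the warped image is approximately a linear function of the flow, so the attack resembles an l_p attack after a fixed linear change of coordinates.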
|
rJeZS3RcYm | Simple Black-box Adversarial Attacks | [
"Chuan Guo",
"Jacob R. Gardner",
"Yurong You",
"Andrew G. Wilson",
"Kilian Q. Weinberger"
] | The construction of adversarial images is a search problem in high dimensions within a small region around a target image. The goal is to find an imperceptibly modified image that is misclassified by a target model. In the black-box setting, only sporadic feedback is provided through occasional model evaluations. In this paper we provide a new algorithm whose search strategy is based on an intriguingly simple iterative principle: We randomly pick a low frequency component of the discrete cosine transform (DCT) and either add or subtract it to the target image. Model evaluations are only required to identify whether an operation decreases the adversarial loss. Despite its simplicity, the proposed method can be used for targeted and untargeted attacks --- resulting in previously unprecedented query efficiency in both settings. We require a median of 600 black-box model queries (ResNet-50) to produce an adversarial ImageNet image, and we successfully attack Google Cloud Vision with 2500 median queries, averaging to a cost of only $3 per image. We argue that our proposed algorithm should serve as a strong baseline for future adversarial black-box attacks, in particular because it is extremely fast and can be implemented in less than 20 lines of PyTorch code. | [
"target image",
"image",
"simple",
"adversarial attacks simple",
"adversarial attacks",
"construction",
"adversarial images",
"search problem",
"high dimensions",
"small region"
] | https://openreview.net/pdf?id=rJeZS3RcYm | https://openreview.net/forum?id=rJeZS3RcYm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1l1Pn-VlN",
"rJeUWhL3TX",
"ryxurJRsTX",
"HygHF_6spX",
"Bklu_v6spQ",
"SkxPHvTiaQ",
"BklIEDTs6X",
"rkxkTpcJTQ",
"SJeaN4LihX",
"SkesQhRO2Q"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544981591138,
1542380542061,
1542344512246,
1542342781106,
1542342512240,
1542342463173,
1542342445866,
1541545399105,
1541264436568,
1541102626573
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1514/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1514/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1514/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1514/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1514/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1514/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper considers a procedure for the generation of adversarial examples under a black box setting. The authors claim simplicity as one of the main selling points, with which reviewers agreed, while also noting that the results were impressive or \\\"promising\\\". There were concerns over novelty and some confusion over the contribution compared to Guo et al, which I believe has been clarified.\\n\\nThe highest confidence reviewer (AnonReviewer2), a researcher with significant expertise in adversarial examples, raised issues of inconsistent threat models (and therefore unfair comparisons regarding query efficiency), missing baselines. A misunderstanding about comparison against a concurrent submission to ICLR 2019 was resolved on the basis that the relevant results are mentioned but not originally presented in the concurrent submission. \\n\\nWhile I disagree with AnonReviewer2 that results on attacking a particular image from previous work (when run against the Google Cloud Vision API) would be informative, the reviewer has remaining unaddressed concerns about the fairness of comparison (comparing against results reported in previous work rather than re-run in the same setting), and rightly points out that as many variables should be controlled for as possible when making comparisons. Running all methods under the same experimental setting with the same *collection* of query images is therefore appropriate. \\n\\nThe authors have not responded to AnonReviewer2's updated post-rebuttal review, and with the remaining sticking point of fairness of comparison with respect to query efficiency I must recommend rejection at this point in time, while noting that all reviewers considered the method promising; I thus would expect to see the method successfully published in the near future once issues of the experimental protocol have been solidified.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"A method that is appealing for its simplicity, but reviewer concerns regarding fairness of comparison persist.\"}",
"{\"title\": \"Re: Clarification on NES attack performance (aka QL attack)\", \"comment\": \"We understand that the NES attack is the same as the QL-attack evaluated in our paper. We would like to point out that the evaluation in Ilyas et al. 2018 (https://arxiv.org/pdf/1807.07978.pdf) does not compromise our claim of unprecedented efficiency. Although QL-attack can achieve average query count close to that of our method, its failure rate is extremely high -- 41.7% -- whereas SimBA and SimBA-DCT achieve failure rate of <2% within around the same number of queries.\"}",
"{\"title\": \"Clarification on NES attack performance (aka QL attack)\", \"comment\": \"I appreciate the authors' prompt response. But the authors apparently misunderstood my comment 1. I am NOT asking to compare with the new method proposed in an unpublished work (Ilyas 2018 - https://arxiv.org/pdf/1807.07978.pdf). Rather, I am asking the performance comparison with NES attack (it is called QL attack in this paper), which is a published paper at ICML 2018 (https://arxiv.org/pdf/1804.08598.pdf). I did explicitly point out the \\\"NES column\\\" in the table of the unpublished work because it's basically the same setting to be compared with the results presented in this paper. I am very surprised to see the response that \\\"we think it is generally inappropriate to ask for a comparison with a concurrent submission to the same conference\\\", as QL attack was already compared (but in a different setting) in this paper.\"}",
"{\"title\": \"Revision\", \"comment\": [\"We have uploaded a revision with the following changes:\", \"Significantly improved attack when using the standard basis (SimBA). In particular, Table 1 and Figures 2-4 were updated.\", \"Supplementary material containing 10 additional sample images for Google Cloud Vision attack.\"]}",
"{\"title\": \"Re: interesting black-box adversarial attack using DCT basis; performance evaluation on targeted attack is insufficient and threat model is inconsistent (unfair comparison)\", \"comment\": \"Our detailed response is below. We are happy to include the additional results and comparisons that you ask for, however we do want to emphasize that we think it is generally inappropriate to ask for a comparison with a concurrent submission to the same conference [Ilyas et al. 2018].\", \"detailed_response\": \"We believe there may have been several possible misunderstandings that resulted in R2\\u2019s concerns.\\n\\n1. We agree that the threat model varies across the various baselines. However, our inclusion of boundary attack and Opt attack are not meant to diminish their results, but rather to be comprehensive in our evaluation of prior work. As for other score-based attacks, although it is true that ZOO can sometimes achieve a very low L2 distortion, it also suffers from an order of magnitude higher failure rate (11.1% rather than <2% of SimBA) and requires several orders of magnitude more queries (192,000!!! rather than 1,232 for SimBA-DCT or 1,665 SimBA). In the preprint https://arxiv.org/pdf/1807.07978.pdf, the QL-attack is comparable to our attack in terms of query efficiency but at a high failure rate of 41.7%!! We believe that our evaluation is as fair to the other baselines as possible by achieving a lower L2 distortion than all methods other than ZOO and maintaining a success rate close to 100%, while requiring far fewer queries. \\n\\n2. We performed targeted attack against random classes, similar to Tu et al. 2018 (https://arxiv.org/abs/1805.11770) and Cheng et al. 2018 (https://arxiv.org/pdf/1807.04457.pdf). We tested our attack against the least likely class as well and found that our method is less efficient but remains very competitive. More precisely, the average query count for SimBA increases from 7,899 to 12,256, while the average query count for SimBA-DCT increases from 9,275 to 17,272.\\n\\nWe also tested our method on CIFAR-10 (targeted attack against the least likely class) and found that restricting to the low frequency basis does not affect the attack\\u2019s efficiency. SimBA achieves an average query count of 522 with average L2 norm = 1.41, while SimBA-DCT achieves an average query count of 606 with average L2 norm = 1.60. Both attacks are successful 100% of the time and are very competitive with state-of-the-art attack algorithms such as AutoZOOM. While using low frequency perturbations does not improve the attack for CIFAR-10, it does not hinder the attack\\u2019s efficiency either. As for the comment regarding limiting search space, Guo et al. have found that restricting to the low frequency subspace does not hinder adversarial optimality, which is empirically demonstrated in both theirs and our work.\\n\\n3. The basic SimBA attack uses axis-aligned directions rather than the DCT basis when picking random directions of descent, and provides the majority of the query efficiency compared to other methods (see Table 1). The choice of the DCT basis further improves our attack in the untargeted case and demonstrates that it can generalize to other orthonormal bases, but is not crucial. In this regard, our paper differs significantly from the work of Guo et al.\\n\\n4. Since our attack does not begin with an image of the adversarial class, it is not designed for targeted attacks on GCV. 
However, as GCV is the most widely used real world image classification platform for black-box adversarial attacks, we would like to demonstrate the efficacy of our method despite this limitation. Thus, we chose removing the top 3 original classes as a reasonable objective. In comparison with the QL-attack, the query efficiency of our method allows substantially more adversarial images to be created within the same cost budget, and our work is the first to show aggregate statistics for attacking a deployed machine learning service. We included 10 additional random samples in the Supplementary Material. For a non-trivial example, the second image in the Supplementary Material shows a case where a set of camera instruments is misclassified as a weapon after perturbation.\"}",
"{\"title\": \"Re: simple and effective blackbox attack based on random directions\", \"comment\": \"Comparison with Guo et al. 18:\\nThere may have been a misunderstanding. We do compare directly to the exact method by Guo et al. It is algorithm \\u201cLFBA\\u201d in Figure 2 and Table 1. LFBA stands for \\u201cLow Frequency Boundary Attack\\u201d, which is the terminology used by Guo et al.\\n\\nNovelty over Guo et al. 18: \\nWhile black-box adversarial examples in DCT space has certainly been studied in Guo et al.,\\nthe core component that makes SimBA drastically more efficient than other black-box attacks is that we take numerous small steps in random orthonormal directions, whether that be random pixel perturbations or the DCT basis. The fact that SimBA (without DCT) is already very competitive compared to all previous untargeted and targeted black-box attacks supports this claim. We consider this insight important to be shared with the community, because it shows that the problem of adversarial attacks may be much simpler than most of us had assumed.\"}",
"{\"title\": \"Re: simple algorithm, intriguing message\", \"comment\": \"Thank you for your review. We are equally surprised that the SimBA attack has not been discovered earlier - and that it can even outperform far more sophisticated approaches.\"}",
"{\"title\": \"simple algorithm, intriguing message\", \"review\": \"This paper demonstrates that a simple greedy random search algorithm in DCT space based on score feedback is able to synthesize adversarial examples with quite good query efficiency. The algorithm is demonstrated on ImageNet with three common architectures, showing much higher efficiency when sampling from the DCT basis. The algorithm is also shown to outperform state of the art attacks in terms of query count. Finally, a successful attack is demonstrated on Google Cloud Vision.\\n\\nWhile not particularly heavy on technicalities, this work does make a couple intriguing points, namely that adversarial attacks can potentially be quite easy to perform due to the inherent nature of high-dimensional classification, and that the space is in which the search is perform might be more important than the sophistication of the search itself. I interpret the proposal not so much as a claim to a state-of-the-art algorithm (even though the results are impressive) but as a very reasonable baseline in the evaluation of attack efficiency -- one might even wonder why it has not been common practice thus far to evaluate against such kinds of algorithms by default.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"simple and effective blackbox attack based on random directions\", \"review\": \"This paper presents a simple and effective black box adversarial attack on sota deep nets for image classification tasks. It is based on randomly picking a low frequency component of the DC Transform. It is claimed to be most efficient when compared to the sota methods in terms of number of queries required for the attack. It is shown that a median of 600 queries for resnet-50 for imagenet dataset, and 2500 for google cloud vision. Due to its simplicity, it is also claimed that the attack is quite simple to implement in code. The paper presents a detailed analysis of their attack in pixel and DCT space, targeted vs untargeted attack, comparison over different architecture such as Densenet, resnet, and inception.\\n\\nThough the work is quite important and presents a simple and effective baseline black box attack. My concern is primarily on the novelty and originality of the idea, as it is mainly based on the work of Guo etal 2018, which this paper says is the motivation behind their work. So, it is not clear what is the contribution of this paper, as a similar study seems to have been carried out in that paper as well. The authors do not clearly give the relative comparison wrt Guo etal 2018.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting black-box adversarial attack using DCT basis; performance evaluation on targeted attack is insufficient and threat model is inconsistent (unfair comparison)\", \"review\": \"This paper proposed a simple query-efficient \\\"score-based\\\" black-box attack based on iteratively perturbing an input image with a direction randomly sampled (w/o replacement) from a set of orthonormal bases. In particular, the authors proposed the use of low-frequency parts of DCT (discrete cosine transformation) as in (Guo 2018) to perform this task. Experimental results on ImageNet and three different classification models demonstrate the query efficiency of the proposed method -- able to achieve high attack success rate within fewer query budgets, where the visual distortion has an L2 norm threshold set to be 10. The authors also demonstrate an untargeted score-based black-box attack on Google CloudVision API.\\n\\nWhile the results seem promising, there are several issues that may potentially weaken the query-efficient claims made in this paper, especially due to the lack of sufficient attack comparisons (on smaller datasets) and inconsistent threat models when compared to existing works. My main concerns are summarized as follows.\\n\\n1. Unfair comparison due to inconsistent threat models (knowledge known to an attacker): the proposed method (simBA) is a \\\"score-based\\\" black-box attack, not a \\\"decision-based\\\" black-box attack. The proposed method assumes knowing the prediction likelihood (or prediction score) as the model output when performing black-box attacks, whereas the compared methods in black-box settings, such as Opt Attack and Boundary Attack, are \\\"decision-based\\\" attack that assumes only knowing the top-1 prediction label. Therefore, the query count comparison is meaningless and unfair, since these two methods require far less information from the model.\\n\\nOn the other hand, ZOO/AutoZOOM is a score-based attack. But ZOO can achieve a very low L2 distortion due to its coordinate descent nature. A fair comparison is to set the same L2 distortion for all score-based methods, and compare the median/avg query counts of each image to reach the same L2 distortion. The comparison to Opt-Attack / Boundary attack makes sense only if the proposed method (simBA) can also perform decision-based attack. Nonetheless, the query count to same-distortion comparison argument still holds. The authors should specify whether simBA can apply to the decision-only attack scenario. If so, how to implement and what is the performance?\\n\\nLastly, the QL attack (Ilyas 2018) can perform both score-based and decision-based attacks. So the authors should make the query comparison (to same L2 distortion) as well. According to a recent report (Table-1, NES column) in https://arxiv.org/pdf/1807.07978.pdf, the QL-attack has a comparable performance in terms of query counts as reported in this paper.\\n\\n2. More experiments on targeted black-box attacks: While untargeted attacks on Imagenet is a relatively easy task, I was a bit skeptical on the attacking performance of simBA in targeted attacks - since the selection of low-frequency bases directly limits the search space of adversarial examples, as opposed to arbitrary random directions adopted in QL-attack, Boundary-attack, and Opt-attack. 
It is also not clear how the target label is chosen in the targeted attack experiment.\", \"i_suggest_including_two_more_experiments_to_validate_the_function_of_simba\": \"(i) compare the performance of least-likely targeted attack (ii) show results on smaller datasets such as Cifar-10. As pointed out by the authors, Imagenet has too many image dimensions and make it more vulnerable to attack. Showing attacking results on smaller datasets can properly justify the value of the proposed attack, rather than the benefit from high dimensionality.\\n\\n\\n3. Novelty relative to LFBA (Guo) should be better differentiated: The idea of using DCT is originated from the LFBA paper. Since in that paper the authors also leveraged low-frequency DCT to perform black-box attacks, it is not clear to me what makes the proposed method perform better than the LFBA paper. The novelty and difference between this paper and the LFBA paper should be addressed.\\n\\n4. The Google Cloud Vision API attack is not too appealing - the tree label is still there and the trees are obviously present in the picture, while I appreciate the effect of removing the original top-3 labels. Can the authors show another set of non-trivial (more surprising) and targeted-attack experiments? Or simply do the same experiment using the same image (men snowing -> dog) as in the QL-attack.\\n\\n----\\nPost-rebuttal review\\n\\nI appreciate the authors' efforts in clarifying some of my concerns. However, I am still not convinced the comparison has been made fair. Many numbers from Table 1, such as ZOO, Opt-attack, QL-attack and AutoZOOM seem to be directly adapted from the papers rather than implemented and reproduced based on the same setting as the proposed attack. In particular, given that QL-attack is a published work, one of the state-of-the-art method and its codes has been released, I would really love to see a direct comparison using the same data samples and threat model. I would also like to emphasize that implementing all attacks under the same setting is crucial, since different attack methods may have a different criterion to determine attack successfulness. For example, QL-attack has some pre-defined distortion (L2 or Linfinity) for determining an adversarial example is successful, in addition to a different predicted class.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
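
Since the abstract above stresses that the attack fits in under 20 lines of PyTorch, a hedged reconstruction of the core loop may help orient the reader. The sketch works in the pixel basis; the paper's SimBA-DCT variant samples low-frequency DCT directions instead. The interface of `model` (assumed here to return class probabilities) and the step size `eps` are illustrative assumptions, not the authors' released code.

```python
# Sketch of the SimBA idea: step along random orthonormal directions,
# keeping a step only if it lowers the probability of the true class.
import torch

def simba(model, x, y, eps=0.2, max_queries=10000):
    n_dims = x.numel()
    perm = torch.randperm(n_dims)           # random order over basis vectors
    last_prob = model(x.unsqueeze(0))[0, y]
    for i in range(min(max_queries // 2, n_dims)):
        diff = torch.zeros(n_dims)
        diff[perm[i]] = eps                 # one basis direction at a time
        diff = diff.view_as(x)
        for sign in (1, -1):                # try adding, then subtracting
            prob = model((x + sign * diff).unsqueeze(0))[0, y]
            if prob < last_prob:            # greedy: keep any improvement
                x, last_prob = x + sign * diff, prob
                break
    return x
```

Each iteration costs at most two model queries, which is what drives the low query counts reported above; replacing the pixel basis with a set of low-frequency DCT basis vectors gives the DCT variant debated in the reviews.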
|
rJgbSn09Ym | Learning Particle Dynamics for Manipulating Rigid Bodies, Deformable Objects, and Fluids | [
"Yunzhu Li",
"Jiajun Wu",
"Russ Tedrake",
"Joshua B. Tenenbaum",
"Antonio Torralba"
] | Real-life control tasks involve matters of various substances---rigid or soft bodies, liquid, gas---each with distinct physical behaviors. This poses challenges to traditional rigid-body physics engines. Particle-based simulators have been developed to model the dynamics of these complex scenes; however, relying on approximation techniques, their simulation often deviates from real-world physics, especially in the long term. In this paper, we propose to learn a particle-based simulator for complex control tasks. Combining learning with particle-based systems brings in two major benefits: first, the learned simulator, just like other particle-based systems, acts widely on objects of different materials; second, the particle-based representation poses strong inductive bias for learning: particles of the same type have the same dynamics within. This enables the model to quickly adapt to new environments of unknown dynamics within a few observations. We demonstrate robots achieving complex manipulation tasks using the learned simulator, such as manipulating fluids and deformable foam, with experiments both in simulation and in the real world. Our study helps lay the foundation for robot learning of dynamic scenes with particle-based representations. | [
"Dynamics modeling",
"Control",
"Particle-Based Representation"
] | https://openreview.net/pdf?id=rJgbSn09Ym | https://openreview.net/forum?id=rJgbSn09Ym | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"SJxZ5WeXgV",
"BJxjLCJFJV",
"SJxC3glQyE",
"HyliBvymJN",
"H1gkmwJmJE",
"Bkey7AnqC7",
"HJlJ9385Cm",
"SkxqaaJbC7",
"rJg_Ehy-C7",
"BkxztqkZR7",
"HylEhDyWAm",
"S1gcPWZphX",
"ryehJ1s2n7",
"B1eZWv4q2m"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544909193076,
1544253011456,
1543860406273,
1543857987313,
1543857943396,
1543323159391,
1543298183444,
1542680001840,
1542679600336,
1542679162047,
1542678443641,
1541374305561,
1541349091570,
1541191416652
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1513/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1513/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1513/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1513/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1513/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1513/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1513/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The paper proposes a particle based framework for learning object dynamics. A scene is represented by a hierarchical graph over particles, edges between particles are established dynamically based on Euclidean distance. The model is used for model predictive control, and there is also one experiment with a particle graph built from a real scene as opposed to simulation.\\n\\nAll reviewers agree that the architectural changes over previous relational networks are worthwhile and merit publication. They also suggest to tone down the ``dynamic\\u201d part of the graph construction by stating that edges are determined based on a radius. In particular, previous works also consider similar addition of edges during collisions, quoting Mrowca et al. \\\"Collisions between objects are handled by dynamically defining pairwise collision relations ... between leaf particles...\\\" which suggests that comparison against a baseline for Mrowca et al. that uses a static graph is not entirely fair. The authors are encouraged to repeat the experiment without disabling such dynamic addition of edges.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"interesting model learning framework\"}",
"{\"title\": \"Follow-up on the rebuttal\", \"comment\": \"I went through the author's reply and the updated paper in detail. The major part of the authors' response above mainly argues about the differences of proposed method from Mrowca et.al. 2018. However, this is neither the question I asked nor had any concerns about.\\n\\nMy main concern is that the approach seems to be a direct application of the Interaction Graph Networks to the particle-based simulator. The only difference is that instead of maintaining a fully-connected graph, each particle is only connected to the near-by particles within distance d. However, the paper is written in a way which lefts the reader wonder about what is the new concept that is being introduced in \\\"Dynamic Particle Interaction\\\" networks. The authors' response above or the updated paper did not acknowledge this as an issue.\\n\\nThe authors have added error bars in Fig-3,5 which basically show that the results are within the error bars and hence not significant.\\n\\nHowever, I appreciate the to Mrowca et.al. 2018 and updating my rating from 5 to 6. But I still believe that the paper should tone down the emphasis on the \\\"dynamic\\\" graph part.\"}",
"{\"title\": \"Looking forward to your feedback\", \"comment\": \"Dear Reviewer 1,\\n\\nThanks again for your constructive comments. We have made substantial changes in the revision according to your review. In particular, we\\u2019ve included detailed comparisons with (Mrowca et al, 2018), ablation studies, and errors bars. As the discussion period is about to end, please don\\u2019t hesitate to let us know if there are any additional clarifications that we can offer, as we would love to convince you of the merits of the paper. Thanks!\"}",
"{\"title\": \"Thanks for your comments\", \"comment\": \"Dear Reviewer 2,\\n\\nWe would like to thank you again for your supportive response. Your comments have helped us improve the quality of the paper significantly.\"}",
"{\"title\": \"Thanks for your comments\", \"comment\": \"Dear Reviewer 3,\\n\\nThat\\u2019s great to hear. We\\u2019d like to thank you again for your very constructive comments, which have helped us improve the quality of the paper significantly.\"}",
"{\"title\": \"The Revision is a Significant Improvement\", \"comment\": \"Dear Authors,\\n\\nThank you very much for revising the paper and addressing my concerns.\", \"the_new_results_in_the_revision_look_quite_positive\": \"1) The comparison to (Mrowca et al., [1]) indicates that your model has higher performance. I particularly like the visualization in Figure 2-a highlighting the drawback of the unified dynamics in (Mrowca et al.).\\n2) The additional generalization results are very nice.\\n3) Figure 3 addresses the hyperparameter robustness question and unified dynamics question well.\\n\\nI also appreciate the more minor revisions (errorbars, text, ...).\\n\\nOverall I find the paper much more persuasive now, and am changing my review from a 5 (Marginally Below Acceptance Threshold) to an 8 (Top 50% of accepted papers, clear accept).\"}",
"{\"title\": \"General Response Cont.\", \"comment\": \"We thank all reviewers for their comments. We would like to emphasize again that our two main contributions are\\n1) A particle-based dynamics model that integrates multi-step propagation, hierarchical structure, and dynamic interaction graphs. Our model can simulate objects of different states (rigid bodies, deformable objects, and fluids), significantly outperforming previous methods.\\n2) Its application to control, with good results on deformable object manipulation both in simulation and on a real robot. In particular, previous papers on learning particle dynamics did not attempt to apply the learned model on control tasks.\", \"we_have_updated_our_paper_to_include_the_following_changes\": \"1) We have added qualitative and quantitative comparisons with Mrowca et al. [1] on all four environments (Figure 2 and Table 1). Our model significantly outperforms [1], especially on fluid and rigid body simulation. Please see our updated video for a side-by-side comparison.\\n2) We have included results on generalizing to fluids, deformable objects, and rigid objects that are larger than those in the training set. The results on extrapolation are in Appendix B. Our model generalizes well.\\n3) We have conducted ablation studies on hyperparameters (Section 4.2). We considered\\n a) the propagation step L,\\n b) the number of roots in the hierarchy, and\\n c) the neighborhood distance d.\\n4) We have included results on using a unified motion predictor for all objects (Section 4.2).\\n5) We have also added error bars and confidence intervals for all quantitative metrics.\\n6) We have updated Figure 1 and the introduction as suggested by reviewers.\\n\\n[1] Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins. \\\"Flexible Neural Representation for Physics Prediction.\\\" In NIPS, 2018.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you so much for the encouraging comments!\\n\\nOur paper is novel, because it first explores learning particle dynamics among fluids and rigid bodies, with a dynamically-built graph, and because it first applies the learned particle dynamics model for control, in both synthetic environments and on a real robot. There are also key differences between our paper and the inspiring early work from Mrowca et al. [1]: \\n1) We learn dynamics between particles that are in different physical states (e.g. between rigid bodies and fluids), while Mrowca et al. [1] focused on scenarios where all objects are in the same state, e.g. all soft or rigid bodies. \\n2) While Mrowca et al. used a static graph, our model uses an interaction graph built dynamically. Since maintaining a dynamic graph is crucial for simulating objects undergoing large deformation like fluids, this modification has enabled our model to work better in the more challenging cases mentioned above.\\n3) We have applied our model to more challenging control problems, including one on a real-world robot. In the other paper (Mrowca et al. [1]), however, this non-trivial task was not explored at all. Demonstrating the power of learned particle dynamics models on downstream control tasks is an important contribution to the learning and robotics community. It should not be undervalued. \\n\\nWe agree that it\\u2019ll be important to clearly discuss the differences and include additional comparisons. In our revision by Nov. 26, we will compare with Mrowca et al. [1] in all four environments we used, contrasting the two models\\u2019 capacity in simulating rigid bodies, elastic deformation, and fluids.\\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\\n\\n[1] Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins. \\\"Flexible Neural Representation for Physics Prediction.\\\" In NIPS, 2018.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you so much for your insightful and detailed comments!\\n\\n1. Our contributions and comparisons with baselines\\nThanks for the suggestions. Our paper is novel, because it first explores learning particle dynamics among fluids and rigid bodies, with a dynamically-built graph, and because it first applies the learned particle dynamics model for control, in both synthetic environments and on a real robot. Here, we summarize the three key differences between our paper and earlier works that serve as our inspiration [1, 2]: \\n\\n1) We learn dynamics between particles that are in different physical states (e.g. between rigid bodies and fluids), while other works focused on scenarios where all objects are in the same state, e.g. all soft or rigid bodies (Mrowca et al. [1]), or all fluids (Schenck and Fox [2]). \\n2) While Mrowca et al. used a static graph, our model uses an interaction graph built dynamically. Since maintaining a dynamic graph is crucial for simulating objects undergoing large deformation like fluids, this modification has enabled our model to work better in the more challenging cases mentioned above.\\n3) We have applied our model to more challenging control problems, including one on a real-world robot. In the other paper (Mrowca et al. [1]), however, this non-trivial task was not explored at all. Demonstrating the power of learned particle dynamics models on downstream control tasks is an important contribution to the learning and robotics community. It should not be undervalued.\", \"our_dynamics_model_includes_many_improvements_compared_with_the_vanilla_interaction_networks\": \"the use of dynamic graphs, multi-step propagations, and the hierarchical structure. While the hierarchical structure and multi-step propagations have been explored (Mrowca et al and Li et al), dynamically-built interaction graphs are new, the combination of these ideas is new, and most importantly, applying them to control is new.\\n\\nIn particular, a dynamically-built graph can better capture the inductive bias for many common physical systems like fluids and gases, as well as extrapolations to unseen environments (Fig. 3). We agree with the reviewer that learning to build dynamic graphs is definitely an important future direction, which we\\u2019re currently pursuing, but we\\u2019d also like to emphasize that the idea of updating the interaction graph based on the states of the particles is conceptually important and empirically useful.\\n\\nWe agree that it\\u2019ll be important to clearly discuss the differences and include additional comparisons. In our revision by Nov. 26, we will compare with Mrowca et al. [1] in all four environments we used, contrasting the two models\\u2019 capacity in simulating rigid bodies, elastic deformation, and fluids.\\n\\n2. Ablation studies\\nWe\\u2019ll add confidence intervals and error bars to Fig. 3 and 5. The improvements of the simulation results in Fig. 3 are significant: on average, the rollout loss decreases by 85% with the use of dynamic graphs. We will also add more comparisons with related works demonstrating the effectiveness of our introduced techniques.\\n\\nThe hyperparameters used in the paper are not specific to environments. The supplementary material shows that most hyperparameters are kept the same across scenarios. 
We will provide systematic analyses regarding the sensitivity of the hyperparameters in the revision, for\\n1) the propagation step L,\\n2) the number of roots in the rigid/deformable bodies, and\\n3) the neighborhood distance d.\\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\\n\\n[1] Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins. \\\"Flexible Neural Representation for Physics Prediction.\\\" In NIPS, 2018.\\n[2] Connor Schenck, Dieter Fox. \\u201cSPNets: Differentiable Fluid Dynamics for Deep Neural Networks.\\u201d In CoRL 2018.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you so much for the insightful and detailed comments!\\n\\n1. Novelty and Baselines\\nThanks for the suggestions. Our paper is novel, because it first explores learning particle dynamics among fluids and rigid bodies, with a dynamically-built graph, and because it first applies the learned particle dynamics model for control, in both synthetic environments and on a real robot. Here, we summarize the three key differences between our paper and earlier works that serve as our inspiration [1, 2]: \\n\\n1) We learn dynamics between particles that are in different physical states (e.g. between rigid bodies and fluids), while other works focused on scenarios where all objects are in the same state, e.g. all soft or rigid bodies (Mrowca et al. [1]), or all fluids (Schenck and Fox [2]). \\n2) While Mrowca et al. used a static graph, our model uses an interaction graph built dynamically. Since maintaining a dynamic graph is crucial for simulating objects undergoing large deformation like fluids, this modification has enabled our model to work better in the more challenging cases mentioned above.\\n3) We have applied our model to more challenging control problems, including one on a real-world robot. In the other paper (Mrowca et al. [1]), however, this non-trivial task was not explored at all. Demonstrating the power of learned particle dynamics models on downstream control tasks is an important contribution to the learning and robotics community. It should not be undervalued.\\n\\nWe agree that it\\u2019ll be important to clearly discuss the differences and include additional comparisons. In our revision by Nov. 26, we will compare with Mrowca et al. [1] in all four environments we used, contrasting the two models\\u2019 capacity in simulating rigid bodies, elastic deformation, and fluids.\\n\\n2. Presentation\\nThanks. As suggested, we will update figure 1 to include a diagram of the model demonstrating our key contribution: the joint use of dynamic graphs, hierarchical structure, and multi-step propagation. We\\u2019ll add confidence interval and error bars and revise the introduction.\\n\\n3. Rigid body representation\\nState-specific dynamics models perform better and are equally efficient. We use state-specific models because there are only a few states of interest (solids, liquids, and soft bodies), and their physical behaviors are drastically different. We agree with the reviewer that additional comparisons are important: we\\u2019ll add experiments with a unified dynamics model.\\n\\nThe motion of a rigid body only has six degrees of freedom. Predicting per-particle movement does not work well, compared with predicting a global rigid motion, because particles will deform and scatter. We will also add a comparison to the revision.\\n\\n4. Generalization\\nWe will show more (extrapolate) generalization results in RiceGrip and BoxBath to demonstrate that our method can generalize to novel environments that are larger than the training ones.\\n\\n5. Hyperparameters\\nThe hyperparameters used in the paper are not specific to environments. The supplementary material shows that most hyperparameters are kept the same across scenarios. 
We will provide systematic analyses regarding the sensitivity of the hyperparameters in the revision, for\\n1) the propagation step L,\\n2) the number of roots in the rigid/deformable bodies, and\\n3) the neighborhood distance d.\\n\\nWe have also listed all other planned changes in our general response above. Please don\\u2019t hesitate to let us know for any additional comments on the paper or on the planned changes.\\n\\n[1] Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins. \\\"Flexible Neural Representation for Physics Prediction.\\\" In NIPS, 2018.\\n[2] Connor Schenck, Dieter Fox. \\u201cSPNets: Differentiable Fluid Dynamics for Deep Neural Networks.\\u201d In CoRL 2018.\"}",
"{\"title\": \"General Response\", \"comment\": \"We want to thank all the reviewers for their insightful comments. Here, we will explain again the contribution of our paper, address some common concerns, and summarize the changes we intend to include in our revisions.\\n\\nThis paper presents a model for learning particle dynamics using a dynamically-built interaction graph. The model works well in simulating and controlling objects under many different states, including rigid bodies, deformable objects, and fluids, both in simulation and on a real robot. We believe that demonstrating the power of learned particle dynamics models on downstream control tasks is an important contribution to the learning and robotics community that should not be undervalued.\\n\\nWe agree with the reviewers that it\\u2019s important to compare our paper with other related papers and clarify their differences. Here, we summarize the three key differences between our paper and earlier works that serve as our inspiration [1, 2]: \\n1) We learn dynamics between particles that are in different physical states (e.g. between rigid bodies and fluids), while other works focused on scenarios where all objects are in the same state, e.g. all soft or rigid bodies (Mrowca et al. [1]), or all fluids (Schenck and Fox [2]). \\n2) While Mrowca et al. [1] used a static graph, our model uses an interaction graph built dynamically. Since maintaining a dynamic graph is crucial for simulating objects undergoing large deformation like fluids, this modification has enabled our model to work better in the more challenging cases mentioned above.\\n3) We have applied our model to more challenging control problems, including one on a real-world robot. In the other paper (Mrowca et al. [1]), however, this non-trivial task was not explored at all. \\n\\nIn our revision by Nov. 26, we will include\\n1) A comparison with Mrowca et al. [1] in all four environments we used, contrasting the two models\\u2019 capacity in simulating rigid bodies, elastic deformation, and fluids.\\n2) Experiments with a unified dynamics model.\\n3) Systematic analyses of the sensitivity of the hyperparameters. \\n\\nPlease don\\u2019t hesitate to let us know if there is any additional comment on the intended changes.\\n\\n[1] Damian Mrowca, Chengxu Zhuang, Elias Wang, Nick Haber, Li Fei-Fei, Joshua B. Tenenbaum, Daniel L. K. Yamins. \\\"Flexible Neural Representation for Physics Prediction.\\\" In NIPS, 2018.\\n[2] Connor Schenck, Dieter Fox. \\u201cSPNets: Differentiable Fluid Dynamics for Deep Neural Networks.\\u201d In CoRL 2018.\"}",
"{\"title\": \"Little Novelty over Prior Work\", \"review\": \"The authors present an algorithm for learning the dynamics prediction of deformable and fluid bodies by modeling them as (potentially hierarchical) systems of many interacting particles. This model applies a shared encoder to the particle states (positions and velocities), a shared relation network to nearby pairs, and a shared propagator network to the summed relation network outputs. In some cases this process is applied in a multi-scale hierarchical fashion. The authors demonstrate accurate rollouts of system dynamics and usefulness for manipulative control of deformable objects.\\n\\nI find the motivation in the introduction persuasive and the algorithmic approach sound. I also like the application to RL. However, I do have some concerns, as follows:\\n\\n1) The novelty of the method is questionable. Specifically, the hierarchical interaction network proposed here seems extremely similar to the prior (and cited) paper (Mrowca et al., 2018), which the authors do not directly compare against. If there is a non-negligible difference between the two algorithms, then the authors should explicitly discuss the difference and empirically compare the two, in order to benefit others in the community who otherwise would not know which to use.\\n\\n2) The paper would benefit a lot from a diagram of the model. Specifically, it would be good to have a diagram of the hierarchical interaction network demonstrating the multiscale propagation. This could go in Figure 1, perhaps replacing elements (b) and (d) of the current Figure 1, which in my opinion are unnecessary and can be removed.\\n\\n3) The paper uses domain-specific hyperparameters, yet does not discuss or analyze the effects of these hyperparameters much. Specifically, for this method to be useful to others, we would like to know how to choose (i) the propagation step L, (ii) the number of roots, and (iii) the neighborhood distance d. In the paper, these numbers are chosen differently for the different environments without explanation. Graphs showing performance on each task over a range of values of these parameters would be good (perhaps in the supplementary material). Also, using the same hyperparameters for all environments (or at least a common generating function) would help support the generality of this model.\\n\\n4) The treatment of rigid bodies seems a bit hand-held. Specifically, to determine the dynamics of rigid bodies, there is a ground-truth calculation which calculates computes the velocity and angular velocity of the body from the model predictions for its constituent particles. Furthermore, if I understand correctly, there is a different motion predictor network for those particles in a rigid body than those in the surrounding fluid --- is this correct? If so, this raises the questions: (i) What happens if the same motion-predictor network is used for all particles, and (ii) What happens if the ground-truth rigid dynamics calculation is not done, so the model has to do all the work? It would be interesting to have these as baselines.\\n\\n5) It would be nice to see more generalization results. There is only one generalization experiment, testing for generalization over particle number in FluidShake. However, the FluidShake model is not hierarchical. The hierarchical models are a big emphasis in the paper, so showing generalization on BoxBath or RiceGrip would be much more meaningful.\\n\\n6) No confidence intervals for the quantitative results. 
Confidence intervals would be good to see in the table in Figure 3-a. Also, the bar graph in Figure 5 really would benefit from errorbars --- it is difficult to determine if the results are significant.\\n\\n7) While the text is generally clear and definitely understandable, I have a couple of comments about it: (i) The last three paragraphs of the introduction are repetitive and I think they can be removed, or at least shortened a lot. There are also quite a number of grammatical errors throughout the paper, though it is still comprehensible.\", \"edit\": \"In their revision the authors addressed these concerns well and the paper is much more convincing (see longer comment below). In light of this I have changed my rating from a 5 to an 8.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Similar to Interaction Networks; Missing Comparisons to Baselines\", \"review\": \"[Paper Summary]\\nThis paper tackles the problem of learning dynamics of non-rigid objects in a physics simulator. This learned dynamics can then be used for planning later. The non-rigid objects are represented via a particle-based system. The dynamics model is learned using NVIDIA's particle-based simulator \\\"Flex\\\". The main idea is to adapt Interaction Networks [Battaglia, 2016] which was earlier proposed for rigid-body simulators to particle-based simulators. Instead of maintaining interactions at the level of objects as in [Battaglia, 2016], the proposed approach models interaction at the level of particles.\\n\\n[Paper Strengths]\\nThe paper is clearly written and tackles an important research problem. The existing literature is presented well.\\n\\n[Paper Weaknesses]\\n=> The introduction and the text in the first two pages seem to be introducing a new way to model \\\"dynamic\\\" interactions between particles for handling non-rigid transformations. However, upon reading the method section, the approach seems to be a direct application of the Interaction Graph Networks (originally applied to the rigid-body simulator) to the particle-based simulator. The only difference is that instead of maintaining a fully-connected graph (memory and computational bottleneck), each particle is only connected to the near-by particles within distance d.\\n\\n=> One of the major issue with the paper is the experimental section of the paper. Since the proposed method is quite incremental over the prior work, a strong empirical section is must to justify the approach. Here are the comments:\\n - Since the proposed approach is an adaptation of [Battaglia, 2016], it should be compared to other existing methods. The experiment section in its current state does not compare to any baseline. The well-written related work (section-2) talks about (Mrowca et.al. 2018) and (Schenck and Fox, 2018) as the works which investigate learning dynamics of deformable objects using a particle-based simulator. However, no comparison is provided to either of the methods. Hence, it is not possible to judge the quality of the presented results.\\n\\n - All results in Figure-5 or Figure-3 are quite close to each other. It is not clear whether the improvement is significant or not since the error bars are not provided at all.\\n\\n - No ablation is performed to test the sensitivity of the proposed method with respect to the hyper-parameters introduced; for instance, the distance 'd'.\\n\\n=> The name \\\"Dynamic Particle Interaction\\\" is overloaded with terms, especially, the use of word 'dynamic' here just refers to the interaction of particles to model deformable objects. This \\\"dynamic\\\" interaction is not \\\"learned\\\" but simply hard-coded by deleting the edges which are farther than d distance apart and adding near ones. Something like \\\"Particle-level Interaction Networks\\\" would be a more honest description of the approach.\\n\\n[Final Recommendation]\\nI request the authors to address the comments raised above and will decide my final rating based on that. With the current set of experiments, the paper doesn't seem to be ready yet.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Impressive work\", \"review\": \"This work demonstrates that a particle dynamics model can be learned to approximate the interaction of various objects. The resulting differentiable simulator has a strong inductive bias, which makes it possible to efficiently solve complex manipulation tasks over deformable objects.\\n\\n# Quality\\n\\nThis work is an impressive proof-of-concept of the capabilities of differentiable programming for learning complex (physical) processes, such as particle dynamics. In my opinion, the resulting particle interaction network would deserve publication for itself. However, this work goes already one step further and demonstrates that the resulting differentiable simulator can be used for the manipulation of deformable objects.\\n\\nThe method is evaluated on a well-rounded set of experiments which demonstrates its potential. More real-world experiments would be welcome to leave any doubt.\", \"edit\": \"This work is actually quite similar to 1806.08047. A proper discussion of the differences should be included.\\n\\n# Significance\\n\\nThis work will certainly be of interest for several research communities, including deep learning, physics, control and robotics.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
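
The reviews above turn on the "dynamic" interaction graph: edges exist only between particles within a radius d and are recomputed at every step. As a rough illustration of one propagation step (not the paper's full hierarchical model), assuming small `relation_net` and `propagator_net` MLPs with matching feature sizes:

```python
# Toy sketch of one dynamic-graph propagation step: rebuild edges from
# current positions, compute pairwise messages, aggregate, and update.
import torch

def propagate(pos, state, relation_net, propagator_net, d=0.08):
    # pos: N x 3 particle positions; state: N x F hidden features.
    dist = torch.cdist(pos, pos)                          # N x N distances
    src, dst = ((dist < d) & (dist > 0)).nonzero(as_tuple=True)
    # One message per edge, built from both endpoint states and their offset.
    edge_in = torch.cat([state[src], state[dst], pos[src] - pos[dst]], dim=-1)
    msg = relation_net(edge_in)                           # E x F messages
    agg = torch.zeros_like(state).index_add_(0, dst, msg)  # sum per receiver
    return propagator_net(torch.cat([state, agg], dim=-1))
```

The "dynamic" part is exactly the `dist < d` test, i.e. a hard-coded neighborhood recomputation rather than anything learned, which is the reading AnonReviewer3 and AnonReviewer1 push back on in their reviews.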
|
H1zeHnA9KX | Representing Formal Languages: A Comparison Between Finite Automata and Recurrent Neural Networks | [
"Joshua J. Michalenko",
"Ameesh Shah",
"Abhinav Verma",
"Richard G. Baraniuk",
"Swarat Chaudhuri",
"Ankit B. Patel"
] | We investigate the internal representations that a recurrent neural network (RNN) uses while learning to recognize a regular formal language. Specifically, we train a RNN on positive and negative examples from a regular language, and ask if there is a simple decoding function that maps states of this RNN to states of the minimal deterministic finite automaton (MDFA) for the language. Our experiments show that such a decoding function indeed exists, and that it maps states of the RNN not to MDFA states, but to states of an {\em abstraction} obtained by clustering small sets of MDFA states into ``superstates''. A qualitative analysis reveals that the abstraction often has a simple interpretation. Overall, the results suggest a strong structural relationship between internal representations used by RNNs and finite automata, and explain the well-known ability of RNNs to recognize formal grammatical structure. | [
"Language recognition",
"Recurrent Neural Networks",
"Representation Learning",
"deterministic finite automaton",
"automaton"
] | https://openreview.net/pdf?id=H1zeHnA9KX | https://openreview.net/forum?id=H1zeHnA9KX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hye-GnHgl4",
"BygbNmtukV",
"B1gaaOqVk4",
"BkeSqOcN14",
"HJg-yN9V1N",
"BkewlgpRAQ",
"HylORAYn0Q",
"ByxyXEWcA7",
"H1lkffW5R7",
"SygggMZ90X",
"S1eeAxb9Am",
"B1xVWwh3hm",
"HJllpRIqn7",
"Bkl2Tb__nX",
"S1eW-odEn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"comment"
],
"note_created": [
1544735753188,
1544225576963,
1543968964954,
1543968908716,
1543967704535,
1543585774978,
1543442127641,
1543275543195,
1543275014670,
1543274984143,
1543274696398,
1541355259550,
1541201592493,
1541075395969,
1540815608864
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1511/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1511/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1511/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1511/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1511/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1511/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1511/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1511/AnonReviewer3"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper presents experiments showing that a linear mapping existing between the hidden states of RNNs trained to recognise (rather than model) formal languages, in the hope of at least partially elucidating the sort of representations this class of network architectures learns. This is important and timely work, fitting into a research programme begun by CL Giles in 92.\\n\\nDespite its relatively low overall score, I am concurring with the assessment made by reviewer 1, whose expertise in the topic I am aware of and respect. But more importantly, I feel the review process has failed the authors here: reviewers 2 and 3 had as chief concern that there were issues with the clarity of some aspects of the paper. The authors made a substantial and bona fide attempt in their response to address the points of concern raised by these reviewers. This is precisely what the discussion period of ICLR is for, and one would expect that clarity issues can be successfully remedied during this period. I am disappointed to have seen little timely engagement from these reviewers, or willingness to explain why they are stick by their assessment if not revisiting it. As far as I am concerned, the authors have done an appropriate job of addressing these concerns, and given reviewer 1's support for the paper, I am happy to add mine as well.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Acceptable\"}",
"{\"title\": \"concerns remain\", \"comment\": \"To summarize my understanding of the author's rebuttal, they're saying that the key result isn't that linear decoders achieve high accuracy in decoding the abstract DFA states, but is instead that the abstract DFAs that are recovered from the \\\"hierarchical clustering\\\" process bear some kind of resemblance to the original DFA. Three points about this\\n\\n1.) If this interpretability of the \\\"clusters\\\" is the real crux of the paper, instead of the decodeability referred to in the title, then the title and introduction of the paper really should reflect this.\\n2.) I'm not sure what the integers and percentages inside the DFA state diagrams in figure 6 are (I asked about this in my original review but I didn't see an answer unfortunately). As a result, I don't know how the authors mean to interpret the dendrograms built on top of the state diagrams.\\n3.) Without knowing exactly what interpretation the authors intend to draw from those dendrograms I don't want to be too categorical about this, but I will say that whatever the interpretation is, its seems very likely to be subject to the cherry-picking issue that R2 brought up. It seems to me like drawing any useful general conclusion from these two examples would be challenging.\\n\\nTo summarize, (1) the authors responses to my and R2s questions/criticisms suggest that main text of the paper obscures the basic logic of the work, and (2) that basic logic seems to rest entirely on the interpretation of just two examples. \\n\\nBoth of these points seem quite problematic, so at this time my score remains below the acceptance recommendation threshold.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for pointing us in the direction of these recent works. Both of them encompass the general connection between RNNs and Automata in some way which we believe is a fruitful area of research relating to interpretable models. One extension of our work would be to utilize other RNN frameworks, such as Giles 2nd order RNN which we would hypothesis would likely encode an automata with more accuracy than vanilla RNNs because of it's original intention to do so. We believe that our work can be thought of a parallel to the work in [1]. The work in [2] seems equally important and relevant. Although our work is not focused on extracting Automata from RNNs but rather relating the underlying representations, we believe our work resonates with this paper as well. We will cite both of these papers in our related works section.\"}",
"{\"title\": \"Please consider our latest response to reviewer 2\", \"comment\": \"Reviewer 3, we have addressed many of your concerns in our response to reviewer 2 above. We would like to emphasize that there are 2 significant misunderstanding about the core of our logical conclusions of our paper. We have clarified them in our response to reviewer 2 above. We ask that reviewer 2, reviewer 3, and the area chair please consider these clarifications as we believe that they will significantly affect the evaluation of our work.\"}",
"{\"title\": \"Clarifying some significant misunderstandings\", \"comment\": \"We believe there are two significant misunderstandings here. First, Reviewer 3 states \\u201cif we find the output classes the decoder is most often confused between, then merge them into one class, the decoder's performance increases -- trivially.\\u201d The word \\u201ctrivially\\u201d is the problem here, as merging two classes that are easily confused by a highly trained classifier can actually be quite informative. Consider the example of a trained face recognition classifier that easily confuses identical twins. If we merge Twin1 and Twin2 into a single new superclass Twins = {Twin1, Twin2} then the resulting classifier will certainly perform better and for good reason: the twins are highly related and thus have similar looks. Iterating this kind of confusion-based merging is a valid form of hierarchical clustering (e.g. merging plants together, and then animals together, etc. to learn a taxonomy). In short, the increase in prediction accuracy after merging is \\u201ctrivial\\u201d, but the interpretation for why an increase occurs is certainly not: finding classes that are easily confused is important information about the similarity metric learned by the classifier.\\n\\nThe second misunderstanding involves our earlier response to Reviewer 3 where we state \\u201cour paper\\u2019s intention was to not make a logical connection between RNNs and automata (\\u2026).\\u201d This statement has been taken out of context. The critical part of that sentence is in the \\u201c(...)\\u201d, namely \\u201cbased on this observation...\\u201d. Without that context, it seems like we are negating the core conclusion of our paper -- that there is indeed a connection between the hidden state space of the RNN and that of the MDFA. However, that was not our intent. We were just trying to convey that our conclusion is not based on that particular observation(\\u201cIt is true that in a classification problem, if you merge the most confused classes together, classification accuracy increases...\\u201d); instead, it is based on our experimental results, namely, that merging the most confusable MDFA states yields dendrograms that really tell us important information about how the RNN hidden states are organized. We show two of these dendrograms in the paper (EMAILS and DATES). We emphasize strongly that these are not in any way cherry-picked examples. As stated in our response to Reviewer 1 above: \\u201cOur intention behind showing an EMAILS and DATES regular expressions that were formed outside of the aforementioned framework was to show how a typical, easily interpretable recognition algorithm is encoded by the RNN. We didn\\u2019t want the reader to be distracted by the regular expression itself but rather bring light to the interpretation of the dendrograms in section 4.5.\\u201d In order to alleviate any concerns, we will include figures of the randomly sampled MDFAs and corresponding dendrograms in the Appendix in the final version of the paper.\\n\\nAs to recognition accuracy, 81% of the RNNs in the linear decoding experiments met the minimum language recognition test accuracy of 95%. If we reduce the threshold to 90%, the fraction increase to 89%.\"}",
"{\"title\": \"Reviewer 3 please consider this response\", \"comment\": \"Reviewer 3, thank you for your review. Does the author response (above) address your major concern? If not, please take the remaining few days to follow on in this discussion. If you are in a position to reconsider your assessment, please do so, and if you stand by your score, please provide a short explanation as to where the rebuttal falls short.\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"As to the definition of \\u201clow\\u201d, the fact that it\\u2019s lower than a random baseline doesn\\u2019t mean that it\\u2019s absolutely *low* (this term is not even well-defined). I am not sure if this is that important, but it seems to be a major claim of this paper which is repeated several times and feels at the very least inaccurate. More importantly, this relates to AnonReviewer3\\u2019s concern: there is a simpler explanation to these results which the authors are not addressing. I have seen the authors\\u2019 response to this concern and was not convinced: if (to cite the authors\\u2019 response), \\u201cour paper\\u2019s intention was to not make a logical connection between RNNs and automata (\\u2026)\\u201d, then significant parts of the paper need to be re-written. Based on the response, the contribution of this paper largely relies on two cherry-picked examples.\", \"as_to_the_response\": \"\\u201cFor a recognizer RNNs to be included in the decoding experiments, we required a minimum classification test accuracy of 95%.\\u201d: which proportion of the cases meet this threshold?\"}",
"{\"title\": \"Thank you for the in-depth review\", \"comment\": \"We thank the reviewer for a careful and thorough review of our paper.\\n \\nIt is true that in a classification problem, if you merge the most confused classes together, classification accuracy increases trivially. However, our paper\\u2019s intention was to not make a logical connection between RNNs and automata based on this observation, but rather to show that on a per-example basis, the most confused states that are merged reveal geometric interpretations behind how the RNN encodes the MDFA. By analyzing the accuracy vs coarseness curves (Figure 7) alongside the dendrograms (Figure 6) for two regular expressions that have a real-world interpretation, we gain a novel interpretation of the similarity between the internal representation of the RNN and the MDFA. We consistently find that MDFA states that are linearly inseparable by the decoder often refer to the same pattern in the original regular expression. Our abstraction method provides an interpretable relationship between these two states as evidenced by our dendrograms. We provide two dendrogram results specifically for regular expressions with clear meaning to showcase these patterns. We will provide more dendrograms in the final version to show how consistent the patterns are.\\n\\n-Why is the definition of the \\u201caccuracy\\u201d measurement \\\\rho more complicated than expected at first glance? \\nThe accuracy measure is a quantitative measure predicated on \\\\delta, f(h_t) and f(h_{t+1}), because we need these mappings to capture the structural similarities between RNNs and the abstraction of the MDFA. The accuracy is an average of averages where we calculate the average over a dataset of strings D, with strings of varying lengths. For each individual string we are interested in the number of alphabets for which the decoding f(.) respects the transition \\\\delta in the MDFA, when transitioning from h_t to h{t+1}.\\n\\nWe agree with all of the minor comments and clarity concerns that the reviewer has and will address them in the final version of our paper.\"}",
"{\"title\": \"Thank you for the in-depth questions and comments 2/2\", \"comment\": \"-The regular expression in Figure 6 is incorrect.\\nWe thank the reviewer for finding this error. We will replace it with the correct regular expression \\u201c[a-d]+@[a-d]+.[v-z]{2,3}\\u201d in the final version.\\n \\n-How come Figure 3a goes up to 1.1? Isn\\u2019t it bounded by 1?\\nYou are correct that the decoding accuracy mean chart in Figure 3a is bounded by 1. The reason for the unbounded nature is that the error bars represent one standard deviation above and below the estimate of the mean accuracy, which doesn\\u2019t necessarily respect the bound as we modeled it as a Gaussian random variable. We agree with the reviewer that the top error bars should be bounded by 1 and will fix this in the final version by using a more appropriate representation such an interquartile ranges. \\n\\n \\n-It is not clear how the shuffling of the characters is considered an independent distribution. The negative sampling procedure should appear in the main text.\\nWe believe the reviewer is referring to the term \\u201cindependent\\u201d used in the Appendix under the \\u201cData Generation\\u201d section, which is unclear. We did not intend to evoke the statistical meaning, but rather to explain how the two sampling procedures are different. In the camera-ready version of the paper we will replace the word in the sentence with \\u201cmuch different\\u201d to clarify.\"}",
"{\"title\": \"Thank you for the in-depth questions and comments 1/2\", \"comment\": \"We thank the reviewer for the in-depth questions and comments, and look forward to any follow-up questions or concerns.\\n \\n-The authors claim that the RNN states map to FSA states with *low* coarseness, but Figure 3b (which is never referred to in text\\u2026) shows that in most cases the ratio of coarseness is at least 1/3, and in some cases > 1/2.\\nWe define coarseness to be \\u201clow\\u201d when the number of abstractions needed to reach 90% decoding accuracy, as in Figure 3b, is low relative to the number of abstractions needed to reach such a decoding accuracy when abstractions are formed randomly, as opposed to our greedy method of abstracting states. In figure 4a, the area under each plotted curve will be higher if the decoder is able to reach higher accuracies with a fewer number of abstractions (\\u201clower coarseness\\u201d.) Following this logic, we have plotted the average area under the curve (AUC) for our strategy, along with the strategy of randomly abstracting states in the appendix of our paper. The added benefit of our method can be seen over random by the increase in average AUC for each collection of MDFAs with M states. We show that the AUC is highest when employing our greedy strategy, indicating that the coarseness is indeed \\u201clow\\u201d with respect to other abstraction strategies.\\n\\n-What is the conceptual difference between the two accuracy definitions?\\nThe conceptual difference between decoding accuracy and transitional accuracy are two levels of abstraction to viewing the map \\\\hat{f}. Decoding accuracy asks how well \\\\hat{f} can map the RNN state to the abstracted NFA state, which is essentially asking a membership query, while preserving the MDFA transitions. Transitional accuracy asks if the mapping accurately preserves the transitions from state s_t to s_{t+1} on the given input a_t in the abstracted NFA. The decoding accuracy requires that the transitions of the MDFA are preserved by the mapping \\\\hat{f}, while the transitional accuracy considers the transitions in the abstraction.\\n \\n-Which RNN was used? Which model? Which parameters? Which training regime?\\nWe performed an extensive hyperparameter search, varying number of hidden units and layers, mini-batch size, dropout rates, learning rates, and max number of training epochs. The best performing architecture -- one that is able to achieve high validation accuracies across the wide range of regular languages used in our framework -- is a 2 layer, 50 hidden unit vanilla RNN, trained via SGD for 100 epochs with a mini-batch size of 30, dropout probability of 0.4, and learning rate of 0.0003. The inputs to the model was optimized to predict a binary variable under a cross entropy loss. We will include these details in the final paper.\\n \\n-How were the regular expressions sampled?\\nWe randomly sample expressions using a probabilistic context free grammar based on the specification in the bk.brics.automata java documentation (http://www.brics.dk/automaton/doc/dk/brics/automaton/RegExp.html).Two examples are shown in the appendix of the expressions sampled by our framework. Our intention behind showing an EMAILS and DATES regular expressions that were formed outside of the aforementioned framework was to show how a typical, easily interpretable recognition algorithm is encoded by the RNN. 
We didn\\u2019t want the reader to be distracted by the regular expression itself but rather bring light to the interpretation of the dendrograms in section 4.5.\\n \\nFor transparency and reproducibility, we will release the source code for our framework.\\n \\n-What is the basic accuracy of the RNN Recognizer?\\nFor a recognizer RNNs to be included in the decoding experiments, we required a minimum classification test accuracy of 95%. We will add this detail in the final version of the paper.\"}",
"{\"title\": \"Thank you for your feedback\", \"comment\": \"We appreciate the reviewers' comments and suggestions. If the reviewer has any additional follow-up comments or questions, we welcome them.\\n\\n-Why are the testing accuracies not generally proportional to the complexity of the MDFA? The most complex MDFA of 14 nodes does not have the lowest testing accuracies.\\nIn Figure 4, the testing accuracies are not proportional to the complexity of the MDFA due to our method of generating MDFAs in our experiments. Regular expressions are randomly generated by our pipeline and the resulting MDFA is created from the regular expression. We choose to sample in the space of regular expressions as opposed to the space of DFAs because sampling in regular expression space is more meaningful; that is, a valid regular expression that is generated is guaranteed to result in a DFA with desired behavior. If we were to sample in DFA space, it is possible that the resulting DFAs may have had unreachable states and other undesirable behavior. There is, however, no straightforward relationship in terms of complexity between MDFAs and their corresponding regular expressions, leading to the slight differences in proportionality seen in Figures 4 and 5.\\n\\n-Why not use a simple CFG or PCFG to generate training sequences?\\nWe choose regular expressions to generate training sequences for their simplicity, as they allow us to interpret the hidden state of the RNN in terms of the clearly defined states that constitute a regular expressions\\u2019 corresponding DFA. There is a substantial amount of literature on the relationship between RNNs and DFAs, but given the little literature surrounding complex regular expressions and DFAs, we want to rigorously explore this space before moving to grammars further up the Chomsky Hierarchy, such as CFGs. Using a CFG or PCFG is a logical next step for our work and is indeed a motivating example.\\n\\n-Is it possible to generate a regular expression randomly to feed into the RNN?\\nYes, is it possible to randomly generate the regular expressions. In our paper, we have developed a framework (Figure 1) for randomly generating regular expressions. At the bottom of section 4.1, we mention that the experiments and results we present are utilizing a dataset of ~500 randomly generated regular expressions in order to get the statistically significant results required in section 4.2, 4.3, and 4.4. \\n\\n-It would be nice to provide more examples?\\nWe agree with this suggestion. Due to space constraints, we did not include more in the main text. We will add more examples to the appendix of the final version of the paper.\"}",
"{\"title\": \"Interesting exploratory research, some more examples are desired\", \"review\": \"This paper investigates internal working of RNN, by mapping its hidden states\\nto the nodes of minimal DFAs that generated the training inputs and its \\nabstractions. Authors found that in fact such a mapping exists, and a linear\\ndecoder suffices for the purpose. \\nInspecting some of the minimal DFAs that correspond to regular expressions, \\ninduced state abstractions are intuitive and interpretable from a viewpoint of\\ntraining RNNs by training sequences.\\n\\nThis paper is interesting, and the central idea of using formal languages to\\ngenerate feeding inputs is good (in fact, I am also doing a different research\\nthat also leverages a formal grammar with RNN).\\n\\nMost of the paper is clear, so I have only a few minor comments:\\n\\n- In Figures 4 and 5, the most complex MDFA of 14 nodes does not have the\\n lowest testing accuracies. In other words, testing accuracies is not\\n generally proportional to the complexity of MDFA. Why does this happen?\\n\\n- As noted in the footnote in page 5, state abstraction is driven by the idea\\n of hierarchical grammars. Then, as briefly noted in the conclusion, why not\\n using a simple CFG or PCFG to generate training sequences? \\n In this case, state abstractions are clear by definition, and it is curious\\n to see if RNN actually learns abstract states (such as NP and VP in natural\\n language) through mapping from hidden states to abstracted states.\\n\\n- Because this paper is exploratory, I would like to see more examples\\n beyond only the two in Figure 6. Is it possible to generate a regular \\n expression itself randomly to feed into RNN?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, serious clarity problems\", \"review\": \"This paper aims to show that an RNN trained to recognize regular languages effectively focuses on a more abstract representation of the FSA of the corresponding language.\\n\\nUnderstanding the type of information encoded in the hidden states of RNNs is an important research question. Recent results have shown connections between existing RNN architectures and both weighted (e.g., Chen et al., NAACL 2018, Peng et al., EMNLP 2018) and unweighted (Weiss et al., ACL 2018) FSAs. This paper asks a simple question: when trained to recognize regular languages, do RNNs converge on the same states as the corresponding FSA? While exploring solutions to this question is potentially interesting, there are significant clarity issues in this paper which make it hard to understand it. Also, the main claim of the paper \\u2014 that the RNN is focusing on a low level abstraction of thew FSA \\u2014 is not backed-up by the results.\", \"comments\": [\"\\u2014 The authors claim that the RNN states map to FSA states with *low* coarseness, but Figure 3b (which is never referred to in text\\u2026) shows that in most cases the ratio of coarseness is at least 1/3, and in some cases > 1/2.\", \"\\u2014 Clarity:\", \"While the introduction is relatively clear starting from the middle of section 3 there are multiple clarity issues in this paper. In the current state of affairs it is hard for me to evaluate the full contribution of the paper.\", \"The definitions in section 3 were somewhat confusing. What is the conceptual difference between the two accuracy definitions?\", \"When combining two states, does the new FSA accept most of the strings in the original FSAs? some of them? can you quantify that? Also, figure 6 (which kind of addresses this question) would be much more helpful if it used simple expressions, and demonstrated how the new FSA looks like after the merge.\", \"section 4 leaves many important questions unanswered:\", \"1. Which RNN was used? which model? which parameters? which training regime? etc.\", \"2. How were the expressions sampled? the authors mention that they were randomly sampled, so how come they talk about DATE and EMAIL expressions?\", \"3. What is the basic accuracy of the RNN classifier (before decoding)? is it able to learn to recognize the language? to what accuracy?\", \"Many of the tables and figures are never referred to in text (Figure 3b, Figure 5)\", \"In Figure 6, there is a mismatch between the regular expression (e.g., [0-9]{3}\\u2026.) and the transitions on the FSA (a-d, @).\", \"How come Figure 3a goes up to 1.1? isn\\u2019t it bounded by 1? (100%?)\", \"The negative sampling procedure should be described in the main text, not the appendix. Also, it is not clear how come shuffling the characters is considered an independent distribution.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Well written paper -- One major concern\", \"review\": \"Paper Summary -\\nThe authors trained RNNs to recognize formal languages defined by random regular expressions, then measured the accuracy of decoders that predict states of the minimal deterministic finite automata (MDFA) from the RNN hidden states. They then perform a greedy search over partitions of the set of MDFA states to find the groups of states which, when merged into a single decoder target, maximize prediction accuracy. For both the MDFA and the merged classes prediction problems, linear decoders perform as well as non-linear decoders.\\nClarity - The paper is very clear, both in its prose and maths.\\nOriginality - I don't know of any prior work that approaches the relationship between RNNs and automata in quite this way.\\nQuality/Significance - I have one major concern about the interpretation of the experiments in this paper.\", \"the_paper_seems_to_express_the_following_logic\": \"1 - linear (and non-linear) decoders aren't so good at predicting MDFA states from RNN hidden states\\n2 - if we make an \\\"abstract\\\" finite automata (FA) by merging states of the MDFA to optimize decoder performance, the linear (and non-linear) decoders are much better at predicting this new, smaller FA's states.\\n3 - thus, trained RNNs implement something like an abstract FA to recognize formal languages.\\n\\nHowever, a more appropriate interpretation of these experiments seems to be:\\n1 - (same)\\n2 - if we find the output classes the decoder is most often confused between, then merge them into one class, the decoder's performance increases -- trivially. in other words, you just removed the hardest parts of the classification problem, so performance increased. note: performance also increases because there are fewer classes in the merged-state FA prediction problem (e.g., chance accuracy is higher).\\n3 - thus, from these experiments it's hard to say much about the relationship between trained RNNs and finite automata.\\n\\nI see that the \\\"accuracy\\\" measurement for the merged-state FA prediction problem, \\\\rho, is somewhat more complicated than I would have expected; e.g., it takes into account \\\\delta and f(h_t) as well as f(h_{t+1}). Ultimately, this formulation still asks whether any state in the merged state-set that contains f(h) transitions under the MDFA to the any state in the merged state-set that contains f(h_{t+1}). As a result, as far as I can tell the basic logic of the interpretation I laid out still applies.\\n\\nPerhaps I've missed something -- I'll look forward to the author response which may alleviate my concern.\\n\\nPros - very clearly written, understanding trained RNNs is an important topic\\nCons - the basic logic of the conclusion may be flawed (will await author response)\\n\\nMinor -\\nThe regular expression in Figure 6 (Top) is for phone numbers instead of emails.\\n\\\"Average linear decoding accuracy as a function of M in the MDFA\\\" -- I don't think \\\"M\\\" was ever defined. From contexts it looks like it's the number of nodes in the MDFA.\\n\\\"Average ratio of coarseness\\\" -- It would be nice to be explicit about what the \\\"ratio of coarseness\\\" is. 
I'm guessing it's (number of nodes in MDFA)/(number of nodes in abstracted DFA).\\nWhat are the integers and percentages inside the circles in Figure 6?\\nFigures 4 and 5 are difficult to interpret because the same (or at least very similar) colors are used multiple times.\\nI don't see \\\"a\\\" (as in a_t in the equations on page 3) defined anywhere. I think it's meant to indicate a symbol in the alphabet \\\\Sigma. Maybe I missed it.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"comment\": \"This is a nice piece of work, well-written, on a hot topic, providing an interesting novel approach and some important insights.\", \"i_would_like_to_point_out_2_recent_works_on_the_matter_that_could_be_interesting_to_discuss_in_the_paper_if_accepted\": \"- In [1], the authors prove the equivalence between linear 2-order RNN and weighted automata. The linearity restriction clearly echoes the one of this paper.\\n\\n- In [2], the authors show that non-linear RNN can be efficiently approximated by weighted automata, suggesting as strong link between the states of the automata and the inner representation of RNN, as in this paper.\\n\\n[1] Connecting Weighted Automata and Recurrent Neural Networks through Spectral Learning, Guillaume Rabusseau, Tianyu Li, Doina Precup, https://arxiv.org/abs/1807.01406\\n\\n[2] Explaining Black Boxes on Sequential Data using Weighted Automata, Stephane Ayache, Remi Eyraud, Noe Goudian, https://arxiv.org/abs/1810.05741\", \"title\": \"Nice paper\"}"
]
} |
|
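The central dispute in the record above — whether merging the decoder's most-confused MDFA states is informative or trivial — is easiest to see by running the procedure. The sketch below shows the two steps under discussion: fit a linear decoder from hidden states to MDFA states, then greedily merge the pair of states with the largest mutual confusion and re-decode, tracing out an accuracy-vs-coarseness curve like the one the authors describe. Synthetic random arrays stand in for real RNN hidden states and labels, and sklearn's LogisticRegression stands in for the paper's decoder; both substitutions are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
n, hidden_dim, n_states = 2000, 50, 6
H = rng.normal(size=(n, hidden_dim))   # stand-in for RNN hidden states h_t
y = rng.integers(0, n_states, size=n)  # stand-in for MDFA state labels

def decode(H, labels):
    # Linear decoder f: multinomial logistic regression on hidden states.
    clf = LogisticRegression(max_iter=1000).fit(H, labels)
    pred = clf.predict(H)
    return (pred == labels).mean(), confusion_matrix(labels, pred)

labels = y.copy()
while len(np.unique(labels)) > 1:
    acc, cm = decode(H, labels)
    print(f"{len(np.unique(labels))} abstract states: accuracy = {acc:.3f}")
    # Greedy step: merge the pair of states with the largest mutual confusion.
    np.fill_diagonal(cm, 0)
    a, b = np.unravel_index(np.argmax(cm + cm.T), cm.shape)
    if a == b:  # decoder already perfect; nothing left worth merging
        break
    present = np.unique(labels)  # confusion_matrix rows follow sorted labels
    labels[labels == present[b]] = present[a]
```

Running this makes both readings of the result visible at once: accuracy necessarily rises as classes merge (R3's "trivial" effect), while *which* states get merged first is the taxonomic information the authors argue the dendrograms capture.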
B1exrnCcF7 | Disjoint Mapping Network for Cross-modal Matching of Voices and Faces | [
"Yandong Wen",
"Mahmoud Al Ismail",
"Weiyang Liu",
"Bhiksha Raj",
"Rita Singh"
] | We propose a novel framework, called Disjoint Mapping Network (DIMNet), for cross-modal biometric matching, in particular of voices and faces. Different from the existing methods, DIMNet does not explicitly learn the joint relationship between the modalities. Instead, DIMNet learns a shared representation for different modalities by mapping them individually to their common covariates. These shared representations can then be used to find the correspondences between the modalities. We show empirically that DIMNet is able to achieve better performance than the current state-of-the-art methods, with the additional benefits of being conceptually simpler and less data-intensive. | [
"cross-modal matching",
"voices",
"faces"
] | https://openreview.net/pdf?id=B1exrnCcF7 | https://openreview.net/forum?id=B1exrnCcF7 | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"HkxYjkGKxN",
"r1gPRr2jT7",
"r1lCSghsam",
"ByxLt0iipQ",
"r1e5d52hhQ",
"Hkxxm_Qsh7",
"HkxvRZesnQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545310113192,
1542337998986,
1542336582248,
1542336126394,
1541356145597,
1541253144146,
1541239246566
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1510/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1510/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1510/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1510/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1510/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1510/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1510/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"All reviewers agree that the proposed method interesting and well presented. The authors' rebuttal addressed all outstanding raised issues. Two reviewers recommend clear accept and the third recommends borderline accept. I agree with this recommendation and believe that the paper will be of interest to the audience attending ICLR. I recommend accepting this work for a poster presentation at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"metareviw\"}",
"{\"title\": \"Rebuttal for reviewer 2\", \"comment\": \"We sincerely appreciate the review for the recognition of our novelty and many valuable suggestions.\\n\\nOur main contribution mainly lies in proposing a cross modal matching framework called DIMNet, which learns a shared representation for different modalities by mapping them individually to their common covariates. Our basic intuition is that if the learned embeddings of voices and faces can be correctly classified by a unified (linear) classifier, the embeddings of the same class should be in a common decision region and close to each other.\\nCompared to the existing work [3,4], the supervision could be any combination of covariates, which enables us to isolate and analyze the effect of the individual covariate to the learned embeddings. Moreover, DIMNet makes better use of the multiple covariates in the course of training. \\n\\nIn order to perform fair comparisons, we exactly follow the experimental setup in pioneering work [3,4], and achieve significant improvements compared to these strong baselines [3,4].\\n\\nQ1. In my opinion, perhaps the only exception ... in order to make the article self-contained.\\nA1. We thank the reviewer for this suggestion. We do mention the two scenarios in the paper, but the reviewer is right, we do not explicitly introduce them. We now do so in the updated paper.\\n\\nIn summary, the audio data we used in Section 3.4 is the same as those in other experiment sections, while the visual data is extracted from the video frames in VoxCeleb dataset at 25/6 fps. For fair comparison, we follow the train/val/test split strategy from [4] and evaluate our DIMNet models under Seen-Heard (closed-set) and Unseen-Unheard (open-set)scenarios. More details can be found in the updated paper.\", \"action_taken\": \"Added the above discussions about covariates to introduction section.\\n\\nQ4. In my opinion, this calls into question the hypothesis ..., thanks to not requiring (face image, audio recording) pairs as input.\\nA4. More efficient usage of the data is indeed one of the advantages of our DIMNet framework, as we state in both the introduction and the discussion. And this is achieved, by design, by exploiting (and explicitly modelling) the dependence between the modalities and covariates in a generalizable manner. The outcomes we observe in our experiments are entirely to be expected, from our hypothesis, and we believe that the rather detailed set of experiments (and the analyses in our appendix) show that the results are not merely fortuitous. As indicated by our experiments, DIMNet-I achieves 83.45% accuracy on 1:2 matching task since ID is undoubtedly the most informative covariate. Even using less informative covariates, DIMNet-G still achieves 72% matching accuracy.\\n \\nQ5. Typos\\nA5. We thank the reviewer for the pointing out the typos. All the typos are fixed in the updated paper.\\n\\\"... image.mGiven ...\\\" -> \\\"... image. Given ...\\\"\\n|Fv||Ff| -> ||Fv||_2||Ff||_2\\n\\\"Here we are give a probe input ...\\\" -> Here we are given a probe input \\u2026\\u201d\\n\\n[3] Nagrani, Arsha, et al. \\\"Seeing voices and hearing faces: Cross-modal biometric matching.\\\" IEEE CVPR 2018.\\n[4] Nagrani, Arsha, et al. \\\"Learnable PINs: Cross-Modal Embeddings for Person Identity.\\\" arXiv preprint arXiv:1805.00833 (2018).\\n[5] Chung, Joon Son, et al. \\\"Out of time: automated lip sync in the wild.\\\" ACCV, 2016.\"}",
"{\"title\": \"Rebuttal for reviewer 3\", \"comment\": \"We thank the reviewer for the very positive and encouraging review.\\n\\nQ1. My feeling is that paired positive examples are easier to obtain (e.g., from unlabeled video) than inputs labeled with these covariates, although paired negative examples require labeling and so may be as difficult to obtain.\\n\\nA1. We agree with the reviewer. Compared to covariates, the pairwise label is usually easier to obtain. However, some challenges still exist for collecting the examples from video, making it a non-trivial problem. For example, the cases of reaction shots, flashbacks and dubbing in videos may result in noisy labels. Previous work [6] investigated the use of the paired data in self-supervised learning manner, where SyncNet [7] is adopted to obtain the speaking faces.\\n\\nFor our paper, we focus on proposing a DIMNet framework to learn embeddings for cross-modal matching with the given cross-modal data and their labeled covariates. How to collect data is perhaps beyond the scope of this paper but could be an interesting direction for our future work.\\n\\nQ2. Typos\\nA2. We thank the reviewer for pointing out the typos. All the typos are fixed in the updated paper.\", \"citations\": \"we have carefully checked the citations and accordingly fixed them one by one .\", \"figures\": \"The waveforms have been replaced by log Mel-spectrograms.\\n\\u201cstate or art\\u201d -> \\u201cstate-of-the-art\\u201d\\n\\u201cmGiven\\u201d -> \\u201cGiven\\u201d\\n\\\"Nagrani et al. Nagrani et al. (2018b)\\\" -> \\u201cNagrani et al. (2018b)\\u201d; typo in Table 2 is fixed\\n\\u201cG,N\\u201d -> \\\"G, N\\\"\\n\\n[6] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \\\"Learnable PINs: Cross-Modal Embeddings for Person Identity.\\\" arXiv preprint arXiv:1805.00833 (2018).\\n[7] Chung, Joon Son, and Andrew Zisserman. \\\"Out of time: automated lip sync in the wild.\\\" Asian Conference on Computer Vision. Springer, Cham, 2016.\"}",
"{\"title\": \"Rebuttal for reviewer 1\", \"comment\": \"We thank the reviewer for the recognition of the novelty and the detailed experimental evaluation in our contribution.\\n\\nQ1. Fixing the output dimension to d (for both voice and image-based CNN outputs) could lead to unstable results. Indeed, the comparison of voice and face-based covariate estimates are not entirely fair due to the intrinsic dimensionality can vary for each domain. Alternatives as canonical correlation analysis can be coupled to joint properly both domains.\\nA1. In order to compare embeddings from two modalities (domains), the dimensionality of the embeddings need to be the same. We agree with the reviewer that the intrinsic dimensionality of data in different modalities (domains) could vary. However, it does not contradict the fact that these data can be well represented by the identical-dimensioned embeddings through CNNs, and most importantly, the performance (in the following table) is very stable within a wide range of embedding dimension, showing that the accuracy is not sensitive to the embedding dimension. The idea of using the identical-dimensioned embeddings is also adopted by [1] and [2].\", \"the_accuracies_of_dimnet_i_with_different_embedding_dimensions_on_1\": \"2 matching experiments\\n-------------------------------------------------------------------------------\\nDimension 32 64 128 256 512\\n-------------------------------------------------------------------------------\\nDIMNet-I 82.20 83.45 83.87 83.43 83.16\\n-------------------------------------------------------------------------------\", \"action_taken\": \"Added one row of chance level results to Table 4 with analysis.\\n\\n[1] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \\\"Learnable PINs: Cross-Modal Embeddings for Person Identity.\\\" arXiv preprint arXiv:1805.00833 (2018).\\n[2] Kim, Changil, et al. \\\"On Learning Associations of Faces and Voices.\\\" arXiv preprint arXiv:1805.05553 (2018).\"}",
"{\"title\": \"Covariates factors are learned from voice and image data using CNNs. A logistic classifier is trained for cross-modal matching from covariates.\", \"review\": \"Authors aim to reveal relevant dependencies between voice and image data (under a cross-modal matching framework) through common covariates (gender, ID, nationality). Each covariate is learned using a CNN from each provided domain (speak recordings and face images), then, a classifier is determined from a shared representation, which includes the CNN outputs from voice-based and image-based covariate estimations. The idea is interesting, and the paper ideas are clear to follow.\", \"pros\": [\"New insights to support cross-modality matching from covariates.\", \"Competitive results against state-of-the-art.\", \"-Convincing experiments.\"], \"cons\": \"-Fixing the output dimension to d (for both voice and image-based CNN outputs) could lead to unstable results. Indeed, the comparison of voice and face-based covariate estimates are not entirely fair due to the intrinsic dimensionality can vary for each domain. Alternatives as canonical correlation analysis can be coupled to joint properly both domains.\\n- Table 4 - column ID results are not convincing (maybe are not clear for me).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review of Disjoint Mapping Network for Cross-modal Matching of Voices and Faces\", \"review\": \"# Summary\\n\\nThe article proposes a deep learning-based approach aimed at matching face images to voice recordings belonging to the same person. \\n\\nTo this end, the authors use independently parametrized neural networks to map face images and audio recordings -- represented as spectrograms -- to embeddings of fixed and equal dimensionality. Key to the proposed approach, unlike related prior work, these modules are not directly trained on some particular form of the cross-modal matching task. Instead, the resulting embeddings are fed to a modality-agnostic, multiclass logistic regression classifier that aims to predict simple covariates such as gender, nationality or identity. The whole system is trained jointly to maximise the performance of these classifiers. Given that (face image, voice recording) pairs belonging to the same person must share equal for these covariates, the neural networks embedding face images and audio recordings are thus indirectly encouraged to map face images and voice recordings belonging to the same person to similar embeddings.\\n\\nThe article concludes with an exhaustive set of experiments using the VGGFace and VoxCeleb datasets that demonstrates improvements over prior work on the same set of tasks.\\n\\n# Originality and significance\\n\\nThe article follows-up on recent work [1, 2], building on their original application, experimental setup and model architecture. The key innovation of the article, compared to the aforementioned papers, lies on the idea of learning face/voice embeddings to maximise their ability to predict covariates, rather than by explicitly trying to optimise an objective related to cross-modal matching. While the fact that these covariates are strongly associated to face images and audio recordings had already been discussed in [1, 2], the idea of actually using them to drive the learning process is novel in this particular task.\\n\\nWhile the article does not present substantial, general-purpose methodological innovations in machine learning, I believe it constitutes a solid application of existing techniques. Empirically, the proposed covariate-driven architecture is demonstrated to lead to better performance in the (VGGFace, VoxCeleb) dataset in a comprehensive set of experiments. As a result, I believe the article might be of interest to practitioners interested in solving related cross-modal matching tasks.\\n\\n# Clarity\\n\\nThe descriptions of the approach, related work and the different experiments carried out are written clearly and precisely. Overall, the paper is rather easy to read and is presented using a logical, easy-to-follow structure.\\n\\nIn my opinion, perhaps the only exception to that claim lies in Section 3.4. If possible, I believe the Seen-Heard and Unseen-Unheard scenarios should be introduced in order to make the article self-contained. \\n\\n# Quality\\n\\nThe experimental section is rather exhaustive. Despite essentially consisting of a single dataset, it builds on [1, 2] and presents a solid study that rigorously accounts for many factors, such as potential confounding due to gender and/or nationality driving prediction performance in the test set. \\n\\nMultiple variations of the cross-modal matching task are studied. 
While, in absolute terms, no approach seems to have satisfactory performance yet, the experimental results seem to indicate that the proposed approach outperforms prior work.\\n\\nGiven that the authors claimed to have run 5 repetitions of the experiment, I believe reporting some form of uncertainty estimates around the reported performance values would strengthen the results.\\n\\nHowever, I believe that the success of the experimental results, more precisely, of the variants trained to predict the \\\"covariate\\\" identity, call into question the very premise of the article. Unlike gender or nationality, I believe that identity is not a \\\"covariate\\\" per se. In fact, as argued in Section 3.1, the prediction task for this covariate is not well-defined, as the set of identities in the training, validation and test sets are disjoint. In my opinion, this calls into question the hypothesis that what drives the improved performance is the fact that these models are trained to predict the covariates. Rather, I wonder if the advantages are instead a \\\"fortunate\\\" byproduct of the more efficient usage of the data during the training process, thanks to not requiring (face image, audio recording) pairs as input.\\n\\n# Typos\\n\\nSection 2.4\\n1) \\\"... image.mGiven ...\\\"\\n2) Cosine similarity written using absolute value |f| rather than L2-norm ||f||_{2}\\n3) \\\"Here we are give a probe input ...\\\"\\n\\n# References\\n\\n[1] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \\\"Learnable PINs: Cross-Modal Embeddings for Person Identity.\\\" arXiv preprint arXiv:1805.00833 (2018).\\n[2] Nagrani, Arsha, Samuel Albanie, and Andrew Zisserman. \\\"Seeing voices and hearing faces: Cross-modal biometric matching.\\\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Networks that predict covariates of multimodal inputs like identity and gender produce better representations for cross-modal matching and retrieval tasks than directly predicting cross-modal matches. Paper and well written and experiments are thorough.\", \"review\": \"This paper aims at matching people's voices to the images of their faces. It describes a method to train shared embeddings of voices and face images. The speech and image features go through separate neural networks until a shared embedding layer. Then a classification network is built on top of the embeddings from both networks. The classification network predicts various combinations of covariates of faces and voices: gender, nationality, and identity. The input to the classification network is then used as a shared representation for performing retrieval and matching tasks.\\n\\nCompared with similar work from Nagrani et al (2018) who generate paired inputs of voices and faces and train a network to classify if the pair is matched or not, the proposed method doesn't require paired inputs. It does, however, require inputs that are labeled with the same covariates across modalities. My feeling is that paired positive examples are easier to obtain (e.g., from unlabeled video) than inputs labeled with these covariates, although paired negative examples require labeling and so may be as difficult to obtain.\\n\\nSeveral different evaluations are performed, comparing networks that were trained to predict all subsets of identity, gender, and nationality. These include identifying a matching face in a set of faces (1,2 or N faces) for a given voice, or vice versa. Results show that the network that predicts identity+gender tends to work best under a variety of careful examinations of various stratifications of the data. These stratifications also show that while gender is useful overall, it is not when the gender of imposters is the same as that of the target individual. The results also show that even when evaluating the voices and faces not shown in the training data, the model can achieve 83.2% AUC on unseen/unheard individuals, which outperforms the state-of-the-art method from Nagrani et al (2018).\\n\\nAn interesting avenue of future work would be using the prediction of these covariates to initialize a network and then refine it using some sort of ranking loss like the triplet loss, contrastive loss, etc.\", \"writing\": [\"Overall, ciations are all given in textual form Nagrani et al (2018) (in latex this is \\\\citet{} or \\\\cite{}), when many times parenthetical citations (Nagrani et al, 2018) (in latex this is \\\\citep{}) would be more appropriate.\", \"The image of the voice waveform in Figures 1 and 2 should be replaced by log Mel-spectrograms in order to illustrate the network's input.\", \"\\\"state or art\\\" instead of \\\"state-of-the-art\\\" on page 3.\", \"In subsection 2.4: \\\"mGiven\\\" is written instead of \\\"Given\\\".\", \"On Page 6 Section 3.1 \\\"1:2 matching\\\" paragraph. \\\"Nagrani et al.\\\" is written twice. * * Page 6 mentions that there is a row labelled \\\"SVHF-Net\\\" in table 2, but there is no such row is this table.\", \"Page 7 line 1, \\u201cG,N\\u201d should be \\\"G, N\\\".\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
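The DIMNet record above turns on one design choice: the voice and face encoders are never trained on a matching objective, only on predicting shared covariates through a single classifier, so no (voice, face) pairs are required. A minimal PyTorch sketch of that training signal, and of cosine-similarity matching at test time, is given below; the MLP encoders, feature sizes, and random tensors are placeholder assumptions (the paper uses CNNs on spectrograms and face images).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

d, n_ids = 64, 100  # shared embedding size and number of identity classes
voice_enc = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, d))
face_enc = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, d))
classifier = nn.Linear(d, n_ids)  # one classifier shared by both modalities

opt = torch.optim.Adam(
    list(voice_enc.parameters()) + list(face_enc.parameters())
    + list(classifier.parameters()), lr=1e-3)

# One training step: unpaired voice and face batches, each labeled with the
# common covariate (identity here); no (voice, face) pairs are needed.
voices, voice_ids = torch.randn(32, 512), torch.randint(0, n_ids, (32,))
faces, face_ids = torch.randn(32, 1024), torch.randint(0, n_ids, (32,))
loss = (F.cross_entropy(classifier(voice_enc(voices)), voice_ids)
        + F.cross_entropy(classifier(face_enc(faces)), face_ids))
opt.zero_grad()
loss.backward()
opt.step()

# Test-time 1:2 matching: pick the gallery face whose embedding has the higher
# cosine similarity <Fv, Ff> / (||Fv||_2 ||Ff||_2) with the probe voice.
probe = F.normalize(voice_enc(torch.randn(1, 512)), dim=1)
gallery = F.normalize(face_enc(torch.randn(2, 1024)), dim=1)
print((probe @ gallery.T).argmax(dim=1))
```

Because the covariate labels are common to both modalities, embeddings of the same identity from either encoder are pushed into the same decision region of the shared classifier — which is the intuition the authors state in their rebuttal to reviewer 2.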
B1lxH20qtX | Learning to control self-assembling morphologies: a study of generalization via modularity | [
"Deepak Pathak",
"Chris Lu",
"Trevor Darrell",
"Philip Isola",
"Alexei A. Efros"
] | Much of contemporary sensorimotor learning assumes that one is already given a complex agent (e.g., a robotic arm) and the goal is to learn to control it. In contrast, this paper investigates a modular co-evolution strategy: a collection of primitive agents learns to self-assemble into increasingly complex collectives in order to solve control tasks. Each primitive agent consists of a limb and a neural controller. Limbs may choose to link up to form collectives, with linking being treated as a dynamic action. When two limbs link, a joint is added between them, actuated by the 'parent' limb's controller. This forms a new 'single' agent, which may further link with other agents. In this way, complex morphologies can emerge, controlled by a policy whose architecture is in explicit correspondence with the morphology. In experiments, we demonstrate that agents with these modular and dynamic topologies generalize better to test-time environments compared to static and monolithic baselines. Project videos are available at https://doubleblindICLR19.github.io/self-assembly/ | [
"modularity",
"compostionality",
"graphs",
"dynamics",
"network"
] | https://openreview.net/pdf?id=B1lxH20qtX | https://openreview.net/forum?id=B1lxH20qtX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1ekS70gxV",
"r1evvCoYyV",
"HJxIz-sYkV",
"BklxiCctkN",
"rJgsB2uYkE",
"HklWuqsM0Q",
"HJxzS5iGRQ",
"Syldx5oMC7",
"Skx_oPizAm",
"HJerl2q3h7",
"HkgwSoPq2Q",
"HJlv_HUI2Q",
"ryxNdJrpiQ",
"HyxDkjCwjm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"comment"
],
"note_created": [
1544770358745,
1544302174846,
1544298765999,
1544298135653,
1544289347216,
1542793833355,
1542793785981,
1542793712476,
1542793119790,
1541348333273,
1541204799271,
1540937071327,
1540341612473,
1539988191486
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1509/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1509/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1509/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1509/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1509/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1509/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1509/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1509/Authors"
],
[
"(anonymous)"
]
],
"structured_content_str": [
"{\"metareview\": \"Strengths: A co-evolution of body connectivity and its topology mimicing control policy is presented.\", \"weaknesses\": \"Reviewers found the paper to be lacking in detail. The importance of message passing in achieving the given results is clear on one example but not some others. Some reviewers had questions regarding the baseline comparisons.\\nThe authors provided lengthy details in responses on the discussion board, but reviewers likely had limited time to fully reread the many changes that were listed.\", \"ac\": \"The physics in the motions shown in the video require signficant further explanation. It looks like the ball joints can directly attach themselves to the ground, and make that link stand up. Thus it seems that the robots are not underactuated and can effectively grab arbitrary points in the environment. Also it is strange to see the robot parts dynamically fly together as if attracted by a magnet. The physics needs significant further explanation.\", \"points_of_contention\": \"The R2 review is positive on the paper (7), with a moderate confidence (3).\\nR1 contributed additional questions during the discussion, but R2 and R3 were silent.\\n\\nThe AC further examined the submission (paper and video). \\nThe reviewers and the AC are in consensus regarding\\nthe many details that are behind the system that are still not understood. The AC is also skeptical\\nof the non-physical nature of the motion, or the unspecified behavior of fully-actuated contacts\\nwith the ground.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"many missing details; strange physics;\"}",
"{\"title\": \"Final thoughts? R3?\", \"comment\": \"We are coming to the end of the discussion phase.\\nThank you for the discussion around R1 comments.\\nHearing back from R3 would be very useful.\\nWe do realize that everyone's time is limited.\\n-- area chair\"}",
"{\"title\": \"Added Pseudo-code of the algorithm in reply to R1\", \"comment\": \"Dear Reviewers:\\n\\nR1 suggested us to provide a pseudo-code of the overall algorithm so as to improve the understanding of the algorithm. We have provided the pseudo-code of our DGN algorithm below in reply to R1. We will add this pseudo-code to the paper as soon as open-review permits update. We will also make our code publicly available.\\n\\nLooking forward to your comments!\", \"psuedo_code_in_reply_to_r1\": \"https://openreview.net/forum?id=B1lxH20qtX¬eId=BklxiCctkN\"}",
"{\"title\": \"Pseudo Code of the DGN Algorithm\", \"comment\": \"Thank you for the suggestion of adding a pseudo-code of the overall algorithm. We provide the pseudo-code of our DGN algorithm below exactly as it is implemented in code line-by-line. Note that all these equations and parameters are already defined in Section 3.3. We will add this pseudo-code to the final version of the paper as open-review allows update. Hope this addresses your concern. We will also make our code publicly available.\\n\\nLooking forward to your reply!\\n\\n----------------------------------------------------\\n[Notation Summary as already defined in Section 3.3]\\n----------------------------------------------------\", \"for_each_node_i\": \"First compute \\\\pi_{\\\\theta_1}^i (s_t^i, m_t^{C_i}) = m_t^i\\n Then compute \\\\pi_{\\\\theta_2}^i (m_t^i, m_t^{p_i}) = [a_t^i, \\\\hat{m}_t^i]\", \"where\": \"s_t^i: observation state of agent limb i\\n a_t^i: action output of agent limb i: {3 torques, attach, detach}\\n m_t^{C_i}: aggregated message from children nodes input to agent i (bottom-up-1)\\n m_t^i: output message that agent i passes to its parent (bottom-up-2)\\n m_t^{p_i}: message from parent node to agent i (top-down-1)\\n \\\\hat{m}_t^i: message from agent i to its children\\n => \\\\theta: {\\\\theta_1, \\\\theta_2}\\n => messages are 32 length floating point vectors.\\n\\n----------------------------------------------------\\n[Pseudo-code: Bottom-up, Top-down DGN]\\n----------------------------------------------------\\n1. Initialize parameters {\\\\theta_1, \\\\theta_2} randomly.\\n Initialize all message vectors {m_t^{C_i}, m_t^i, m_t^{p_i}, \\\\hat{m}_t^i} to be zero.\\n\\n2. Represent graph connectivity $G$ as [collection of nodes i, edges between nodes i]\", \"note\": \"In the beginning, all edges are zeros, i.e., non-existent\\n\\n3. Begin loop {for each time step t}\\n4. Each limb agent i observes its own state vector s_t^i\\n5. Begin loop {for each agent i}\\n6. # Compute incoming child messages\\n m_t^{C_i} = 0\\n for each child node c of agent i in $G$:\\n m_t^{C_i} += m_t^c\\n\\n7. # Compute message to parent p of agent i in $G$:\\n m_t^i := \\\\pi_{\\\\theta_1}^i (s_t^i, m_t^{C_i})\\n\\n8. # Compute action and messages to children of agent i:\\n a_t^i, \\\\hat{m}_t^i := \\\\pi_{\\\\theta_2}^i (m_t^i, m_t^{p_i})\\n\\n9. # Execute morphology change as per a_t^i\\n if a_t^i[3]==attach:\\n find closest agent j within distance d from agent i, otherwise j=NULL\\n add edge (i,j) in $G$\\n also make physical joint between (i,j)\\n if a_t^i[4]==detach:\\n delete edge (i, parent of i) in $G$\\n also delete physical joint between (i,j)\\n\\n10. # Execute torques from a_t^i\\n Apply torques a_t^i[0], a_t^i[1], a_t^i[2]\\n11. End loop\\n\\n12. # Update message variables\\n Begin loop {for each agent i}\\n let p be parent of agent i in $G$\", \"if_p_is_null\": \"set m_t^{p_i} to be zero\", \"else\": \"m_t^{p_i} = \\\\hat{m}_t^i\\n End loop\\n\\n13. # Update graph and agent morphology\\n Find all connected components in $G$\\n Begin loop {for each connected component}\\n Begin loop {for each agent i in the connected component}\\n reward r_t^i = reward of corresponding connected component (e.g. max height)\\n End loop\\n End loop\\n\\n14. End loop\\n\\n15. 
Update \\\\theta = {\\\\theta_1,\\\\theta_2} to maximize joint discounted reward:\\n \\\\max_{\\\\theta} \\\\mathbb{E} \\\\sum_{agent i} [\\\\sum_t r_t^i]\\n This is exactly same as an ordinary reinforcement learning objective.\\n\\n => We optimize it by using off-the-shelf PPO to update \\\\theta, i.e., {\\\\theta_1,\\\\theta_2} as follows:\\n let \\\\vec{a}_t = [a_t^1, a_t^2.. a_t^n]\\n \\\\vec{s}_t = [s_t^1, s_t^2.. s_t^n]\\n \\\\hat{A}_t = advantage of discounted rewards, r_t = \\\\sum_{agent i} r_t^i\", \"ppo\": \"\\\\max_{\\\\theta} \\\\mathbb{E} [\\n \\\\hat{A}_t\\\\frac{ \\\\pi_\\\\theta(\\\\vec{a}_t|\\\\vec{s}_t) }{ \\\\pi_\\\\thetaOld(\\\\vec{a}_t|\\\\vec{s}_t) }\\n - \\\\beta KL(\\\\pi_\\\\thetaOld(.|\\\\vec{s}_t) || \\\\pi_\\\\theta(.|\\\\vec{s}_t)) ]\", \"hyper_parameters\": \"Section 4, Para 1\\n\\n16. Repeat Step-3 to Step-15 until training converges\\n\\n\\n----------------------------------------------------\\n[Pseudo-code: Bottom-up DGN]\\n----------------------------------------------------\\n=> Force set m_t^{p_i}=0 by removing Step-12.\\n\\n\\n----------------------------------------------------\\n[Pseudo-code: Bottom-up DGN]\\n----------------------------------------------------\\n=> Force m_t^{C_i}=0 by removing Step-6.\\n\\n\\n----------------------------------------------------\\n[Pseudo-code: No-message DGN]\\n----------------------------------------------------\\n=> Force m_t^{C_i}=0 by removing Step-6.\\n=> Force set m_t^{p_i}=0 by removing Step-12.\"}",
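As a supplement to the pseudo-code above, here is a minimal Python sketch of one timestep of the bottom-up/top-down message passing over the morphology tree $G$. It is our own illustrative rendering, not the authors' code: the `Node` class and function names are invented, `policy1`/`policy2` stand in for \pi_{\theta_1}/\pi_{\theta_2}, and for simplicity the parent message is propagated within the same timestep, whereas Step-12 above delays it by one step.

```python
import numpy as np

MSG_DIM = 32  # messages are length-32 floating-point vectors, per the notation summary

class Node:
    """One limb agent in the morphology tree G (hypothetical helper class)."""
    def __init__(self, state):
        self.state = state        # local observation s_t^i
        self.children = []        # child Nodes in G
        self.msg_up = np.zeros(MSG_DIM)
        self.action = None

def timestep(root, policy1, policy2):
    """One bottom-up pass (Steps 6-7) followed by one top-down pass (Step 8)."""
    def up(node):
        # Aggregate child messages by summation, then compute the message to the parent.
        child_sum = np.zeros(MSG_DIM)
        for c in node.children:
            child_sum += up(c)
        node.msg_up = policy1(node.state, child_sum)   # pi_theta1(s_t^i, m_t^{C_i})
        return node.msg_up

    def down(node, parent_msg):
        # Produce the action and the message forwarded to all children.
        node.action, msg_to_children = policy2(node.msg_up, parent_msg)
        for c in node.children:
            down(c, msg_to_children)

    up(root)
    down(root, np.zeros(MSG_DIM))  # the root has no parent, so its incoming message is zero

# Placeholder policies just to make the sketch runnable.
def policy1(s, child_msg):
    return np.zeros(MSG_DIM)

def policy2(msg_up, parent_msg):
    return np.zeros(5), np.zeros(MSG_DIM)  # 3 torques + attach/detach, message to children

root = Node(np.zeros(4)); root.children = [Node(np.ones(4))]
timestep(root, policy1, policy2)
```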
"{\"title\": \"Thanks for clarification.\", \"comment\": \"I appreciate your effort to improve the paper.\\n\\nI still believe that a clear mathematical explanation is incomplete. How exactly is GCN integrated into PPO-based learning (end of page 5)? I'm not sure how exactly the messages are represented. A clear, step-by-step description of the algorithm (potentially with pseudocode) would help a lot.\"}",
"{\"title\": \"[Authors' Response to R3] Clarifying our contribution; Updated paper with details\", \"comment\": \"We thank the reviewer for the constructive feedback and are glad that the reviewer found the general motivation \\\"interesting\\\" and the video results \\\"attractive\\\". Here we address your specific concerns. Please also see the \\\"common response\\\" posted separately.\", \"r3\": \"\\\"detail of the sensor inputs, action spaces, and the whole algorithm \\u2026 not explained well.\\\"\\n=> Thank you for valuable feedback. We have added full algorithm details in Section 3.3, and implementation details in Section 4 (first paragraph) in the updated draft of the paper. The following sensor/action details have been added to Section 2 (last 2 paragraphs):\", \"action_space\": \"The output action space of each primitive agent contains the 3 continuous torque values (for 3 degrees of freedom) that are to be applied to the motor connected to the agent. In addition, the agent also outputs two binary actions which denote whether to connect or disconnect.\", \"sensory_space\": \"Each agent limb only has access to its local sensory information including: (a) own dynamics, i.e., the location of the limb in 3-D euclidean coordinates, its velocity, angular rotation and angular velocity; (b) a trinary touch sensor at each end to detect whether the end is touching the floor, another limb, or nothing; (c) a very simple point depth sensor that captures the surface height on a 9x9 grid around the limb.\\n\\n\\nFurthermore, we have significantly improved the presentation quality of the overall paper, and would like to request the reviewer to take a second look at it. Thank you!\"}",
"{\"title\": \"[Authors' Response to R2] Discussing message-passing and it's value\", \"comment\": \"We thank the reviewer for the constructive feedback and are glad that the reviewer found the results \\\"interesting\\\" and \\\"quite an encouraging\\\" demonstration of the proof of concept. Here we address your specific concerns. Please also see the \\\"common response\\\" posted separately.\", \"r2\": \"\\\"does message passing lead to a faster training? ... add an experimental evidence\\\"\\n=> Our empirical observation suggests that the message passing is helpful in scenarios where the space of morphologies that perform well at a task is small. In such cases (e.g. standing), message passing indeed leads to faster training (as shown in Figures 3(a), 4(a) in the updated draft). However, message passing does not seem to have any effect on the training speed when many morphological structures can perform well at the same time (e.g., locomotion), as shown in Figure 3(b) of updated draft.\\n\\nFurthermore, we have significantly improved the presentation quality of the overall paper, and would like to request the reviewer to take a second look at it. Thank you!\"}",
"{\"title\": \"[Authors' Response to R1] Updated paper with method, agent, environment details.\", \"comment\": \"We thank you for the constructive feedback and are glad you found the modular morphologies and the proposed idea interesting. Here we address your specific concerns. Please also see the \\\"common response\\\" posted separately.\", \"r1\": \"\\\"certain behaviors are very unphysical or unrealistic eg parts jumping around and linking\\\"\\n=> We implement linking action by attaching the closest limb within a small radius around the parent-node. If no other limb is present within the threshold range, the linking action has no effect. (see Section 2 of updated draft). The linking mechanism is difficult to implement realistically in simulation and it makes things look somewhat unrealistic. \\n\\n\\nWe have also significantly improved the presentation quality of the overall paper, and would like to request the reviewer to take a second look at it. Thank you!\"}",
"{\"title\": \"[Authors' Common Response] Major update to paper; improved presentation and details.\", \"comment\": \"We thank the reviewers for their insightful and helpful feedback. We are glad reviewers found the general motivation of proposed task \\\"interesting\\\" (R1, R2, R3), the video results \\\"attractive\\\" (R3) and \\\"as a proof of concept... quite an encouraging demonstration\\\" (R2). However, all reviewers were concerned about missing details in the method and experiment sections. We apologize for this lack of clarity.\\n\\nMotivated by the reviewers' comments, we have done a major update of the paper, clarifying the experiments and hopefully addressing all the reviewers' questions and concerns. Here we summarize the key changes we have made:\\n\\n1) Improved overall presentation: updated introduction, environment/agents details, method section and discussion of results.\\n2) Added a list of contributions to the end of the introduction\\n3) Replaced generalization graphs with tables: we realized that showing the generalization results as plots was unnecessarily confusing. These plots showed zero-shot generalization performance at each *iteration* of training. However, what is more common is to pick the single best policy from training (the one that achieves highest training reward), then test how well it generalizes to new scenarios. In our revision, we report these numbers in Table 1. (For completeness, we have moved the original plots to the supplementary material.)\\n\\nWe will answer individual questions that the reviewers raised in the respective replies, and look forward to their follow-up advice.\"}",
"{\"title\": \"Collection of primitive agents is interesting, but\", \"review\": \"This paper investigates a collection of primitive agents that learns to self-assemble into complex collectives to solve control tasks.\\nThe motivation of the paper is interesting. The project videos are attractive. However there are some issues:\\n1. The proposed model is specific to the \\\"multi-limb\\\" setting. I don't understand the applicability to other setting. How much generality does the method (or the experiment) have?\\n\\n2. Comparison to other existing methods is not enough. There are many state-of-the-art RL algorithms, and there should be natural extension to this problem setting. I can not judge whether the proposed methods work better or not.\\n\\n3. The algorithm is not described in detail. For example, detail of the sensor inputs, action spaces, and the whole algorithm including hyper-parameters are not explained well.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting idea of dynamical \\\"self-assembly\\\" but unclear implications of the proposed message passing\", \"review\": [\"The paper describes training a collection of independent agents enabled with message passing to dynamically form tree-morphologies. The results are interesting and as proof of concept this is quite an encouraging demonstration.\", \"Main issue is the value of message passing\", \"Although the standing task does demonstrate that message passing may be of benefit. It is unclear in the other two tasks if it even makes a difference. Is grouping behavior typical in the locomotion task or it is an infrequent event?\", \"Would it be correct to assume that even without message passing and given enough training time the \\\"assemblies\\\" will learn to perform as well as with message passing? The graphs in the standing task seem to indicate this. Would you be able to explain and perform experiments that prove or disprove that?\", \"The videos demonstrate balancing in the standing task and it is unclear why the bottom-up and bidirectional messages perform equally well. I would disagree with your comment about lack of information for balancing in the top-down messages. The result is not intuitive.\", \"Given the above, does message passing lead to a faster training? Would you be able to add an experimental evidence of this statement?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting ideas and the setup, but virtually no details provided\", \"review\": \"Summary:\\n--------------\\nThe paper considers the problem of constructing compositional robotic morphologies that can solve different continuous control tasks in a (multi-agent) reinforcement learning setting. The authors created an environment where the actor consists of a number of primitive components which interface with each other via \\\"linking\\\" and construct a morphology of a robot. To learn in such an environment, the authors proposed a graph neural network policy architecture and showed that it is better than the baselines on the proposed tasks.\\n\\nI find the idea of learning in environments with modular morphologies as well as the proposed tasks interesting. However, the major drawback of the paper is the lack of any reasonable details on the methods and experiments. It's hard to comment on the novelty of the architecture or the soundness of the method when such details are simply unavailable.\\n\\nMore comments and questions are below. I would not recommend publishing the paper in the current form.\", \"comments\": [\"----------------\", \"If I understand it correctly, each component (\\\"limb\\\") represents an agent. Can you define precisely (ie mathematically) what the observations and actions of each agent are?\", \"Page 4, paragraph 2: in the inline equation, you write that a sum over actions equals policy applied to a sum over states. What does it mean? My understanding of monolithic agents is that observations and actions must be stacked together. Otherwise, the information would be lost.\", \"Page 4, paragraphs 3-(end of section): if I understand it correctly, the proposed method looks similar to the problem of \\\"learning to communicate\\\" in a cooperative multi-agent setting. This raises the question, how exactly the proposed architecture is trained? Is it joint learning and joint execution (ie there's a shared policy network, observation and action spaces are shared, etc), or not? All the details on how to apply RL to the proposed setup are completely omitted.\", \"Is the topology of the sub-agents restricted to a tree? Why so? How is it selected (in cases when it is not hand-specified)?\", \"From the videos, it looks like certain behaviors are very unphysical or unrealistic (eg parts jumping around and linking to each other). I'm wondering which kind of simulator was used? How was linking defined (on the simulator level)? It would be nice if such environments with modular morphologies were built using the standard simulators, such as MuJoCo, Bullet, etc.\", \"All in all, despite potentially interesting ideas and setup, the paper is sloppily written, has mistakes, and lacks crucial details.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Authors' response to comments\", \"comment\": \"We thank you for taking the time to read our draft. Our answers are as follows:\\n\\n1. \\\"Fixed-Morphology Baseline\\\"\\n=> For the fixed-morphology baseline, we chose the morphology to be a straight line chain of 6-limbs (i.e., a linear morphology) in all the experiments including standing-up and locomotion. This linear-chain may be optimal for standing as tall as possible, but it is not necessarily optimal for learning to stand; same would hold for locomotion.\\n\\n=> Note that DGN also converges to linear-chain morphology to achieve the best reward in case of standing-up task (e.g., see video results on the project website). Moreover, one can confirm that the locomotion task is also solvable with linear-morphology because one of the DGN ablation methods converged to a linear-morphology while doing well at locomotion.\\n\\n=> The underlying PPO code is used off-the-shelf from a publicly available implementation (https://github.com/ikostrikov/pytorch-a2c-ppo-acktr) and is kept same across all methods in the graph without any change.\\n\\n=> That being said, we were indeed surprised at baseline not performing too well and had been working on improving it. We recently found that it is hard to train fixed-morphology baseline for 6-limbs while it works well with 4-limbs. However, in either case, it does not seem to generalize as well. We will include these latest findings in an updated draft of the paper.\\n\\n2. \\\"Role of Message-passing in DGN\\\"\\n=> We would like to emphasize that the message-passing DGN works significantly better than the non-message passing variant in the standing-up task. For instance, there is a significant gap between blue-curve (message passing) and gray-curve (non-message passing) in Figure-1.\\n\\n=> For the locomotion task, in particular, the graphs do indicate that the message-passing does not improve the performance. We investigated this issue in depth and found out that it is possible to do well on the current bumpy-terrain-locomotion task without making any complicated morphology. For instance, any morphology with sufficient height and forward velocity can make comparable progress. We are running experiments by making the terrain even harder to verify indeed whether it is easiness of the task or the overhead of message-passing that makes non-message passing DGN work as well or better in this case.\\n\\n3. Other Clarification:\\n=> Finally, we would like to clarify that the generalization curves denote the performance of different training checkpoints of a model across novel setups without any further fine-tuning on those setups (i.e., zero-shot). Hence, the checkpoint, which performs the best at training, is the one that matters the most in generalization plots instead of the whole x-axis. An alternate and cleaner way to present these generalization results would be to show scores in a table for the best training checkpoint. \\n\\nHope this clarifies the raised questions. We would update the submitted version as the rebuttal period starts with these clarifications and new results.\"}",
"{\"comment\": \"Dear authors,\\n\\nThanks for working on this problem which always takes us back to the classic Karl Sims results. But it seems like there are two very blatant issues in your experiments: \\n\\n1. There are no details on how you picked the fixed morphology for PPO. You have shown really bad training curves for the locomotion task, but it is quite well known now that any reasonable morphology can be trained to locomote when there are no bugs in the implementation of PPO. So, it seems like the fixed morphology was picked to make sure the baseline doesn't work. \\n\\n2. It seems like in most of your experiments the message-passing doesn't matter at all, ie no-message passing baseline works pretty well. So, if all these individual limbs can just independently work to locomote efficiently, the need for the whole DGN architecture is quite questionable.\", \"title\": \"Some deep issues with your results\"}"
]
} |
|
ByleB2CcKm | Learning Procedural Abstractions and Evaluating Discrete Latent Temporal Structure | [
"Karan Goel",
"Emma Brunskill"
] | Clustering methods and latent variable models are often used as tools for pattern mining and discovery of latent structure in time-series data. In this work, we consider the problem of learning procedural abstractions from possibly high-dimensional observational sequences, such as video demonstrations. Given a dataset of time-series, the goal is to identify the latent sequence of steps common to them and label each time-series with the temporal extent of these procedural steps. We introduce a hierarchical Bayesian model called Prism that models the realization of a common procedure across multiple time-series, and can recover procedural abstractions with no supervision. We also bring to light two characteristics ignored by traditional evaluation criteria when evaluating latent temporal labelings (temporal clusterings) -- segment structure and repeated structure -- and develop new metrics tailored to their evaluation. We demonstrate that our metrics improve interpretability and ease of analysis for evaluation on benchmark time-series datasets. Results on benchmark and video datasets indicate that Prism outperforms standard sequence models as well as state-of-the-art techniques in identifying procedural abstractions. | [
"learning procedural abstractions",
"latent variable modeling",
"evaluation criteria"
] | Accept (Poster) | https://openreview.net/pdf?id=ByleB2CcKm | https://openreview.net/forum?id=ByleB2CcKm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rkgO_O9zxE",
"B1lfkJngeV",
"HyxQ798507",
"Ske7htL5AX",
"B1x2lF85Rm",
"SyxHYuUcAQ",
"HygJ8OLcCQ",
"ryxCFI8907",
"Hyx08Tcp2Q",
"ryxNNJIs2m",
"BkgfMLD42X"
],
"note_type": [
"official_comment",
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544886384072,
1544761050324,
1543297563129,
1543297450721,
1543297268479,
1543297149118,
1543297095303,
1543296645866,
1541414229702,
1541263148355,
1540810250420
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1508/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1508/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1508/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1508/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1508/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1508/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"modestly revising up my review\", \"comment\": \"The author responses to my review were thorough and compelling. The revisions made the paper stronger.\\n\\nOne of my main complaints about the paper was that it might not be a good subject fit for ICLR. That the other reviewers did not raise the same objection (indeed thought the opposite: \\\"This work is appropriate for ICLR.\\\"), and gave positive reviews, leads me to believe I could be wrong about the subject fit. That is, my confidence in my evaluation is now lower.\\n\\nI still believe the contribution in this manuscript would be much stronger if (1) it contained user studies that showed the proposed metric corresponds to some human perception of the goodness of segmentation or (2) it showed that improvements on the metric correlated with some kind of downstream task performance. Without a compelling demonstration of strengths like these, it seems much less likely that the metric or the proposed method will impact others' future work.\\n\\nI'll revise my review score up to be on the negative side of neutral, and revise down my confidence. That way I expect my review wouldn't be enough to sink the submission if another reviewer wants to champion it.\"}",
"{\"metareview\": \"While the reviews of this paper were somewhat mixed (7,6,4), I ended up favoring acceptance because of the thorough author responses, and the novelty of what is being examined.\\n\\nThe reviewer with a score of 4, argues that this work is not a good fit for iclr, but, although tailoring new metrics may not be a common area that is explored, I don't believe that it's outside the range of iclr's interest, and therefore also more unique.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-Review for Learning Procedural Abstractions\"}",
"{\"title\": \"Author Response (clarifications)\", \"comment\": \"We also discuss additional clarifications to questions raised by you,\\n\\nChoosing \\\\mathcal{H} (S3.1): \\nWhile the space of alternative choices is enormous, the main desiderata is that the chosen scoring function should look for overlapping sequences of tokens in the two segments, so common substring/subsequence style functions are most suitable. Using the heaviest common substring allows us to take into account the relative length of the matched token sequence. \\n\\n\\nClarification for H (S3.2): \\nThis follows from the definition of conditional entropy, as laid out in prior work such as Rosenberg & Hirschberg (2007), Meila (2007) and Dom (2001), who adopt these definitions for the standard clustering setting.\", \"algebraic_forms_of_criteria\": \"The choice of algebraic form is to represent them as normalized mutual information criteria. The metric that is canonically called NMI is one instance of a family of such criteria. The criteria we derived are part of this family and can be rewritten as a mutual information term divided by some normalization. For instance,\\n\\nLASS \\t= 1 - \\\\frac{H(A|B) + H(B|A)} {H(A) + H(B)}\\n = \\\\frac{H(A) + H(B) - H(A|B) - H(B|A)} {H(A) + H(B)}\\n = \\\\frac{2 * I(A;B)} {H(A) + H(B)}\\n\\nThis relates them to the large body of previous work in clustering evaluation using information-theoretic criteria (see Table 2 in Vinh, Epps and Bailey (2010) for a review).\\n\\n-------------------------------------------------------\\nWe would like to conclude by thanking you for the helpful suggestions. We hope that our response addresses the concerns raised by you and that you will reconsider your assessment of our work.\"}",
"{\"title\": \"Author Response (method)\", \"comment\": \"We would like to highlight that we see our new evaluation criteria as our primary contribution, and here we are of course happy to clarify questions about the algorithm we introduced (Prism), which provides a small concrete improvement in trying to model procedural structure.\", \"segment_length_generation\": \"We have updated our discussion in Section 4 to clarify that the generative process we propose (sample m Categoricals from the prior and sort them) is exactly equivalent to generating segment lengths from a Multinomial distribution (over m draws) with a Dirichlet prior. However, representing the process in the way we have written it improved inference efficiency with Gibbs sampling -- resampling the segment lengths requires only computing the likelihood of data points at segment boundaries, which is independent of the length of the time-series and far more efficient (we also discuss this in the appendix).\", \"baseline_suggestion\": \"The baseline suggested is interesting -- however, our concern is that it cannot represent repeated structure since the bidiagonal structure only allows forward transitions. As a key part of our proposed metrics is to be able to capture repeated structure, it would be somewhat impoverished in comparison to our method and the HMM models we compare to.\", \"generate_procedure_as_hmm\": \"The decision to generate each step in the procedure (p_1, \\u2026, p_s) independently was a conscious design choice to improve the model\\u2019s ability to recover non-Markov segmentations. Non-Markov processes can be made Markov by expanding the state (to include the history) but this can both increase the amount of data needed to fit a good model (since there are now more states) and make it harder to identify repeated structure. Alternatively, fitting a non-Markov procedure using a Markov process can result in learning a highly stochastic transition model that does not provide a good fit to the data. We explored both of these possibilities and found as expected that they did not do well in the simulations we considered, though we completely agree that we could also use a Markov process to generate the procedure, if it is present. We have included a small simulation study in the appendix that highlights this point.\", \"presence_of_self_transitions\": \"Our model allows self-transitions which can lead to adjacent segments being assigned the same label (effectively causing them to be condensed into a single segment). A version of our model that rules out self-transitions performed similarly, so we have not included that in the paper to avoid confusion, since inference for that model is far more complex and requires the introduction of auxiliary variables in the model.\\n\\n\\nMini-batch learning/neural net observations: \\nWhile we expect that these extensions will allow us to scale directly to high-dimensional data and improve performance, our focus was to establish the need for learning procedural abstractions. Procedural tasks are extremely common, and our experiments show the benefit of baking in structural assumptions into the data-generating process. We used the same observational model for all compared methods to disentangle this benefit. Recent related work such as Johnson et al. (2016) (\\u201cComposing graphical models with neural networks for structured representations and fast inference\\u201d) could provide a way of combining the kind of structured model we have described with neural net observations. 
Another alternative would be to design a variational autoencoder using the Gumbel-Softmax trick to represent the discrete variables. However, these are non-trivial extensions that require careful thought so we defer them to future work.\", \"hyperparameters_in_experimental_evaluation\": \"We have added further experimentation to show Prism\\u2019s sensitivity to the number of segments (s) and clusters (K) in Section 6 and Figure 7. We found that Prism\\u2019s performance is relatively insensitive to the number of segments as long as it is greater than the number in ground-truth, suggesting one can set it to a large value. Prism\\u2019s performance is also stable across a wide range of K.\"}",
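The claimed equivalence between the sorted-Categorical construction and a Multinomial draw over segment lengths is easy to verify empirically. In the sketch below (our own illustration, with arbitrary sizes), the counts of m i.i.d. Categorical draws are exactly one Multinomial sample; sorting the draws only lays them out as contiguous segments:

```python
import numpy as np

rng = np.random.default_rng(0)
s, m = 4, 1000                       # s segment slots, m draws (e.g., timesteps)
pi = rng.dirichlet(np.ones(s))       # Dirichlet prior over segment proportions

# The process as written in the response: m iid Categorical draws, then sort.
draws = np.sort(rng.choice(s, size=m, p=pi))
lengths_sorted = np.bincount(draws, minlength=s)   # segment k's length = #draws equal to k

# The equivalent direct draw: one Multinomial over m trials.
lengths_multinomial = rng.multinomial(m, pi)

print(lengths_sorted)       # one sample of segment lengths
print(lengths_multinomial)  # a different sample from the same distribution
```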
"{\"title\": \"Author Response (metrics)\", \"comment\": \"Thank you for your detailed comments! Here\\u2019s a response to the concerns that you raised,\", \"response_on_metrics\": \"Indeed, we do view our primary contribution as providing a new performance metric for better assessing the quality of extracting latent structure in temporal sequences. We appreciate the suggestion to (i) more clearly articulate the design space; (ii) discuss alternative formulations and (iii) validate other proposals. We have updated our paper in Sections 3 & 5 as well as Figure 6 to address this and we briefly summarize this here.\\n\\nWe began from the stance that something seemed to be missing in existing evaluation criteria that is important to capture in temporal data -- segment and repeated structure. We wanted to draw on the widely used criteria that have been designed for evaluating clusterings (Rosenberg and Hirschberg 2007) and consider them in the temporal setting. In doing so we encountered a multi-objective problem, in how to weigh the new criteria that we designed (RSS and SSS). Our solution to introduce a tradeoff parameter (\\\\beta) follows the approach laid out in the key paper on (non-temporal) clustering evaluation by Rosenberg and Hirschberg (2007) that introduces the V-measure for clustering evaluation. That paper includes a tradeoff parameter between completeness and homogeneity (the constituent criteria for V-measure) with the harmonic mean (\\\\beta=1) kept as the default, allowing them to \\u201cprioritize one criterion over another, depending on the clustering task and goals.\\u201d \\n\\nBased on your and other reviewers\\u2019 excellent feedback to further explore how these settings impact the resulting metric, we have also introduced a new sensitivity analysis for the tradeoff parameter. While determining the right value of \\\\beta for evaluation is a function of the problem and end-goal (e.g. repeated structure may have no importance in changepoint segmentation, and should be disregarded), we show that \\\\beta=1 is a problem-agnostic compromise that works well in practice. To this end, our sensitivity analysis answers the following question: suppose \\\\beta\\u2019 =/= 1.0 is the right value of the tradeoff parameter for a particular problem -- how similar does the metric perform (in terms of the ranking of methods) by using \\\\beta=1.0? We find, quite naturally, that at extreme values of \\\\beta (0 or \\\\infty) the methods may be ranked quite differently than by \\\\beta=1.0. However, a large range of \\\\beta values can be approximated well by using \\\\beta=1.0 (Figure 6i in the revised paper). This shows the general robustness of our metric when used to compare methods for different applications.\\n\\nWe completely agree that it would be interesting to conduct a study in which human judgments are compared to the evaluation criteria to validate them qualitatively. Interestingly, prior work in the area such as Rosenberg and Hirschberg (2007), Meila (2007), Dom (2001) also did not conduct user studies but instead justified their metric through examples and direct reasoning about how it fulfils the desiderata laid out; we followed this in our work, with the additional inclusion of a large-scale comparison of methods on real-world datasets, as well as validation of the tradeoff parameter that we introduced.\\n\\nWe believe those working on time-series data will benefit from having access to the tailored evaluation criteria we have introduced. 
These evaluation criteria identify and target specific characteristics of the temporal clustering setting, something that has not been done systematically in the past.\"}",
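As a concrete reading of the \beta tradeoff discussed above: the paper defines the exact form of TSS, but the generic combination follows the weighted harmonic mean used by the V-measure of Rosenberg & Hirschberg (2007). Below is a sketch with `rss` and `sss` as stand-in constituent scores; which of the two \beta weights is our assumption:

```python
def f_beta(x, y, beta=1.0):
    """Weighted harmonic mean in the V-measure style.

    beta -> 0 recovers x alone, beta -> infinity recovers y alone,
    and beta = 1 is the problem-agnostic default argued for above.
    """
    if x == 0.0 and y == 0.0:
        return 0.0
    return (1.0 + beta) * x * y / (beta * x + y)

rss, sss = 0.8, 0.6   # hypothetical repeated-structure and segment-structure scores
for beta in (0.0, 1.0, 1e9):
    print(beta, round(f_beta(rss, sss, beta), 3))   # 0.8, 0.686, 0.6
```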
"{\"title\": \"Author Response (continued)\", \"comment\": \"Hyperparameters in experimental evaluation:\\nWe have added further experimentation to show Prism\\u2019s sensitivity to the number of segments (s) and clusters (K). We found that Prism\\u2019s performance is relatively insensitive to the number of segments as long as it is greater than the number in ground-truth, suggesting one can set it to a large value. Prism\\u2019s performance is also stable across a wide range of K.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Thank you for your feedback! Here\\u2019s a response to the concerns that you raised,\", \"existing_segmentation_criteria\": \"We have included additional descriptive details in the revision in Section 2 & 3 on the relationship between temporal clustering and segmentation. Thank you for pointing us to the paper by Killick, Fearnhead, & Eckley (2011). We have included a discussion of this in the paper in Section 3. Their work uses criteria that are not very similar to those that we have proposed -- e.g. a criteria from their work (which is common in changepoint detection) is to evaluate whether a changepoint occurred close to (within some tolerance interval) one in ground-truth and measure the precision/recall. However, this is (i) sensitive to the tolerance interval, which is problem specific; (ii) an all or nothing metric which cannot distinguish small degradations or changes in the temporal clustering, unlike our approach.\", \"reliance_on_a_tradeoff_parameter\": \"Currently, the paper studies 3 settings of this \\\\beta parameter -- 0, 1 and \\\\infty. Our hope with these settings was to expose the behavior of the constituent metrics in a problem-agnostic way. \\n\\nOur original goal was to draw on the widely used criteria that have been designed for evaluating clusterings (Rosenberg and Hirschberg 2007) and consider them in the temporal setting. In doing so we encountered a multi-objective problem in how to weigh the new criteria that we designed (RSS and SSS). Our solution to introduce a tradeoff parameter (\\\\beta) follows the approach laid out in the key paper on (non-temporal) clustering evaluation by Rosenberg and Hirschberg (2007) that introduces the V-measure for clustering evaluation. That paper includes a tradeoff parameter between completeness and homogeneity (the constituent criteria for V-measure) with the harmonic mean (\\\\beta=1) kept as the default, allowing them to \\u201cprioritize one criterion over another, depending on the clustering task and goals.\\u201d \\n\\nBased on your and other reviewers\\u2019 excellent feedback to further explore how these settings impact the resulting metric, we have also introduced a new sensitivity analysis for the tradeoff parameter. While determining the right value of \\\\beta for evaluation is a function of the problem and end-goal (e.g. repeated structure may have no importance in changepoint segmentation, and should be disregarded), we show that \\\\beta=1 is a problem-agnostic compromise that works well in practice. To this end, our sensitivity analysis answers the following question: suppose \\\\beta\\u2019 =/= 1.0 is the right value of the tradeoff parameter for a particular problem -- how similar does the metric perform (in terms of the ranking of methods) by using \\\\beta=1.0? We explored this issue and show the results in Figure 6i in our revised paper. As expected, at extreme values of \\\\beta (0 or \\\\infty) the methods may be ranked differently than by \\\\beta=1.0. However, encouragingly, a large range of \\\\beta values can be approximated well by using \\\\beta=1.0 . This shows the general robustness of our metric when used to compare methods for different applications.\\n\\n\\nInclusion of the Adjusted Rand Index (ARI):\\nBased on your suggestion, we have added evaluation with respect to the ARI in the revision, including discussion in the results section (Section 5 text, Fig 6). 
We found that the ARI tends to mediate the effect of changing the number of clusters compared to NMI as you\\u2019d suggested. However, it suffers from the same problems as NMI in evaluating temporal clusterings, without the benefit of having constituent criteria that can be analyzed and interpreted.\", \"difficulty_of_analyzing_munkres\": \"We have improved the clarity of the argument against the difficulty of using the Munkres metric. The Munkres method has two main issues: (i) Since it relies on computing a matching between ground-truth labels and clusters, the score is agnostic to changes in clusters that are not matched with any ground-truth label. This \\u201cproblem of matching\\u201d was pointed out in Rosenberg and Hirschberg (2007) and Meila (2007) for standard clustering settings. (ii) A contingency matrix is fed to the Munkres method for computing the optimal correspondences, ignoring temporal structure.\", \"segment_length_generation\": \"We have updated our discussion to clarify that the generative process we propose (sample m Categoricals from the prior and sort them) is exactly equivalent to generating segment lengths from a Multinomial distribution (over m draws) with a Dirichlet prior. However, representing the process in the way we have written it improved inference efficiency with Gibbs sampling -- resampling the segment lengths requires only computing the likelihood of data points at segment boundaries, which is independent of the length of the time-series and far more efficient (we also discuss this in the appendix).\"}",
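To make the two Munkres issues above concrete: the score is computed from the contingency matrix via an optimal one-to-one assignment, so all temporal ordering is discarded, and any predicted cluster left unmatched contributes nothing. A minimal sketch using SciPy's assignment solver (our own illustration, not the paper's evaluation code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def munkres_accuracy(y_true, y_pred):
    """Accuracy under the best one-to-one matching of predicted clusters to labels."""
    labels, clusters = np.unique(y_true), np.unique(y_pred)
    # Contingency matrix: built from co-occurrence counts alone, so issue (ii)
    # -- temporal structure -- is ignored by construction.
    C = np.array([[np.sum((y_true == l) & (y_pred == c)) for c in clusters]
                  for l in labels])
    rows, cols = linear_sum_assignment(-C)   # maximize matched counts
    return C[rows, cols].sum() / len(y_true)

y_true = np.array([0, 0, 0, 1, 1, 1])
y_pred = np.array([2, 2, 2, 0, 0, 1])        # cluster 1 ends up unmatched
print(munkres_accuracy(y_true, y_pred))      # 5/6; issue (i): moving the unmatched
                                             # frame to any other unmatched cluster
                                             # leaves the score unchanged
```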
"{\"title\": \"Author Response\", \"comment\": \"Thank you for your encouraging comments! Here\\u2019s a response to the concerns that you raised,\", \"incorporating_nonparametric_priors\": \"We completely agree incorporating a nonparametric prior would be an interesting extension to our approach. We chose not to incorporate this in our current work for several reasons:\\n\\n(i) Our contribution was to establish the benefit of incorporating additional assumptions about the underlying procedure with the ability to flexibly learn non-Markov procedures. Thus, keeping the model as simple as possible allows us to isolate the difference that our model makes compared to existing methods that typically make more restrictive, Markovian assumptions. Using a nonparametric prior adds an additional confounder, complicating our ability to understand whether the benefit is caused by the prior, or by the modeling assumptions used. We anticipated incorporating a nonparametric prior would offer no additional benefit beyond the ability to flexibly set some quantities based on data. Fox et al.\\u2019s central contribution was to describe new inference methods for such priors, while the focus of our work is different -- understanding where existing modeling fall short in modeling procedural data, and addressing them.\\n\\n(ii) Even without a nonparametric prior, Prism has the ability to \\u2018skip\\u2019 steps in the procedure. This can be realized since the model is able to set segment lengths to be 0 for some steps in the procedure. Thus, we can achieve at least some of the flexibility afforded by the nonparametric prior by setting the number of segments to be large. We have added further experimentation in the revision to show Prism\\u2019s sensitivity to the number of segments (s) -- see Figure 7. We found that Prism\\u2019s performance is relatively insensitive to the number of segments as long as it is larger than the number in ground-truth.\", \"distinction_from_fox_et_al\": \"A related concern that was pointed out is how Fig. 5 is distinct from the work of Fox et al. Fox et al. primarily target recovering a faithful generative model for the sequences. In contrast we focus on identifying the latent structure in the given sequences. In particular, for a procedure identification setting, we describe how sharing a common procedure (not done in Fox et al.) and separating only the realizations (also different from Fox et al., which assumes a Markov stochastic transition matrix) can improve our ability to recover the latent segmentation. Technically these distinctions lie in how we model the data-generating process, specifically the local assignments of each data-point to a latent discrete cluster label. Fox et al. are concerned with the specification of and inference for, nonparametric priors that can be used with autoregressive generative HMM/SLDS models, in contrast to our work. We believe that identifying latent segmentation structure alone (even sans a generative model) is often of useful value, such as for the important application of activity understanding, or potentially for identifying building blocks in imitation learning.\"}",
"{\"title\": \"Learning procedural abstractions and evaluating discrete latent temporal structure\", \"review\": \"In \\\"Learning procedural abstractions and evaluating discrete latent temporal structure\\\" the authors develop a hierarchical Bayesian model for patterns across time in video data. They also introduce new metrics for understanding structure in time series (completeness and homogeneity). This work is appropriate for ICLR. They provide some applications to robotics, suggesting that this could be used to teach robots to act in environments by learning from videos.\\n\\nThis manuscript paid quite close attention to quality of segmentation, in which actions in videos are decomposed into component parts. It is quite hard to determine groundtruth in such situations and many metrics abound, and so a thorough discussion and comparison of metrics is useful.\\n\\nThe state of the art for Bayesian hierarchical models for segmentation is Fox et al., which is referenced heavily by this work (including the use of test data prepared in Fox et al.) I wonder why the authors drop the Bayesian nonparametric nature of the hierarchy in the section \\\"Modeling realizations in each time-series\\\" (i.e., for Fox et al., the first unnumbered equation in this section would have had arbitrary s).\\n\\nI found that the experiments were quite thorough, with many methods and metrics compared. However, I found the details of the model to be quite sparse, for example it's unclear how Figure 5 is that much different from Fox et al. But, overall I found this to be a strong paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"some good ideas, but performance metric isn't sufficiently compared or validated, model contributions aren't enough\", \"review\": \"This is a hybrid paper, making contributions on two related fronts:\\n1. the paper proposes a performance metric for sequence labeling, capturing salient qualities missed by other metrics, and\\n2. the paper also proposes a new sequence labeling method based on inference in a hierarchical Bayesian model, focused on simultaneously labeling multiple sequences that have the same underlying procedure but with varying segment lengths.\", \"this_paper_is_not_a_great_topic_fit_for_iclr\": \"it's primarily about a hand-designed performance metric for sequence labeling and a hierarchical Bayesian model with Gaussian observations and fit with Gibbs sampling in a full-batch setting. The ICLR 2019 reviewer guidelines suggest \\\"Ask yourself: will a substantial fraction of ICLR attendees be interested in reading this paper?\\\" and based on my understanding of the ICLR audience I suspect not. Based on looking at past ICLR proceedings, this paper's topic and collection of techniques is not in the ICLR mainstream (though it's not totally unrelated). The authors could convince me that I'm mistaken by pointing out closely related ICLR papers (e.g. with a similar mix of techniques in their methods, or similarly proposing a hand-designed performance metric); as far as I can tell, none of the papers cited in the references are from ICLR, but rather from e.g. NIPS, AISTATS, and IEEE TPAMI, which I believe would be better fits for this kind of work.\\n\\nOne way to make this work more relevant to the ICLR audience would be to add feature learning (especially based on neural network architectures). That might also entail additional technical contributions, like how to fit models like these in the minibatch setting (where the current Gibbs sampling method might not apply).\\n\\n\\nOn the proposed performance metric, the discussion of existing metrics as they apply to the example in Fig 3 was really helpful. (I assume, but didn't check, that the authors' characterization of the published performance metrics is accurate, e.g. \\\"no traditional clustering criteria can distinguish C_2 from C_3\\\".) The proposed metric seems to help.\\n\\nBut it's a bit complicated, with several free design decisions involved (e.g. choosing the scoring function \\\\mathcal{H} in Sec 3.1, the choice of conditional entropy H in Sec 3.2, the choice of \\\\beta in Sec 3.3, the choice of the specific algebraic forms of RSS, LASS, SSS, and TSS). Certainly the proposed metrics incorporate the kind of information that the authors argue can be important, but the design details of how that information is summarized into a single number aren't really explored or weighed against alternative designs choices. \\n\\nIf a primary aim of this paper is to propose a new performance metric, and presumably to have it catch on with the rest of the field, then the contribution would be much greater if the design space was clearly articulated, alternatives were considered, and multiple proposals were validated. 
Validation could be done with human labelers ranking the intuitive 'goodness' of labeling results (and then compared to rankings derived from the proposed performance metrics), and with comparing how the metrics correlate with performance on various downstream tasks.\\n\\nAnother idea is to take advantage of a better segmentation performance metric and use it to automatically tune the hyperparameters of the sequence labeling methods considered in the experiments section. (IIUC hyperparameters were set by hand in the experiments.). That would make for more interesting experiments that give a more comprehensive summary of how these techniques can compare.\\n\\nHowever, as it stands, while the performance metric itself may have merit, in this paper it is not sufficiently well validated or compared to alternatives.\\n\\n\\nOn the hierarchical Bayesian model, the current model design andinference algorithm are okay but don't constitute major technical contributions. I was surprised by some model details: for example, in \\\"Modeling the procedure\\\" of Sec 4.1, it would be much more satisfying to generate the (p_1, ..., p_s) sequence from an HMM instead of sampling the elements of the sequence independently, dropping any chance to learn transition structure as part of the Bayesian inference procedure. More importantly, it wasn't made clear if 'self-transitions' where p_s = p_{s+1} were ruled out, though such transitions might confuse the model's semantics. As another example, in \\\"Modeling the realizations in each time-series\\\" of Sec 4.1, the procedure based on iid sampling and sorting seems unnatural, and might make inference more complex. Why not just sample the durations directly (rather than indirectly defining them via sorting independently-generated indices)? If there's a good reason, it should probably be discussed (e.g. maybe parameterizing the durations directly would make it easier to express prior distributions over *absolute* segment lengths, but harder to express distributions over *relative* segment lengths?). Finally, the restriction to conditionally iid Gaussian observations was disappointing.\\n\\nThe experimental results were solid on the task for which the model's extra assumptions paid off, but that's a niche comparison.\", \"one_suggestion_on_the_baseline_front\": \"you can tie multiple HMMs to have the same procedure (i.e. the same state sequences not counting repeats) by fixing the number of states to be s (the length of the procedure sequence) and fixing the transition matrices to have an upper-bidiagonal support structure. A similar construction can be used for HSMMs. I think a natural Gibbs sampling procedure would emerge. This approach is probably written down in the HMM literature (it seems every conceivable HMM variant has been studied!) but I don't have a reference for it.\\n\\n\\nOverall, this paper needs more work.\", \"minor_suggestions\": [\"maybe refer to \\\"segment structure\\\" (e.g. in Sec 3), as \\\"changepoint structure\\\" (and consider looking into changepoint performance metrics if you haven't already)\", \"if you used code from other authors in your baselines, it would be good to cite that code (e.g. GitHub links)\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
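The tied-HMM baseline suggested in the review above is straightforward to write down: with s states (one per procedure step) and upper-bidiagonal transition support, every sampled state sequence visits steps 1..s in order, while segment lengths can vary per sequence. A sketch of the transition structure (our own rendering of the suggestion; as the authors note in their response above, this support cannot represent repeated steps):

```python
import numpy as np

def bidiagonal_transition(s, p_stay=0.9):
    """Upper-bidiagonal transition matrix over s procedure steps.

    State i either self-transitions (prob p_stay) or advances to i+1,
    so the step order is shared across all tied HMMs while segment
    lengths still vary from sequence to sequence.
    """
    A = np.zeros((s, s))
    for i in range(s - 1):
        A[i, i] = p_stay
        A[i, i + 1] = 1.0 - p_stay
    A[s - 1, s - 1] = 1.0      # final step is absorbing
    return A

print(bidiagonal_transition(4))
```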
"{\"title\": \"An interesting contribution on temporal clustering which consists off a new quality criterion and off a new model.\", \"review\": \"This paper describes two distinct contributions: a new compound criterion for comparing a temporal clustering to a ground truth clustering and a new bayesian temporal clustering method. Globally the paper is clear and well illustrated.\\n1) About the new criterion:\\n*pros: *\\n a) as clearly pointed out by the authors, using standard non temporal clustering comparison metrics for temporal clustering evaluation is in a way \\\"broken by design\\\" as standard metrics disregard the very specificity of the problem. Thus the introduction of metrics that take explicitly into account time is extremely important.\\n b) the proposed criterion combines two parts that are very important: finding the length of the stable intervals (i.e. intervals whose instants are all classified into a single cluster) and finding the sequence of labels. \\n*cons:*\\n a) while the criterion seems new it is also related to criteria used in the segmentation literature (see among many other https://doi.org/10.1080/01621459.2012.737745) and it would have been a good idea to discuss the relation between temporal clustering and segmentation, even briefly.\\nb) the reliance on a tradeoff parameter in the final criterion is a major problem: how shall one chose the parameter (more on this below)? The paper does not explore the effect of modifying the parameter.\\nc) in the experimental section, TSS is mostly compared to NMI and to optimal matching (called Munkres here). Even considering the full list of criteria in the appendix, the normalized rand index (NRI) seems to be missing. This is a major oversight as the NRI is very adapted to comparing clusterings with different number of clusters, contrarily to NMI. In addition, the authors claim that optimal matching is completely opaque and difficult to analyse, while on the contrary it gives a proper way of comparing clusters from different clusterings, enabling fine grain analysis. \\n\\n2) about the new model\\n*pros*: \\n a) as far as I know, this is indeed a new model\\n b) the way the model is structured emphasizes segmentation rather than temporal dependency: the so called procedure is arbitrary and no dependency is assumed from one segment to another. In descriptive analysis this is highly desirable (as opposed to say HMM which focuses on temporal dependencies). \\n*cons*\\na) the way the length of the segments in the sequence are generated (with sorting) this a bit convolved. Why not generating directly those lengths? What is the distribution of those lengths under the sampling model? Is this adapted? \\nb) I find the experimental evaluation acceptable but a bit poor. In particular, nothing is said on how a practitioner would tune the parameters. I can accept that the model will be rather insensitive to hyper-parameters alpha and beta, but I've serious doubt about the number of clusters, especially as the evaluation is done here in the best possible setting. In addition, the other beta parameter (of TSS) is not studied.\", \"minor_point\": [\"do not use beta for two different things (the balance in TSS and the prior parameter in the model)\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
ryxeB30cYX | Stochastic Quantized Activation: To prevent Overfitting in Fast Adversarial Training | [
"Wonjun Yoon",
"Jisuk Park",
"Daeshik Kim"
] | Existing neural networks are vulnerable to "adversarial examples"---inputs with maliciously designed small perturbations that induce misclassification by the networks. The most investigated defense strategy is adversarial training, which augments training data with adversarial examples. However, applying single-step adversaries in adversarial training does not improve the robustness of the networks; instead, it can even cause the networks to overfit. In contrast to single-step training, multi-step training achieves state-of-the-art performance on MNIST and CIFAR10, yet it needs a massive amount of time. Therefore, we propose a method, Stochastic Quantized Activation (SQA), that solves the overfitting problem in single-step adversarial training and quickly achieves robustness comparable to multi-step training. SQA attenuates adversarial effects by providing random selectivity to activation functions and allows the network to learn robustness with only single-step training. Throughout our experiments, our method demonstrates robustness against one of the strongest white-box attacks that is comparable to PGD training, but with much less computational cost. Finally, we visualize how the network with SQA learns to handle strong adversaries, which differs from existing methods. | [
"adversarial examples",
"deep learning"
] | Reject | https://openreview.net/pdf?id=ryxeB30cYX | https://openreview.net/forum?id=ryxeB30cYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Bke8gY5HlV",
"H1gel-SAn7",
"S1lm07nnnQ",
"B1xyY2gFhQ"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545083118429,
1541456104311,
1541354442743,
1541110902923
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1507/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1507/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1507/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1507/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"While the paper contains interesting ideas, the reviewers suggest improving the clarity and experimental study of the paper. The work holds promises but is not ready for publication at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"reject\"}",
"{\"title\": \"interesting idea, but too much of an accuracy hit, and a problem with clarity\", \"review\": \"The paper proposes a model to improve adversarial training, by introducing random perturbations in the activations of one of the hidden layers. Experiments show that robustness to attacks can be improved, but seemingly at a significant cost to accuracy on non-adversarial input.\\n\\nI have not spent significant time on adversarial training, and review the paper under the following understanding: It was observed that the decision regions of a class are sprinkled with \\\"holes\\\" that get misclassified. These holes are neither naturally occuring. Their existence allows a potential attacker to coerce a model into mis-classifying by providing specially crafted inputs, in order to attain a benefit. Therefore, those holes are called \\\"adversarial\\\" examples. The risk is heightened by the fact that adversarial examples are commonly not mis-classified by humans (or even detectable by the eye). To \\\"plug\\\" the holes, one includes adversarial examples in the training, called \\\"adversarial training.\\\" A resulting system should now have a much improved accuracy for the \\\"holes\\\", while ideally not affecting classification accuracy for the natural examples, which will continue to constitute nearly 100% of the samples the system will be used on. (The \\\"hole\\\" metaphor may not be entirely appropriate, since the space of adversarial examples that are neither misclassified by humans nor detectable is likely much larger than the space of naturally occuring samples.)\\n\\nThe paper proposes a way of plugging the hole by quantizing layer activations. The results show that this makes the system robust to adversarial attacks.\", \"clarity\": \"I spent a lot of time figuring out, as someone who has not spent a lot of time with this, what is being evaluated. It is very unclear whether the non-clean systems in Tables 1 and 2 do apply FGSM etc. also in training (in combination with SQA), or only to the test samples. Table 4, the wording in 4.2, and the wording of the Conclusion indicate that they are. But then, where do I find the accuracy on the naturally-occuring (non-manipulated) samples?\\n\\nThe only combination of interpretations that makes sense in the end is to parse \\\"The networks are all trained with fast single-step adversaries\\\" as to mean \\\"The networks are all trained with FGSM\\\", and that the non-Clean columns in Table 1 refer to test data perturbed by the respective method, while the Clean column shows the accuracy on the natural data. This *must* be clarified in the final version, as it took way too long to understand this. I strongly suggest to do this with the naming: change small_full to small_FGSM, and small_SQA to small_SQA+FGSM.\\n\\nAssuming I figured this out right, the tables still lack the baseline accuracy of doing nothing (clean-clean), so one can know how much the nearly-100% use case gets affected.\", \"results\": \"The second concern I have is that, assuming my reading of the results as described above is correct, that the SQA method quite severely affects accuracy on the clean test data, e.g. increasing the error rate on CIFAR by 72% (from 12.33% to 17.06%). There must be a discussion on why such severe performance hit is worth it, especially since there often is an accuracy cliff below which there is a steep loss of usability of a system. 
For example, according to my personal experience in speech recognition, the difference between 12% and 17% is the difference between a decent and an unacceptable user experience (also considering that a few percent of errors are caused by ambiguities in the ground-truth annotations themselves, which should be the case for CIFAR as well).\\n\\nFigure 1 seems a little misleading in this regard since the areas of good accuracy are very condensed. It should be rescaled, as only the area close to the optimum performance is relevant. It does not matter whether we degrade from 99.x% to 77% or 58%, or even 95-ish. All of those hurt performance to the point of not being useful.\\n\\nIt would be nice to discuss what an accuracy metric that is useful for the end user would be. It would have to be a combination of the expected cost of a misclassification of a natural image and the expected cost caused by attacks. A good method would improve this overall metric. A paper attempting to address adversarial attacks should at least discuss this topic briefly, in my view.\", \"technical_soundness\": \"A technical question I have is whether the min-max normalization may be too susceptible to outliers. A single extreme activation can drastically shift the threshold for \\\\lambda=1. How about a mean-var normalization? If there is batch or layer normalization in the system, your activations may already be scaled into a consistent range anyway, which might allow you to use a constant scaling on top of that.\", \"another_question_i_have_is\": \"quantization is often modeled as adding uniform noise. Why not add noise directly? And why uniform noise? For example, would computing g = h + Gaussian noise with std dev = (max-min)/lambda work equally well? What is special about quantization?\", \"and_another_technical_question\": \"My guess is that the notable loss of accuracy is caused by the strong quantization (two values only in the case of \\\\lambda=1). I think the paper should show results for larger lambdas, specifically whether there is a better trade-off point between the accuracy loss from quantization vs. robustness to adversarial samples.\\n\\nSection 3/SQA: \\\"This is the reason why we rescale g^i to the original range of h^i\\\" This seems wrong. I think the main reason is that one would not want to totally change the dynamic ranges of the network, as it may affect convergence merely by scaling. You'd want to limit any impact on convergence to the quantization itself.\", \"significance\": \"I think the significance is limited. Given that the accuracy impact of the mitigation method is very large, I do not consider this paper as substantially solving the problem, or even bringing a practical solution much closer in reach.\", \"pros\": [\"Interesting idea;\", \"comparison against various attacks.\"], \"cons\": [\"Hard to understand because it was left unclear what is evaluated, at least to readers who are not familiar with a possibly existing implied convention;\", \"The method seems to harm accuracy on clean data a lot, which is the main use case of such a system.\", \"In its current form, I would reject the paper. To make it acceptable, the clarity of presentation, especially of the results, must be improved, but more importantly, more work seems necessary to reduce the currently significant accuracy hit from the method, and the trade-off of quantization level vs. robustness should be addressed.\"], \"minor_feedback\": \"Please review the paper for grammar and spelling errors (e.g.
\\\"BinaryConnect constraints\\\" or the use of \\\"make\\\", which is often not correct).\\n\\nIn Algorithm 1, I suggest to not use 'g', as it may be mis-read as \\\"gradient.\\\" Unless this is a common symbol in this context.\\n\\n\\\"Thus, we propose SQA\\\" warrants another \\\\subsubsection{}, to indicate where \\\\subsubsection{BinaryConnect} ends.\\n\\nSection 2.2's early reference to SQA is a little confusing, since SQA has not formally been defined. I would smooth this a little, e.g. change \\\"SQA can be considered\\\" to \\\"We will see that our SQA, as introduced in the next section, can be considered\\\"\\n\\n\\\"an alternative is to approximate it\\\" probably should be \\\"our approach is to approximate it\\\"\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Limited novelty, but good experimental results\", \"review\": \"The paper proposes to quantize activation outputs in FGSM training. The algorithm itself is not novel. The straight through approach for training quantized network has been used in previous papers, as also pointed out by the authors. The new thing is that the authors found that quantization of activation function improves robustness, and the approach can be naturally combined with FGSM adversarial training. Experimental results show comparable (and slightly worse) results compared to adversarial training with PGD, while the proposed approach is faster in training time.\\n\\nI have the following questions/comments: \\n\\n1. Why not do SQA with PGD-adversarial training? If SQA+FGSM performs similar to PGD training, SQA+PGD might perform even better. \\n\\n2. There are several important papers missing in the discussion/comparisons: \\n- Quantization improves robustness has been reported in a previous paper: \\\"Defend Deep Neural Networks Against Adversarial Examples via Fixed andDynamic Quantized Activation Functions\\\". How does the proposed algorithm compare with this paper? \\n- Adding stochastic noise in each layer has been used in some recent papers: \\\"Towards Robust Neural Networks via Random Self-ensemble\\\". It will be good to include into discussions. \\n\\n3. I can't find the comparison between PGD-training and SQA on MNIST. Are they also comparable on MNIST? Showing results on more datasets will make the conclusion more convincing. If the benefit of the proposed approach is training time, showing the scalability on ImageNet will make the argument stronger.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting work but requires more thorough experiment\", \"review\": \"This paper proposes to use a stochastically quantized network combined with adversarial training to improve the robustness of models against adversarial examples. The main finding is that, compared to a full precision network, the quantized network can generalize to unseen adversarial attacks better while training only on FGSM-perturbed input. This provides a modest speedup over traditional adversarial training.\\n\\nWhile the findings are certainly interesting, the method lacks experimental validation in certain aspects. The comparison with other adversarial training methods is not standardized across networks, making the efficiency claims questionable. Furthermore, I am uncertain whether the authors implemented expectation over transformations (EoT) for the C&W attack. Since the network produces randomized output, vanilla gradient descent against an adversarial loss is likely to fail. It is conceivable that by taking an average over gradients from different quantizations, the C&W adversary would be able to circumvent the defense better. I would be willing to reconsider my review if the authors can address the above weaknesses.\", \"pros\": [\"Surprising result showing that quantization leads to improved generalization to unseen attack methods.\"], \"cons\": [\"Invalid comparison to other adversarial training techniques since the evaluated models are very different.\", \"Lack of evaluation against EoT adversary.\", \"Algorithm 1 is poorly presented. I'm sure there are better ways of expressing such a simple quantization scheme.\", \"Figures 2 and 3 are uninteresting. The fact that the model is robust against adversaries implies that the activations remain unchanged when presented with perturbed input.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
rkxkHnA5tX | Learning from Noisy Demonstration Sets via Meta-Learned Suitability Assessor | [
"Te-Lin Wu",
"Jaedong Hwang",
"Jingyun Yang",
"Shaofan Lai",
"Carl Vondrick",
"Joseph J. Lim"
] | A noisy and diverse demonstration set may hinder the performance of an agent aiming to acquire certain skills via imitation learning. However, state-of-the-art imitation learning algorithms often assume the optimality of the given demonstration set.
In this paper, we address this optimality assumption by learning only from the most suitable demonstrations in a given set. Suitability of a demonstration is estimated by whether imitating it produces desirable outcomes for achieving the goals of the tasks. For more efficient demonstration suitability assessments, the learning agent should be capable of imitating a demonstration as quickly as possible, which shares a similar spirit with fast adaptation in the meta-learning regime. Our framework, built on top of Model-Agnostic Meta-Learning, evaluates how desirable the imitated outcomes are, after adaptation to each demonstration in the set. The resulting assessments hence enable us to select suitable demonstration subsets for acquiring better imitated skills. The videos related to our experiments are available at: https://sites.google.com/view/deepdj | [
"Imitation Learning",
"Noisy Demonstration Set",
"Meta-Learning"
] | https://openreview.net/pdf?id=rkxkHnA5tX | https://openreview.net/forum?id=rkxkHnA5tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"BJg4rnmtxV",
"HJxSenCj6Q",
"SylhzCuiaX",
"SkxapRNs3X"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545317435550,
1542347757506,
1542323732488,
1541258949465
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1506/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1506/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1506/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1506/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The reviewers raised a number of major concerns including the incremental novelty of the proposed (if any), insufficient explanation, and, most importantly, insufficient and inadequate experimental evaluation presented. The authors did not provide any rebuttal. Hence, I cannot suggest this paper for presentation at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}",
"{\"title\": \"Problem of limited scope, with interesting domains but uncompelling final performance\", \"review\": \"Summary/Contributions:\\nThis paper focuses on an imitation learning setup where there some of the provided demonstrations which are irrelevant to the task being considered. The stated contribution of the paper is a MAML based algorithm to imitation learning which automatically determines if the demonstrations are \\\"suitable\\\". The authors also employ a mutual information based maximization term between the demonstrations and the pre-update and post update trajectories.\", \"pros\": [\"The tasks proposed in the problem seem interesting.\"], \"cons\": [\"The problem statement seems to be of limited scope.\", \"The use of the task heuristics seems a bit ad-hoc.\", \"The final policies are unimpressive\"], \"justification_for_rating\": \"The major weakness of this paper in my view are that the setup is of somewhat limited scope since receiving irrelevant demonstrations in the form used by the paper would be unnecessarily costly. The domains considered by the paper seem interesting, but the learned policies are not very compelling. I also feel that the MAML baselines + avg finetuning baselines are somewhat limited giving the new domains. I would appreciate for instance a comparison to off-policy learning methods with demonstrations which the authors discuss in the related work (Hester et al. 2017, Nair et al. 2017, Yang et al. 2018). The justification between using mutual information regularization term also does not seem well-motivated and orthogonal to the problem statement. For instance, a diversity of demonstrations should in principle allow for more information between the demonstrations and the induced change.\", \"other\": \"The writing and grammar of the paper needs serious revision. There are error throughout the paper starting from the abstract.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Artificial problem class which doesn't justify the complexity of the method that doesn't deliver good performance.\", \"review\": \"The problem is described as doing imitation learning from a set of demonstrations that includes useless behavior. Authors propose a method that is an extension of MAML which selects the useful demonstrations by their provided performance gains at the meta-training time.\\n\\nPaper clearly demonstrates significant amount of work. Pieces from different modern method implementations (like MAML, TRPO, GAIL, multiple custom loss functions) are combined to work together. Also four custom task domains are implemented with MuJoCo. Finally decent amount of experiments are run.\\n\\nUnfortunately, all that hard work can't be justified by the motivations that are very artificial in details and by the final task performance.\\n\\nFirst of all, the setup includes small number of demonstrations where almost none of them are seemingly successful (judging by the videos). This is a very artificial setting that does not reflect the actual imitation learning problems like demonstrations provided by humans. There, normally the problem is either dealing with small number of demonstrations that are all typically successful but similarly suboptimal or dealing with small number of distinct demonstrators which are again successful but have significantly different styles. In the summary video, authors motivate the case by learning from sources like internet videos, but that setting is also very far away from the case here, because such video collections are much larger but more importantly the main problem is dealing with the third person perspective. All the experiments here is done from first person demonstrations (in one case with a slightly different body).\\n\\nBiggest caveat of the paper is that it is promoted as a purely imitation learning method. Yet everything hinges on the existence of a \\\"task heuristic\\\" which is nothing but a reward function. If such function exists, all these first person demonstrations can be judged and selected based on that function. There would be no need for a complicated meta-learning scheme. Also the task could be trained directly on that reward by reinforcement learning. Also computation of this heuristic function is not specified. As far as I understand, it is a different quantity than the sparse \\\"Task Success Reward\\\".\\n\\nFinally, the final performance of the imitating agents are far from accomplishing the task, though they show some resemblance to the imitation behavior. This is not all that surprising, given small number of demonstrations and high dimensional control problems.\\n\\nOverall, the details of the setup makes the problem very artificial, the final performance is not impressive. Method is an amalgamation of bunch other recent work, which gives the impression of creating complexity for its own sake. I do not think that this method will be useful for moving the field forward and produce any impact.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"many things unclear, experiments not convincing enough, writing needs improvement.\", \"review\": \"The paper makes its intent plainly clear, it wants to remove the assumption that demonstrations are optimal. Thus it should show that in a case that some demonstrations are bad, it outperforms other methods which assume they are all good. The method proposed, while interesting, well-conceived and potentially novel, is not convincingly tested to this end.\\n\\nThe paper should also show that the method can detect the bad demonstrations, and select the good demonstrations. \\n\\nThe experiments are on toy tasks and not existing tasks in the literature. Why not use an existing dataset/domain and simply noise up the demonstrations?\\n\\nFurthermore, many crucial details are omitted, such as the nature of the heuristic function K, and how precisely the weighting $c_i$ is adapted (section 4.4). Is it done by gradient descent? We would have to know what K is, and if it is differentiable to know this.\\n\\nAlso the writing itself needs a thorough revision.\\n\\nI think there may well be promise in the method, but it does not appear ready for publication.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
BkxkH30cFm | Object-Oriented Model Learning through Multi-Level Abstraction | [
"Guangxiang Zhu",
"Jianhao Wang",
"ZhiZhou Ren",
"Chongjie Zhang"
] | Object-based approaches for learning action-conditioned dynamics have demonstrated promise for generalization and interpretability. However, existing approaches suffer from structural limitations and optimization difficulties for common environments with multiple dynamic objects. In this paper, we present a novel self-supervised learning framework, called Multi-level Abstraction Object-oriented Predictor (MAOP), for learning object-based dynamics models from raw visual observations. MAOP employs a three-level learning architecture that enables efficient dynamics learning for complex environments with a dynamic background. We also design a spatial-temporal relational reasoning mechanism to support instance-level dynamics learning and handle partial observability. Empirical results show that MAOP significantly outperforms previous methods in terms of sample efficiency and generalization over novel environments that have multiple controllable and uncontrollable dynamic objects and different static object layouts. In addition, MAOP learns semantically and visually interpretable disentangled representations. | [
"action-conditioned dynamics learning",
"deep learning",
"generalization",
"interpretability",
"sample efficiency"
] | https://openreview.net/pdf?id=BkxkH30cFm | https://openreview.net/forum?id=BkxkH30cFm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"S1lKcAwHl4",
"SJegxbwTyE",
"HkebUj-MkE",
"SJentx5y1E",
"HyeMx0niRQ",
"BygyoahiC7",
"B1ga323iRm",
"ryge8nniRX",
"rkl_Lshj0Q",
"HJx-4O3oA7",
"S1gwJXrihQ",
"SyeBo445hQ",
"HJg7RcTPn7"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545072272825,
1544544487850,
1543801673293,
1543639171827,
1543388649844,
1543388567365,
1543388341249,
1543388232033,
1543387984120,
1543387176649,
1541259999026,
1541190812958,
1541032650740
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1505/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1505/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1505/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1505/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1505/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1505/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper tackles a very valuable problem of learning object detection and object dynamics from video sequences, and builds upon the method of Zhu et al. 2018. The reviewers point out that there is a lot of engineering steps in the object proposal stage, which takes into account background subtraction to propose objects. In its current form, the writing of the paper is not clear enough on the object instantiation part, which is also the novel part over Zhu et al., potentially due to the complexity of using motion to guide object proposals. A limitation of the proposed formulation is that it works for moving cameras but only in 2d environments. Experiments on 3D environments would make this paper a much stronger submission.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"an interesting formulation for 2D dynamics learning not clearly described\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your constructive suggestions. The entire architecture of our multi-level abstraction framework can be summarized as follows:\", \"step_1\": \"Initialization. Initialize the parameters of all neural networks with random weights respectively.\", \"step_2\": \"Motion Detection Level. Perform foreground detection to produce dynamic region proposals, which potentially have moving objects\", \"step_3\": \"Instance Segmentation Level. Train the dynamic instance segmentation network (including Instance Splitter and Merging Net) by minimizing L_DIS, which includes a proposal loss to focus the dynamic instance segmentation on the dynamic region proposals from Step 2.\", \"step_4\": \"Dynamic learning Level. Train the dynamics learning network (whose forward process is shown as Algorithm 1) by minimizing L_DL, which includes a proposal loss to utilize the dynamic instance proposals generated by the trained dynamic instance segmentation network in Step 3 to facilitate the learning of Object Detector.\\n\\u00a0\\nWe will add these descriptions in the next version of our paper.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for your response and for including the additional experiments with longer rollouts, evaluation on an additional environment, the video demonstrating the approach, and further details in the paper. I believe this does make the paper quite a bit stronger. However, I unfortunately feel two of my major points have not been fully addressed (the first one below being more important than the second) and therefore I am not inclined to change my score.\\n\\nFirst, while the paper is indeed clearer than before, I still do not feel like the architecture has been explained clearly enough for it to be accepted. For example, while I appreciate Algorithm 1, it does not include any information about how the Instance Segmentation level feeds into the Dynamics Learning level. Instead, the answer is buried at the end of the \\\"Prediction and Training Loss\\\" section on page 6: the Dynamics Learning level includes an additional loss term that computes the L2 loss between the masks proposed at the Dynamics Learning level and the Instance Segmentation level. As I mentioned in my original review, I strongly recommend including pseudocode or an algorithm box for the *entire* architecture (not just Dynamics Learning level), as it is very difficult to otherwise understand how all the parts interact.\\n\\nSecond, while I am sympathetic to the fact that model learning in and of itself is difficult, if the point of the model is to be used within an RL system then it really should be be validated against an RL system. Small model errors that might seem insignificant when judged via L2 loss (or whatever metric is chosen) may actually be very problematic when trying to use the model in the context of a larger system. This issue was raised both by myself and by R1, and I do not feel like the response that \\\"applications of the learned dynamics model are not the focus of this paper but remain to be the future work\\\" really addresses this concern. While I don't think this issue is absolutely necessary for acceptance (certainly other model learning papers have not always included an evaluation in a model-based RL system), I think having this would offset some of the concerns about the system being overly complex or specific and would make the paper significantly stronger.\"}",
"{\"title\": \"Additional modular test to better address the reviews' concerns\", \"comment\": \"We conduct modular test to better understand the contribution of each abstraction level (the detailed results are shown in https://github.com/maop2018/maop-video/blob/master/MAOP.pdf ). First, we investigate whether the level of dynamics learning can learn the accurate dynamics model when the coarse region proposals of dynamic instances are given. We remove the other two levels and replace them by the artificially synthesized coarse proposals of dynamic instances to test the independent performance of the dynamics learning level. Specifically, the synthesized data are generated by adding standard Gaussian or Poisson noise on ground-true dynamic instance masks (Figure 1). As shown in Table 1, the level of dynamics learning can learn accurate dynamics of all dynamic objects given coarse proposals of dynamic instances. Similarly, we also test the independent performance of the dynamics instance segmentation level. We replace the foreground proposal generated by the motion detection level with the artificially synthesized noisy foreground proposal. Figure 2 shows cases to demonstrate our learned dynamic instances in the level of dynamic instance segmentation, which demonstrates the competence of the dynamic instance segmentation level. Taken together, the modular test shows that each level of MAOP can independently perform well and has a good robustness to the proposals generated by the more abstracted level.\\n\\nWe also provide the detailed results on Freeway from Atari games, which has a large number of dynamic objects. To test generalization ability, we use first 1800 frames for training and the last 200 frames for testing. As shown in Table 2, our model outperforms the existing modeling methods in this domain. Note that only the ground-true location of the agent is accessible in Arcade Learning Environment, so we just show the quantitative prediction performance of the agent's dynamics. Actually, we observe that the predictions of other dynamic objects are also accurate by comparing the predicted with the ground-true images, as shown in Figure 3. The validation results on Freeway demonstrate that our model is effective for the concurrent dynamics prediction of a large number of objects.\\n\\nWe will add these results in the next version of our paper. We would like to thank again for the reviews' suggestions.\"}",
"{\"title\": \"Response to Reviewer 3 (connected to the previous response)\", \"comment\": \"Q: \\\"to show the detailed structure of the Effect Net module.\\\" First time I see the name 'Effect Net', what is it? This whole paragraph different nets are named, with a rough indication of their relation, such as \\\"Dynamic Net\\\", \\\"Relation Net\\\" and \\\"Inertia Net\\\". Is \\\"Effect Net\\\" a different name for any of the three previous nets? The paper requires the reader to puzzle from Fig.2 that Relation Net and Inertia Net are parts of Effect Net, which in turn is part of Dynamics Net. This wasn't clear from the text at all.\", \"a\": \"We describe how MAOP and baseline methods differ as follows and added these descriptions in Section 4. AC Model adopts an encoder-LSTM-decoder structure, which performs transformations in hidden space and constructs pixel predictions. CDNA explicitly models pixel motions to achieve invariance to appearance. OODP and MAOP both aim at learning object-level dynamics through an object-oriented learning paradigm, which decomposes raw images into objects and perform predictions based on object-level relations. OODP is only designed for class-level dynamics, while MAOP is able to learn instance-level dynamics.\", \"q\": [\"\\\"We compare MAOP with state-of-the-art action-conditioned dynamics learning baselines, ...\\\" Please re-iterate how these methods differ in assumptions, what they model, with respect to your novel method? For instance, is the main difference your \\\"novel region proposal method\\\" and such? Is the overall architecture different? E.g. explain here already the AC Model uses \\\"pixel-level inference\\\", and that OODP has \\\"lacks knowledge on object-to-object relations\\\" to underline their difference to your approach, and provide context for your conclusions in Section 4.1.\"], \"references\": \"[1] Vijayanarasimhan, Sudheendra, et al. \\\"Sfm-net: Learning of structure and motion from video.\\\" arXiv preprint arXiv:1704.07804 (2017).\\n[2] Ren, Shaoqing, et al. \\\"Faster r-cnn: Towards real-time object detection with region proposal networks.\\\" Advances in neural information processing systems. 2015.\\n[3] Chiappa, Silvia, et al. \\\"Recurrent environment simulators.\\\" arXiv preprint arXiv:1704.02254 (2017).\\n[4] Finn, Chelsea, and Sergey Levine. \\\"Deep visual foresight for planning robot motion.\\\" Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017.\\n[5] Racani\\u00e8re, S\\u00e9bastien, et al. \\\"Imagination-augmented agents for deep reinforcement learning.\\\" Advances in Neural Information Processing Systems. 2017.\\n[6] Deisenroth, Marc Peter, Carl Edward Rasmussen, and Dieter Fox. \\\"Learning to control a low-cost manipulator using data-efficient reinforcement learning.\\\" (2011): 57-64.\\n[7] Pathak, Deepak, et al. \\\"Curiosity-driven exploration by self-supervised prediction.\\\" International Conference on Machine Learning (ICML). Vol. 2017. 2017.\\n[8] Srinivas, Aravind, et al. \\\"Universal Planning Networks.\\\" arXiv preprint arXiv:1804.00645 (2018).\\n[9] Kulkarni, Tejas D., et al. \\\"Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation.\\\" Advances in neural information processing systems. 2016.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your thoughtful review and suggestions.\", \"q\": \"\\\"An object mask describes the spatial distribution of an object ...\\\" Does the distribution capture uncertainty on the object's location, or does it capture the spread of the object's extent ('mass distribution') ?\", \"a\": \"It captures the spread of the object's extent. An object mask describes the spatial distribution of a class of objects. Each entry of one object mask represents the probability that the corresponding pixel belongs to this class of objects.\"}",
"{\"title\": \"Response to Reviewer 1 (connected to the previous response)\", \"comment\": \"Q: The writing of this paper makes it a bit hard to understand what the novel contributions of this paper are, and how the proposed method should go beyond the two problems that it solves. In general, there are many phrasings that would benefit from being rewritten more concisely; it would help with clarity, since the proposed model has a multitude of different parts with sometimes long names.\\n\\nExperimentally, there are many parts to the proposed model, and while it is clear what each of them achieves, it is unclear how necessary each of the parts are, and how sensitive the model is to any part being (possibly slightly) incorrect.\\n\\nThe proposed method is tested on, presumably, RL environments; yet, no RL experiments are performed, so there is no way of knowing if the proposed model is actually useful for planning (there are instances of model-based methods learning acceptable models that are just wrong enough to *not* be useful to actually do RL or e.g. MCTS planning).\", \"a\": \"We have addressed these concerns in the general response to all reviewers.\", \"references\": \"[1] BPL Lo and SA Velastin. Automatic congestion detection system for underground platforms. In Intelligent Multimedia, Video and Speech Processing, 2001. Proceedings of 2001 International Symposium on, pp. 158\\u2013161. IEEE, 2001.\\n[2] Dar-Shyang Lee. Effective gaussian mixture learning for video background subtraction. IEEE Transactions on Pattern Analysis & Machine Intelligence, (5):827\\u2013832, 2005.\\n[3] Xiaowei Zhou, Can Yang, and Weichuan Yu. Moving object detection by detecting contiguous outliers in the low-rank representation. IEEE Transactions on Pattern Analysis and Machine 427 Intelligence, 35(3):597\\u2013610, 2013.\\n[4] Xiaojie Guo, Xinggang Wang, Liang Yang, Xiaochun Cao, and Yi Ma. Robust foreground detection using smoothness and arbitrariness constraints. In European Conference on Computer Vision, pages 535\\u2013550. Springer, 2014.\\n[5] Lucia Maddalena, Alfredo Petrosino, et al. A self-organizing approach to background subtraction for visual surveillance applications. IEEE Transactions on Image Processing, 17(7):1168,433 2008.\\n[6] Watters, Nicholas, et al. \\\"Visual interaction networks.\\\" arXiv preprint arXiv:1706.01433 (2017).\\n[7] Wu, Jiajun, et al. \\\"Learning to see physics via visual de-animation.\\\" Advances in Neural Information Processing Systems. 2017.\\n[8] Vijayanarasimhan, Sudheendra, et al. \\\"Sfm-net: Learning of structure and motion from video.\\\" arXiv preprint arXiv:1704.07804 (2017).\\n[9] He, Kaiming, et al. \\\"Mask r-cnn.\\\" Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017.\\n[10] Liu, Li, et al. \\\"Deep learning for generic object detection: A survey.\\\" arXiv preprint arXiv:1809.02165 (2018).\\n[11] Denton, Emily L. \\\"Unsupervised learning of disentangled representations from video.\\\" Advances in Neural Information Processing Systems. 2017.\\n[12] Higgins, Irina, et al. \\\"beta-vae: Learning basic visual concepts with a constrained variational framework.\\\" International Conference on Learning Representations. 2017.\\n[13] Hsieh, Jun-Ting, et al. \\\"Learning to Decompose and Disentangle Representations for Video Prediction.\\\" arXiv preprint arXiv:1806.04166 (2018).\\n[14] Mnih, Volodymyr, Kavukcuoglu, Koray, Silver, David S, and Rusu, Andrei A. et al. Human-level control through deep reinforcement learning. 
Nature, 2015\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your review and suggestions.\", \"q\": \"When running your experiments, do you report results averaged over multiple runs? - Figure 4+C7: why does the x-axis start at 2000? - I don't think Figure 5 is really necessary - All figures: your captions could be improved by giving more information about what their figure presents. E.g. in Figure C7 I have no idea what the curves correspond to. Sure it's accuracy, but for which task? How many runs? Is it a running average? Etc. - Where are the test curves? Or are all curves test curves? - Your usage of \\\\citep and \\\\citet, (Author, year) vs Author (year), is often inconsistent with how the citation is used.\", \"a\": \"To make the training process more efficient and stable in deep learning, there are usually a certain number of frames (2000 frames in our experiment) collected for populating the training buffer before learning start, which is also adopted in RL algorithms such as [14]. Thus, the first 2000 iterations are only used to collect an initial dataset and the learning process starts at iteration 2001. All the figures are test curves. Figure C7 plots the learning curves for the dynamics prediction in unseen Monster Kong environments . The curves with \\\"Agent\\\" notation illustrate the learning processes for the dynamics of the agent, while those with \\\"All\\\" notation indicate the learning curves of all dynamic objects.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your thoughtful review and suggestions.\", \"q\": \"Overall, the idea of learning object-based transition models is not really new (and there are a few citations missing regarding prior work in this regard, e.g. [4-6]). However, there is yet to be an accepted solution for actually learning object-based models robustly and the present work seems to result in the cleanest separation between dynamic objects and background that I have seen so far, and is therefore quite original in that regard.\", \"a\": \"We have properly cited the papers according to the review\\u2019s suggestion. [5,6] assumed that the object localization and tracking, or the object representations are given, and directly used them to learn the object-based dynamics model. [4] proposed to use a realistic physics engine called Bullet physics engine to perceive physical object properties. Unlike them, our novelty lies in developing a self-supervised neural network framework that automatically learns object representations and object-based dynamics from raw visual observations and demonstrating the generalization ability of this framework over novel environments with multiple dynamic objects and different object layouts.\\n\\n[4] Wu, Yildirim, Lim, Freeman, & Tenenbaum (2015). Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning. NIPS 2015.\\n[5] Fragkiadaki, Agrawal, Levine, & Malik (2016). Learning visual predictive models of physics for playing billiards. ICLR 2016.\\n[6] Kansky, Silver, Mely, Eldawy, Lazaro-Gredilla, Lou, Dorfman, Sido, Phoenix, & George (2017). Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics. ICML 2017.\"}",
"{\"title\": \"General response to all reviewers\", \"comment\": \"We thank all reviewers for their feedback and thoughtful comments and suggestions, which are helpful for improving the quality of our paper. In this updated paper, we have revised our manuscript according to their comments and suggestions. Below, we describe in detail how we have modified our paper to address the reviewers\\u2019 feedback.\\n\\n1. We refined the presentation of our method, and added additional descriptions to illustrate the high-level intuition of the architecture design and clarified our main contributions.\\n\\n2. We compared our model with baselines in terms of the long-term predictions in unseen environments.\\n\\n3. We add a video for better perceptual understanding of the prediction performance in unseen environments.\\n\\nIn addition, we take this opportunity to emphasize the main contributions of this paper:\\n\\n1. We propose a novel self-supervised, object-oriented dynamics learning framework to enable sample-efficient learning and zero-shot generalization over novel environments that have multiple controllable and uncontrollable dynamic objects and different static object layouts.\\n\\n2. Our approach takes a step towards interpretable deep learning and disentangled representation learning. It learns disentangled representations and visually and semantically interpretable knowledge, which contributes to understanding the logic behind the dynamics prediction and opens the avenue for further researches on object-based planning, object-oriented model-based RL, and hierarchical learning.\\n\\n3. We provide a general multi-level framework for learning object-based dynamics model from raw visual observations, which offers opportunities to easily leverage the well-studied object detection methods (e.g. Mask R-CNN [He et al., 2017]) in the computer vision area.\\n\\nOur main objective lies in learning generalizable and interpretable dynamics from raw visual observations, which is a general-purpose task for AI and potentially benefit a broad range of domains. For example, the learned dynamics model can guide the exploration of model-free RL [Chiappa et al., 2017], be used with existing policy search or planning methods (e.g., MCTS and MPC) [Finn et al., 2017], or directly plugged into an end-to-end policy network integrating model-free and model-based path [Weber et al., 2018]. The prediction error of our dynamics model can be used as signals for curiosity-driven exploration [Pathak et al., 2017]. Our learned object representations can be leveraged to design effective heuristic reward functions (like the distance-based rewards [Srinivas et at., 2018]) to facilitate model-free RL, or used to set subgoals in hierarchical RL [Kulkarni et al., 2016]. However, these applications of the learned dynamics model are not the focus of this paper but remain to be the future work. \\n\\nWe also want to address the general concern about universality of our approach. The assumption of our method is that the environment only contains rigid objects and has no camera motion. Under this assumption, we choose another game Freeway from Atari Game to test our model and get similar performance with Monsterkong and Flappy Bird. To test generalization ability, we use first 1800 frames for training and the last 200 frames for testing. For the training frames, our model achieves 0.80, 0.91, and 0.94 for 0-error, 1-error, and 2-error accuracy, respectively. 
For the testing frames, our model achieves 0.79, 0.89, and 0.94 for 0-error, 1-error, and 2-error accuracy, respectively.\\n\\n[1] He, Kaiming, et al. \\\"Mask r-cnn.\\\" Computer Vision (ICCV), 2017 IEEE International Conference on. IEEE, 2017.\\n[2] Chiappa, Silvia, et al. \\\"Recurrent environment simulators.\\\" arXiv preprint arXiv:1704.02254 (2017).\\n[3] Finn, Chelsea, and Sergey Levine. \\\"Deep visual foresight for planning robot motion.\\\" Robotics and Automation (ICRA), 2017 IEEE International Conference on. IEEE, 2017.\\n[4] Racani\\u00e8re, S\\u00e9bastien, et al. \\\"Imagination-augmented agents for deep reinforcement learning.\\\" Advances in Neural Information Processing Systems. 2017.\\n[5] Pathak, Deepak, et al. \\\"Curiosity-driven exploration by self-supervised prediction.\\\" International Conference on Machine Learning (ICML). Vol. 2017. 2017.\\n[6] Srinivas, Aravind, et al. \\\"Universal Planning Networks.\\\" arXiv preprint arXiv:1804.00645 (2018).\\n[7] Kulkarni, Tejas D., et al. \\\"Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation.\\\" Advances in neural information processing systems. 2016.\"}",
"{\"title\": \"Nice results but difficult to understand\", \"review\": \"This paper proposes a new architecture for learning dynamics models in 2D Atari-like game words. The architecture includes multiple layers of abstraction: a \\u201cmotion detection\\u201d level, which looks at which pixels change over time in order to guess at which parts of the image are in the foreground or not; a \\u201cinstance segmentation\\u201d level, which segments the foreground into regions and instances; and a \\u201cdynamics learning\\u201d level, which learns the dynamics of object instances using a interaction network-style approach.\", \"pros\": [\"Impressive-looking dynamics predictions in Atari-like games.\", \"An object-based prediction model, which could enable predictions about specific entities in the scene rather than holistic frame predictions.\"], \"cons\": \"- Very complicated and difficult-to-understand architecture.\\n- No ablation studies to validate different components of the architecture.\\n- No validation in a model-based RL or control setting.\\n- Experiments are only done on one-step predictions, rather than long-term rollouts.\\n\\nQuality\\n---------\\n\\nThe quality of the predictions seems quite high (based on Figure 6 and the results tables), though there are a number of opportunities to further strengthen the evaluation and analysis:\\n\\n- I wish that there were more than a single figure of qualitative results to go on. I highly recommend that a revision include a link to a video showing more predictions over time for each environment, ideally with comparisons to the other baselines as well.\\n- The introduction of the paper motivates the learning of the model in terms of model-based RL, however, the model is not actually used in a model-based RL setting. It would be nice to see at least a simple validation that the model can be used with an off-the-shelf planner to solve one of the games which are evaluated in the paper. If it cannot, then that limits the significance of the model.\\n- As far as I can tell, all the results reported in the tables are based on one-step predictions only. While it is great to show that even in this regime the other models struggle, it would be even better if results could be reported for longer rollouts (i.e., taking the model outputs and feeding it back in as input, and repeating this procedure say 50 steps into the future). Models are not particularly useful in a MBRL setting if they can only be used to predict a single timestep, so it is important to validate that longer-term predictions can be made as well.\\n\\nOverall the literature review is reasonably solid, but I am not sure the citations in the opening sentence are quite appropriate as model-based DRL has been around for longer than 2017 (see for example [1-3]). Moreover, Chiappa et al (2017) only learns a model and does not use it for planning, so I am not sure it is quite appropriate as a citation for MBRL. \\n\\n\\nClarity\\n--------\\n\\nUnfortunately, I had a very hard time understanding how exactly the architecture works and I felt like there were a lot of details missing. I am not confident that I would be able to reproduce the architecture from reading the paper alone. 
Below, I will list some of the specific points where I was confused, but I think overall the paper needs to be substantially reorganized in order to be clearer as to how the architecture actually works.\\n\\nMore broadly, I think some of my confusion stems from the fact that there are very similar computations occurring across the three levels of abstraction but the paper does not really make it clear how these computations relate to one another or how they are similar/different. For example, in the \\u201cdynamics learning\\u201d level there are modules for performing object detection and instance localization. But then in the \\u201cinstance segmentation\\u201d level, there are similarly modules for detecting and masking out instances. It is not clear to me why this needs to be done twice? \\n\\nIn general, I would *strongly* recommend including at least in the appendix an algorithm box that sketches out the computational graph for the whole architecture (not in as much detail as the existing algorithm boxes, but in more detail than what is given in Figure 1).\", \"specific_places_where_i_was_confused\": \"- Where do the region proposals (P) come from?\\n- If I\\u2019m understanding correctly, the variable M is used multiple times in multiple different ways. It seems to be produced from the \\u201cinstance localization\\u201d module in the \\u201cdynamics learning\\u201d level, but also from the \\u201cdynamic instance segmentation network\\u201d in the \\u201cinstance segmentation\\u201d level. Are these M different or the same?\\n- Where does F_foreground^(t) come from?\\n\\n\\nOriginality\\n-------------\\n\\nOverall, the idea of learning object-based transition models is not really new (and there are a few citations missing regarding prior work in this regard, e.g. [4-6]). However, there is yet to be an accepted solution for actually learning object-based models robustly and the present work seems to result in the cleanest separation between dynamic objects and background that I have seen so far, and is therefore quite original in that regard.\\n\\nThis paper appears to be quite similar to Zhu & Zhang (2018), with the main difference being additional functionality to handle multiple dynamic objects in a scene rather than just a single dynamic object. This is a fairly significant difference and the improvement over Zhu & Zhang (2018) seems quite large, so even though the papers seem quite similar on the surface I think the difference is actually quite substantial.\\n\\nSignificance\\n----------------\\n\\nIf it were clearer how to reproduce this paper, and if it could be shown to apply to a wider range of environments (e.g. the Atari suite, or even better the Sonic domains from the OpenAI Retro contest), then I believe this paper could be quite significant as it would open up new avenues for model-based learning in these domains. Unfortunately, however, it is not clear to me as the paper is currently written how well it would do on other 2D environments, thus limiting the significance. If the model only works on Monster Kong and Flappy Bird---neither of which are commonly used in the RL literature---then it has limited applicability to the rest of the model-based RL community. Similarly, as stated above, it is not clear how well the model will work with longer rollouts or in actual in MBRL settings, thus limiting its significance.\\n\\nReferences\\n---------------\\n\\n[1] Heess, Wayne, Silver, Lillicrap, Tassa, & Erez (2015). 
Learning Continuous Control Policies by Stochastic Value Gradients. NIPS 2015.\\n[2] Gu, Lillicrap, Sutskever, & Levine (2016). Continuous Deep Q-Learning with Model-based Acceleration. ICML 2016.\\n[3] Schmidhuber (2015). On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models. arXiv 2015.\\n[4] Wu, Yildirim, Lim, Freeman, & Tenenbaum (2015). Galileo: Perceiving Physical Object Properties by Integrating a Physics Engine with Deep Learning. NIPS 2015.\\n[5] Fragkiadaki, Agrawal, Levine, & Malik (2016). Learning visual predictive models of physics for playing billiards. ICLR 2016.\\n[6] Kansky, Silver, Mely, Eldawy, Lazaro-Gredilla, Lou, Dorfman, Sido, Phoenix, & George (2017). Schema Networks: Zero-shot Transfer with a Generative Causal Model of Intuitive Physics. ICML 2017.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
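The closed-loop, long-horizon evaluation requested in the review above can be sketched in a few lines: feed the model's own prediction back in as input at every step rather than the ground-truth frame, and compare the drift against ground truth. The interface `model(stacked_history, action) -> next_frame` is an assumed API for illustration, not the paper's actual code.

```python
import torch

def closed_loop_rollout(model, obs_history, actions):
    # obs_history: list of the most recent conditioning frames (tensors);
    # actions: iterable of actions to roll the model forward on.
    history = list(obs_history)
    preds = []
    with torch.no_grad():
        for a in actions:
            nxt = model(torch.stack(history), a)
            preds.append(nxt)
            history = history[1:] + [nxt]   # slide the conditioning window
    return torch.stack(preds)               # (T, ...) predicted frames
```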
"{\"title\": \"Review\", \"review\": \"This paper proposes a novel architecture, coined Multi-Level Abstraction Object-Oriented Predictor, MAOP. This architeture is composed of 3 parts, a Dynamics model, an object segmentation model, and a motion detection module.\\n\\nWhile some parts of the model use handcrafted algorithms to extract data (e.g. the motion detection), most parts are learned and can be trained without much additional supervision, as the objectives are mostly unsupervised objectives.\\n\\nThe proposed model is interesting, and certainly \\\"solves\\\" the two tasks it is trained on. On the other hand, this model seems to be specifically tailored to solve these two tasks. It assumes a static background, very local newtonian-like physics, a very strong notion of object and object class. It is not clear to me if any of the improvements seen in this paper are valuable, reusable methods, or just good engineering work.\\nAs such, I do not think that this paper fits ICLR. There has been a growing number of works that aim to find learning algorithms that learn to discover and disentangle object-like representations without having so much prior put into the model, but rather through some general purpose objective. The current paper seems like a decent applications paper, but it explores improvements orthogonal to this trend that IMO is what preoccupies the ICLR audience.\\n\\nThe writing of this paper makes it a bit hard to understand what the novel contributions of this paper are, and how the proposed method should go beyond the two problems that it solves. In general, there are many phrasings that would benefit from being rewritten more concisely; it would help with clarity, since the proposed model has a multitude of different parts with sometimes long names.\\n\\nExperimentally, there are many parts to the proposed model, and while it is clear what each of them achieves, it is unclear how necessary each of the parts are, and how sensitive the model is to any part being (possibly slightly) incorrect.\\n\\nThe proposed method is tested on, presumably, RL environments; yet, no RL experiments are performed, so there is no way of knowing if the proposed model is actually useful for planning (there are instances of model-based methods learning acceptable models that are just wrong enough to *not* be useful to actually do RL or e.g. MCTS planning).\\n\\nOverall, this paper tackles its tasks in an interesting but maybe too specific way; in addition, it could be improved in a variety of ways, both in terms of presentation and content. While the work is novel, I am not convinced that it is relevant to the interests of the ICLR audience.\", \"comments\": [\"When running your experiments, do you report results averaged over multiple runs?\", \"Figure 4+C7: why does the x-axis start at 2000?\", \"I don't think Figure 5 is really necessary\", \"All figures: your captions could be improved by giving more information about what their figure presents. E.g. in Figure C7 I have no idea what the curves correspond to. Sure it's accuracy, but for which task? How many runs? Is it a running average? Etc.\", \"Where are the test curves? Or are all curves test curves?\", \"Your usage of \\\\citep and \\\\citet, (Author, year) vs Author (year), is often inconsistent with how the citation is used.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Proposed model can betrained sucessfully on video game frames, but appears highly engineered and not very generic. Paper could be structured better to improve readibility\", \"review\": \"In this paper, the novel MAOP model is described for self-supervised learning on past video game frames to predict future frames. The presented results indicate that the method is capable of discovering semantically important visual components, and their relation and dynamics, in frames from arcade-style video games. Key to the approach is a multi-level self-learning approach: more abstract stages focus on simpler problems that are easier to learn, which in turn guide the learning process at the more complex stages.\\nA downside is that it the method is complex, consisting of many specific sub-components and algorithms, which in turn have again other sub-components. This makes the paper a long read with a lot of repetition, and various times the paper refers to the names of sub-components that are only explained later. Other methodological details that are relevant to understand how the method operates are described in the Appendices. I expect that if the paper would be better structured, it would be easier to understanding how all the parts fit together. Another downside of this complexity is that the method seems designed for particular types of video game frames, with static backgrounds, a fixed set of objects or agents. It is unclear how the method would perform on other types of games, or on real-world videos. While the method therefore avoids the need for manual annotation, it instead encodes a lot of domain knowledge in its design and components.\\nI also didn't fully understand how the self-supervised model is used for Reinforcement Learning in the experiments. Is the MAOP first trained, and the fixed to perform RL with the learned agent models, or is the MOAP learned end-to-end during RL?\", \"pros\": [\"MAOP seems successful on the tested games in the experiments\", \"Demonstrates that, with a sufficiently engineered method, self-supervised learning can be used to discover different types of objects, and their dynamics.\"], \"cons\": [\"writing could be improved, as the methodology currently reads as a summation of facts, and some parts are written out of order, resulting in various forward references to components that only become clear later. Several times, the paper states that some novel algorithm is used, but then provides no further explanation in the text as all description of this novelty is deferred to an appendix.\", \"method does not seem generic, hence it is unclear how relevant this architecture it is to other use cases\", \"many hyperparameters for the individual components, algorithms. Unclear how these parameter setting affect the results\"], \"below_are_more_detailed_comments_and_questions\": \"\", \"general_comments\": [\"The proposed MOAP method consists of many subalgorithms, resulting in various (hyper)parameters which may impact the results (e.g. see Appendix A, B). Appendix D lists several used hyperparameter settings, though various parameters for the algorithms are still missing (e.g. thresholds alpha, beta in Algo.2). Were the used parameters optimized? How are these hyperparameters set in practice? How does changing them impact your results?\", \"Methods seems particularly designed for 'video games', where the object and background structures have well defined sizes, appearance, etc. 
How will the MAOP fare in more realistic situations with noisy observations, occluded objects, changing appearances and lighting conditions, etc.?\", \"How about the changing appearance of an agent during an action, e.g. a 'walking animation'? Can your method learn the sequence of sprites to accurately predict the next image? Is that even part of the objective?\", \"Appendix D has important implementation details, but is never mentioned in the text, I believe! Didn't realize it existed on first read through.\", \"Introduction:\", \"What prediction horizon are you targeting? 1 step, T steps into the future, 1 to T steps in the future simultaneously?\", \"What are you trying to predict? Object motion? Future observations?\", \"\\\"... which includes a CNN-based Relation Net to ... \\\", the names Relation Net, Inertia Net, etc. are used as if the reader is expected to know what these are already. If these networks were introduced in related work already, please add citations. Otherwise please rephrase to clarify that these networks themselves are part of your novel design.\", \"Section 3.1\", \"\\\"It takes multiple-frame video images ... and produce the predictions of raw visual observations.\\\". As I understand from this, the self-supervised approach basically performs supervised learning to predict a future frame (target output) given past frames (input). I do not understand how this relates to Reinforcement Learning (RL) as mentioned in the introduction and Related Work. Is there still some reward function in play when learning the MAOP parameters? Or is the idea to first learn the MAOP self-supervised, and afterwards fix its parameters and use it in a separate RL framework? I believe RL is not mentioned anymore until Section 4.2. This connection between self-supervised and reinforcement learning should be clarified, or otherwise the related work should be adjusted to include other (self-supervised) work on predicting future image frames.\", \"\\\"An object mask describes the spatial distribution of an object ...\\\" Does the distribution capture uncertainty on the object's location, or does it capture the spread of the object's extent ('mass distribution')?\", \"\\\"Note that Object Detector uses the same CNN architecture with OODP\\\". What does OODP stand for? Add citation here. (first mention of OODP is in Experiments section)\", \"\\\"(similar with Section 3.2)\\\" \\u2192 \\\"similar to\\\". Also, I find it confusing to say something is similar to what will be done in a future section, which has not yet been introduced. Can you not explain the procedure here, and in Section 3.2 say that the procedure is \\\"similar to Section 3.1\\\" instead?\", \"\\\"to show the detailed structure of the Effect Net module.\\\" First time I see the name 'Effect Net', what is it? Throughout this whole paragraph, different nets are named with a rough indication of their relation, such as \\\"Dynamic Net\\\", \\\"Relation Net\\\" and \\\"Inertia Net\\\". Is \\\"Effect Net\\\" a different name for any of the three previous nets? The paper requires the reader to puzzle from Fig.2 that Relation Net and Inertia Net are parts of Effect Net, which in turn is part of Dynamics Net. This wasn't clear from the text at all.\", \"Section 3.2:\", \"p7.: \\\"Since DISN leans\\\" \\u2192 \\\"Since DISN learns\\\" ?\", \"There are many losses throughout the paper, but I only see at the end of Section 3.1 some mention that multiple losses are combined.
How is this done for the other components, e.g. is the total loss for DISN a weighted sum of L_foreground and L_instance? Are the losses for all three MAOP levels weighted for full end-to-end learning?\", \"This section states various times \\\"we propose a novel [method]\\\", for which no explanation is then given, and all details are explained in the Appendix. While the Appendix can hold important implementation details, I would still expect that novelties of the paper are clearly explained in the paper itself. As it stands, the appendix is used as an extension of the methodological section of an already lengthy paper.\", \"\\\"Conversely, the inverse function is ... \\\" M has a mask for each of the n_o \\\"object classes\\\", hence the \\\"Instance Localization Module\\\" earlier to split out instances from the class masks. So how can there be a single motion vector STN^-1(M,M') if there are multiple instances for an object mask? How will STN^-1 deal with different amounts of instances in M and M'?\", \"Section 3.3:\", \"What is the output of this level? I expect some mathematical formulation as in the previous sections, resulting in some symbol that is then used in Section 3.2. E.g. is the output \\\"foreground masks F\\\" (found in Appendix A)? This paper is a bit of a puzzle through the pages for the reader.\", \"Section 4:\", \"\\\"We compare MAOP with state-of-the-art action-conditioned dynamics learning baselines, ...\\\" Please re-iterate how these methods differ in assumptions and in what they model, with respect to your novel method. For instance, is the main difference your \\\"novel region proposal method\\\" and such? Is the overall architecture different? E.g. explain here already that the AC Model uses \\\"pixel-level inference\\\", and that OODP \\\"lacks knowledge on object-to-object relations\\\", to underline their difference from your approach, and provide context for your conclusions in Section 4.1.\", \"Appendix A:\", \"Algorithm 1, line 7: \\\"sample a pixel coordinate\\\" \\u2192 is this non-deterministic sampling?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
ByfyHh05tQ | Learning to Design RNA | [
"Frederic Runge",
"Danny Stoll",
"Stefan Falkner",
"Frank Hutter"
] | Designing RNA molecules has garnered recent interest in medicine, synthetic biology, biotechnology and bioinformatics since many functional RNA molecules were shown to be involved in regulatory processes for transcription, epigenetics and translation. Since an RNA's function depends on its structural properties, the RNA Design problem is to find an RNA sequence which satisfies given structural constraints. Here, we propose a new algorithm for the RNA Design problem, dubbed LEARNA. LEARNA uses deep reinforcement learning to train a policy network to sequentially design an entire RNA sequence given a specified target structure. By meta-learning across 65000 different RNA Design tasks for one hour on 20 CPU cores, our extension Meta-LEARNA constructs an RNA Design policy that can be applied out of the box to solve novel RNA Design tasks. Methodologically, for what we believe to be the first time, we jointly optimize over a rich space of architectures for the policy network, the hyperparameters of the training procedure and the formulation of the decision process. Comprehensive empirical results on two widely-used RNA Design benchmarks, as well as a third one that we introduce, show that our approach achieves new state-of-the-art performance on the former while also being orders of magnitude faster in reaching the previous state-of-the-art performance. In an ablation study, we analyze the importance of our method's different components.
| [
"matter engineering",
"bioinformatics",
"rna design",
"reinforcement learning",
"meta learning",
"neural architecture search",
"hyperparameter optimization"
] | https://openreview.net/pdf?id=ByfyHh05tQ | https://openreview.net/forum?id=ByfyHh05tQ | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"Hye-Eey7lV",
"BklQ-Y9PkV",
"rkeF2D9PJV",
"Syl-dLf4yN",
"HygIRdFbyN",
"rJxME7y-yE",
"BkAgSNekN",
"BkxvU5i6Cm",
"r1e-JqspC7",
"Bygfptj6A7",
"HylyHJLcCQ",
"ByguVuyUCQ",
"B1ewpv1ICm",
"BklaXw1LCm",
"B1ecw8JLAX",
"SkgXlBJLRX",
"SJgLJQJ8Am",
"H1eWdO9lTQ",
"rkxet6Pa27",
"HJgr7SfxnX"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544904744955,
1544165626662,
1544165297312,
1543935593278,
1543768269889,
1543725866024,
1543681269514,
1543514703286,
1543514584914,
1543514553993,
1543294775156,
1543006256318,
1543006142859,
1543005988530,
1543005794468,
1543005418560,
1543004894271,
1541609576913,
1541401976511,
1540527389278
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1504/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1504/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1504/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1504/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1504/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1504/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1504/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"After a healthy discussion between reviewers and authors, the reviewers' consensus is to recommend acceptance to ICLR. The authors thoroughly addressed reviewer concerns, and all reviewers noted the quality of the paper, methodological innovations and SotA results.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Consensus is accept\"}",
"{\"title\": \"Indeed, parentheses :)\", \"comment\": \"Thanks, we did not think of the interpretation \\\"Neural (Architecture Search)\\\". Now being very aware of the two different possible interpretations of NAS, we will be sure to use a wording that avoids the confusion. Thanks again!\"}",
"{\"title\": \"Thanks\", \"comment\": \"Thanks for these references. We already cited the first two in Section 5\\n(Joint Architecture and Hyperparameter Search, top of page 6), but we now\\nthink that a brief paragraph on architecture search and hyperparameter\\noptimization in the related work section would be useful as well, where all\\nthese references will be a natural fit. Thanks again for the helpful\\nfeedback and for increasing your rating!\"}",
"{\"title\": \"Parentheses\", \"comment\": \"Thanks for updating the text to avoid confusion.\\n\\nI suppose the interpretation is dependent on parentheses. :) \\n(Neural Architecture) Search is [presumably] yours\\nNeural (Architecture Search) is mine\\n\\nI would argue that the paper by Quoc Le's group does not call their method NAS. Rather they frame it as 'searching the NASNet space' with an evolutionary strategy.\"}",
"{\"title\": \"Suggest accepting\", \"comment\": \"I also suggest accepting the paper as an application paper. Although methodological contributions are limited, the evaluation is strong and results promising. The authors clearly addressed my comments and revised their manuscript (see my comments and their answers below).\"}",
"{\"title\": \"References hyperparameter optimization\", \"comment\": [\"References hyperparameter optimization:\", \"https://arxiv.org/abs/1808.05377\", \"https://arxiv.org/abs/1611.01578\", \"https://arxiv.org/pdf/1807.07663.pdf\"]}",
"{\"title\": \"Increased rating\", \"comment\": [\"Thanks for your final answers and changes. I increased the rating of your paper to 8. Tl;DR;\", \"methodological contributions existing but incremental\", \"comprehensive evaluation and experiments\", \"strong application paper overall\"]}",
"{\"title\": \"Reply to update\", \"comment\": \"\\u201cI'm happy with the revisions the authors have made, as I find that they call out the novel contributions a bit more explicitly. Specifically I see some novel work in the area of simultaneous multi-task/meta-RL and black box optimization of the policy net architectures. I don't think calling this NAS is justified; calling it bayesopt or black box opt is fair. NAS uses a neural net to propose experiments over structured graphs of computation nodes. This work appears to be simpler hyperparameter optimization.\\u201d\\n\\n--> Thanks for the positive feedback, and for seeing some novel work in the area of simultaneous multi-task/meta-RL and black box optimization of the policy net architectures. We agree that much of the current work on NAS does indeed use neural nets to propose experiments over structured graphs of computation nodes, and to not be confusing we\\u2019ll reword. For completeness, we would like to mention, however, that not all NAS methods fall into that category; specifically, most current NAS papers use a cell search space of fixed dimensionality, and the method that has the best published performance (regularized evolution, by Quoc Le\\u2019s group at Google Brain [https://arxiv.org/abs/1802.01548], better than reinforcement learning by the same group and others) does *not* use a neural network but a simpler hyperparameter optimization method with a fixed dimensionality approach through genetic algorithms. But this is really not important for this paper and we will simply reword to the non-contentious term \\u201cjoint optimization of architectural choices, state description hyperparameters, and RL algorithm hyperparameters\\u201d. \\n\\nThanks again for the positive reply and update!\"}",
"{\"title\": \"Reply to minor comments 1/2\", \"comment\": \"Thank you for appreciating our detailed rebuttal and our revised manuscript. We also thank you for again pointing out that our work is a strong application paper (as we mentioned in our top level comment, applications are specifically listed as relevant in the ICLR call for papers, including applications in computational biology and other fields).\\n\\n\\n# 1a. Hyper-parameter optimization\\n\\\"I still believe that defining parameters of the neural network architectures in addition to optimization parameters is not a strong methodological contribution. This is rather common practice in reinforcement learning although often not described in detail in manuscripts. Methods for optimizing both discrete and continuous hyper-parameter had been described before, including Spearmint or Hyperopt. That said, I still believe that the paper is a strong application paper!\\u201d\\n\\n---> We fully agree that hyperparameter optimization is an integral part of machine learning and reinforcement learning in general. For our application, it was the key to success, as we did not a priori know which architecture or state space size would work best. For this reason, we automatically searched a fairly flexible space that included pure RNNs, pure CNNs, and mixtures of these with an additional MLP. This level of parametrization is rarely laid out, so we do hope that you agree to at least some novelty in this regard. (Indeed, if you know of a reference that searched over a combination of RNNs and CNNs before we would be very grateful to know about it to not falsely claim novelty in this regard.) \\n\\nSpearmint and TPE are useful tools in general. We expect that for our 14-dimensional space with many integer choices TPE would work better than Spearmint, and BOHB is a more efficient multi-fidelity variant of TPE (also see the BOHB paper for large speedups over TPE and Spearmint: proceedings.mlr.press/v80/falkner18a/falkner18a.pdf); our modest contribution in this regard is to provide a case study for this existing tool.\\n\\n\\n# 2. [Training/Validation/Test split of the data sets].\\n\\\"Do I understand you correctly that you proposed a \\u2018standard\\u2019 training, evaluation, and test set for Rfam-Learn, which does does not exist for Eterna100 or Rfam-Taneda? This is useful if the split is well defined (e.g. if the distribution of certain sequence properties is equal in all three sets), but not a strong contribution. Is the dataset larger than existing datasets, more diverse, or does it include additional sequences? I suggest to more clearly define differences in either the main text or appendix and more clearly motivate why Rfam-Learn is superior to existing datasets.\\u201d\\n\\n---> Yes, indeed, we proposed such a standard training/evaluation/test split, and these do not exist for Eterna100 or Rfam-Taneda. As described in the added Appendix C, we selected a subset of the Rfam database v13.0 based on difficulty (measured by number of known solutions and time it took MCTS-RNA to solve them) and controlled the distribution of sequence lengths across splits.\\n\\nOur data sets consist of 65000, 100, and 100 target structures (for training, validation, and test, respectively), based on naturally occurring RNA sequences. In contrast, Rfam-Taneda and Eterna100 contain only 29 and 100 sequences respectively. While the former also is a subset of the Rfam database, the latter consists of handcrafted sequences only. 
We included both in our work as they serve as the default test sets in the community. Our data sets are a \\u201ccurated\\u201d selection of a larger corpus of natural RNA sequences allowing more data driven approaches to be applied. It is hard to compare RNA sequence datasets in terms of quantitative measures, but we tried to select an interesting collection that enables generalization across different RNA families. We hope this clarifies your questions regarding our new benchmark.\"}",
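To make the joint architecture and hyperparameter search discussed in this exchange concrete, the sketch below shows what such a search space could look like with the ConfigSpace library that BOHB consumes. All names, ranges, and the conditional embedding dimension are illustrative assumptions, not the paper's actual 14-dimensional space.

```python
# Hypothetical sketch of a joint architecture + RL-hyperparameter search
# space in the style consumed by BOHB; names and ranges are illustrative.
import ConfigSpace as CS
import ConfigSpace.hyperparameters as CSH

cs = CS.ConfigurationSpace()

# Architectural choices: the space can mix convolutional and recurrent parts.
num_conv_layers = CSH.UniformIntegerHyperparameter("num_conv_layers", 0, 2)
num_lstm_layers = CSH.UniformIntegerHyperparameter("num_lstm_layers", 0, 2)
embedding = CSH.CategoricalHyperparameter("embedding", ["binary", "learned"])
embedding_dim = CSH.UniformIntegerHyperparameter("embedding_dim", 1, 8)

# State-description and training hyperparameters.
state_radius = CSH.UniformIntegerHyperparameter("state_radius", 0, 32)
learning_rate = CSH.UniformFloatHyperparameter(
    "learning_rate", 1e-6, 1e-3, log=True)

cs.add_hyperparameters([num_conv_layers, num_lstm_layers, embedding,
                        embedding_dim, state_radius, learning_rate])
# The embedding dimension is only active when a learned embedding is chosen.
cs.add_condition(CS.EqualsCondition(embedding_dim, embedding, "learned"))

print(cs.sample_configuration())  # one candidate configuration to evaluate
```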
"{\"title\": \"Reply to minor comments 2/2\", \"comment\": \"# 3. Hyperparameter optimization\\n\\\"Please highlight in the main text that hyperparameters were only optimized for LEARNA and that other methods might also benefit by rigorously optimizing both model as well as optimization hyperparameters.\\u201d\\n\\n---> Thanks, we will definitely highlight in the final version which methods were optimized on what data set, and that other methods could benefit from that as well. (The server does not allow us to upload a new version at this time.)\\n\\n\\n# 7. How does the accuracy and runtime scale depending on the sequence (structure) length?\\n\\\"Thanks for the additional runtime analysis. Does each dot correspond to one target structure in the test set? Why are you showing the \\u2018minimum solution time\\u2019? Does the runtime vary over multiple runs? If so, it is more fair to show the average run time.\\u201d\\n\\n---> Yes, indeed, every point corresponds to a single target structure in the test set. We decided to plot the minimum (1) to account for missing data due to sequences not being solved within the time limit by individual runs and (2) since the minimum is used as the benchmarking criterion throughout the related RNA Design literature. However, we have addressed the average performance in Figure 3 (performance over time) which (also) shows the average solution time and in Tables 6-8 which list the number of solved sequences for various numbers of runs. Our approach performs well in both of these regards.\\n\\n\\n# 8. How sensitive is the model performance depending on the context size \\\\kappa for representing the current state?\\n\\\"Given that sequences can be hundreds of nucleotides long, I agree that RNNs would be slow and sensitive to exploding/vanishing gradients. You can consider non-recurrent models such as dilated CNNs or transformers in the future.\\u201d\\n\\n---> Thanks, we will adjust the axis labels for the plots for the final version to be more consistent with the text. The sequence length is indeed challenging, and thank for your suggestion for future work, we\\u2019ll include these models in the ones we are planning to study next.\\n\\n\\n# 9. Local improvement step\\n\\\"Thanks for clarifying the local improvement step (LIS). Figure 9 indicates LIS clearly boost performance, which is an important finding. Can you highlight this in the main text? Are other methods also likely to benefit from a post-hoc LIS?\\u201d\\n\\n---> Thanks, we did mention the importance of this step in our ablation study in Section 6.2 where we discuss Figure 9, but will do so more explicitly in the final version. The way we view this step, it is a very limited local search applied when proposed sequences almost folds into the target structure; most of the other methods we compare against are local search methods that take a long time until they get close to the target structure; we would expect that applying this local improvement step would likely slow them down. We do, however, believe that this step could also benefit other generative models, such as MCTS-RNA, but we have not tried to incorporate it into MCTS-RNA; we will point out the possibility of doing so more explicitly in the final version.\"}",
"{\"title\": \"Manuscript clearly improved; only few minor comments.\", \"comment\": \"I appreciate that you clearly addressed all comments and revised your manuscript! I have only few remaining comments.\\n\\n# 1a. Hyper-parameter optimization\\nI still believe that defining parameters of the neural network architectures in addition to optimization parameters is not a strong methodological contribution. This is rather common practice in reinforcement learning although often not described in detail in manuscripts. Methods for optimizing both discrete and continuous hyper-parameter had been described before, including Spearmint or Hyperopt. That said, I still believe that the pap1` er is a strong application paper!\\n\\n# 2. [Training/Validation/Test split of the data sets].\\nDo I understand you correctly that you proposed a \\u2018standard\\u2019 training, evaluation, and test set for Rfam-Learn, which does does not exist for Eterna100 or Rfam-Taneda? This is useful if the split is well defined (e.g. if the distribution of certain sequence properties is equal in all three sets), but not a strong contribution. Is the dataset larger than existing datasets, more diverse, or does it include additional sequences? I suggest to more clearly define differences in either the main text or appendix and more clearly motivate why Rfam-Learn is superior to existing datasets.\\n\\n# 3. Hyperparameter optimization\\nPlease highlight in the main text that hyperparameters were only optimized for LEARNA and that other methods might also benefit by rigorously optimizing both model as well as optimization hyperparameters.\\n\\n# 7. How does the accuracy and runtime scale depending on the sequence (structure) length?\\nThanks for the additional runtime analysis. Does each dot correspond to one target structure in the test set? Why are you showing the \\u2018minimum solution time\\u2019? Does the runtime vary over multiple runs? If so, it is more fair to show the average run time.\\n\\n# 8. How sensitive is the model performance depending on the context size \\\\kappa for representing the current state? \\nGiven that sequences can be hundreds of nucleotides long, I agree that RNNs would be slow and sensitive to exploding/vanishing gradients. You can consider non-recurrent models such as dilated CNNs or transformers in the future.\\n\\nThanks clarifying that \\\\kappa corresponds to the \\u2018state_radius\\u2019 in Appendix I. For consistency, I suggest to change the x-axis title to \\u2018state_radius \\\\kappa\\u2019 or \\u2018\\\\kappa\\u2019.\\n\\n# 9. Local improvement step\\nThanks for clarifying the local improvement step (LIS). Figure 9 indicates LIS clearly boost performance, which is an important finding. Can you highlight this in the main text? Are other methods also likely to benefit from a post-hoc LIS?\"}",
"{\"title\": \"Detailed replies\", \"comment\": \"Thanks for your helpful comments and questions. Thanks also for your positive feedback on our work in general, our experiments, the significance of our approach for therapeutics and other practical use cases and for characterizing our work as interesting to ICLR as an application area. We would like to comment on your suggestions, comments and questions in the following.\\n\\n\\n\\u201c1. I was a bit confused by Table 1 until reading the prose at the bottom of page 7 indicated Table 1 is presenting percentages, not integer quantities.\\u201d\\n\\n--> Reviewing Table 1, we agree that it could be confusing -- its caption did mention that all entries represent percentages and not total values, but this was unnecessarily indirect, and we now reworked the tables to include a percentage symbol to make it clearer.\\n\\n\\n\\u201c2. The local improvement step is not very clearly explained. Are all combos tried across all mismatched positions, or do we try each mismatched position independently holding the others to their predicted values? What value of \\\\xi did you end up using? It seems like this is essential to getting good performance.\\u201d\\n\\n--> Thanks, we agree that the local improvement step should be described more clearly and that it is an important part of our approach (as the empirical evidence in our ablation study suggests). We have since reworked the corresponding paragraph and included pseudocode (Appendix A). It works as follows: we exhaustively try all possible nucleotide assignments for the mismatched positions which takes at most 4^|differing_sites| additional folds. The value of \\\\xi we used was 5, i.e., we used the local improvement step if the number of differing sites was at most 4. This was set early on based on runtime considerations and preliminary experiments and was not part of our hyperparameter optimization; thank you for the detailed reading of our paper and pointing out this missing value, we have added it now.\\n\\n\\n\\u201c3. It is completely unclear to me what the 'restart option' does.\\u201d\\n\\n--> Thanks for pointing out this missing information. Since RL algorithms are prone to getting stuck in local minima, we decided to employ occasional restarts (i.e., reinitialization) in our strategies. We now describe this in the revised version in Section 5. For LEARNA and for Meta-LEARNA-Adapt, this makes a difference, whereas for Meta-LEARNA it does not since Meta-LEARNA is directly sampling from the model without updating it (which is equivalent to restarting at each step)\\n\\n\\n\\u201c4. Using RL in this specific application setting seems relatively new (though also explored by RL-LS in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6029810/).\\u201d \\n\\n--> Thanks for this comment! Indeed, the reinforcement learning guided local search (RL-LS) was developed in parallel and independently from LEARNA (as mentioned in our discussion on RL-LS in Section 2.2 of our initial submission; now discussed in Section 3). However, the two approaches differ a lot: although both approaches employ RL to RNA Design, Eastman et al. follows the common approach of using a local search strategy for solving the RNA Design problem, while we try to tackle the problem with a generative model.\\n\\n\\n\\u201c5. On the other hand, the approach used doesn't seem to be substantially different than anything else typically used for policy gradient RL. 
The meta-learning approach is interesting, though again not too different from multi-task approaches (though these are perhaps less common in RL than in general deep learning).\\u201d\\n\\n--> We agree that the policy gradient approach we use is standard, but using meta-learning in this context is already less common. We would also like to repeat the point concerning the novelty of our joint optimization that we made in the general reply to all reviewers. We copied this here for convenience:\\n\\n<\\u201cTo the best of our knowledge, our paper is the first case study on the joint optimization of the architecture of the policy network (including both recurrent connections and convolutions in a single search space), the state representation, and the hyperparameters of an RL algorithm. In fact, we are not even aware of *any* other previous work on neural architecture search (NAS) for RL. Also, while there is of course a lot of work on NAS for CNNs and NAS for RNNs individually, we are not aware of any other previous NAS work that tackles a search space including both convolutions and recurrent units at the same time (i.e., with NAS choosing the best combination of the two). Finally, we are not aware of any previous work on NAS for meta-learning (other than learning a cell architecture and transferring that cell to a different dataset). We do believe that these are clear points in favor of our paper\\u2019s novelty, and we should have made these clearer in the submitted version of our paper; we\\u2019ve fixed this now in Section 5 and in the introduction.\\u201d>\\n\\n\\nThanks again for your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.\"}",
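As a concrete illustration of the local improvement step described in the reply above, the following is a minimal sketch; `fold` stands in for an external secondary-structure folding routine (e.g. ViennaRNA), and the exact control flow is an assumption rather than the authors' implementation.

```python
# Minimal sketch of the local improvement step; `fold` is a placeholder for
# an external folding routine (e.g. ViennaRNA) and all details are assumed.
from itertools import product

def local_improvement(candidate, target, fold, xi=5):
    """Exhaustively refine the candidate if fewer than xi sites mismatch."""
    structure = fold(candidate)
    diff = [i for i, (a, b) in enumerate(zip(structure, target)) if a != b]
    if not diff or len(diff) >= xi:
        return candidate, structure
    # At most 4^len(diff) additional folds over the mismatched positions.
    for assignment in product("ACGU", repeat=len(diff)):
        seq = list(candidate)
        for pos, nucleotide in zip(diff, assignment):
            seq[pos] = nucleotide
        seq = "".join(seq)
        folded = fold(seq)
        if folded == target:
            return seq, folded  # exact match to the target structure
    return candidate, structure  # keep the original rollout otherwise
```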
"{\"title\": \"Detailed replies\", \"comment\": \"Thanks for the suggested improvements, the insightful comments and questions! Thanks also for the positive feedback on the text of the paper, references and motivation. In the following we provide detailed replies:\\n\\n\\n\\u201c1. What is the star (*) superscript for? Was expecting the length of the RNA sequence instead.\\u201d\\n\\n--> Thank you for pointing out this undefined and potentially confusing use of notation. The Kleene Operator (*) applied to a set M yields a set of all finite-length sequences based on M, and we used it since RNA Structures have variable length. But we do agree that this can be confusing and made changes to talk about a specific structure w and then use N^|w| as you suggested. \\n\\n\\n\\u201c2. Same on p4, when introducing the notation of your decision process $ D_w $, explicitly introduce all the ingredients.\\u201d\\n\\n--> We agree with you and revised the definition of the undiscounted decision process. We now explicitly name the components of the quadruple D_w and also refer to the specifics in the paragraphs following the definition of D_w.\\n\\n\\n\\u201c3. in Equation (2) on p4, maybe clarify the notation with '.', '(' and ')' for example as the reader could really struggle.\\u201d\\n\\n--> We have looked at this again and changed the equation, making it easier to parse for the reader. We have also included a verbatim \\u201cdot\\u201d and \\u201copening bracket\\u201d to not confuse the reader by the notation.\\n\\n\\n\\u201c4. I didn't really understand the message in Section 4, not being an expert in the field. Could you clarify your contribution here?\\u201d\\n\\n--> Thanks for asking about this! As detailed in our general reply to all reviewers, this section breaks novel ground concerning the joint optimization of neural architectures and hyperparameters, joint search over combinations of recurrent and convolutional layers in the same search space, neural architecture search for RL, and neural architecture search for meta-learning. In the interest of brevity, we refer to the detailed reply to all reviewers above.\\n\\n\\n\\u201c5. your 'Ablation study' in Section 5.2; does it correspond to true uncertainty/noise that could be observed in real data?\\u201d\\n\\n--> In our ablation study, we disable one functional component of our approach at a time in order to study its influence; incorporating ablations in empirically evaluated work is important to find out whether all proposed components are necessary and contribute to the final performance. Our ablation study is performed on the test split of our introduced dataset, which as we point out in the heading of Section 5 of our initial submission, has been generated from sequences observed in living organisms as listed in the Rfam 13.0 database; it is not used to optimize hyperparameters but is a post hoc evaluation.\\n\\n\\n\\u201c6. why a new benchmark data set, when there exist good ones to compare your method to, e.g. in competitions like CASP for proteins?\\u201d\\n\\n--> We report our results on two widely used benchmarks which were also used in the work we compare to but unfortunately only provide test sets (no training/validation/test split). To the best of our knowledge, we introduce the first benchmark with an explicit training/validation/test split. The reviewer is right in that there exist other and good data sources, but to the best of our knowledge not in the form of competitions. 
To mention two databases by name:\\n\\n* the STRAND database (http://www.rnasoft.ca/strand/) that currently holds 4666 known RNA secondary structures\\n* the FRABASE 2.0 database (https://bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-11-231) with 2753 entries of fragments of secondary structures\\n\\nNeither database has been used by the publications we compare to, and neither can satisfy the size and sequence diversity requirements for our meta-learning approach and future research (especially for methods needing a large training set). The Rfam 13.0 database we use here for generating our new training-, validation- and test set is large enough to yield three distinct datasets of meaningful sizes and diversity.\\n\\n\\n\\u201c7. do you make your implementation available?\\u201d\\n\\n--> Thanks for the question; indeed, we strongly believe in sharing code (as well as data) to reproduce scientific findings. To stand by this opinion, we had included a note in the conclusion of our initial submission that we will make all of our code and data available upon acceptance of our paper.\\n\\n\\n\\u201c8. quite like the clarification of the relationship of your work to that of Eastman et al. 2018. Could you also include discussions to other papers, e.g. Chuai et al. 2018 Genome Biol and Shi et al. 2018 SentRNA on arXiv\\u201d\\n\\n--> Thanks for the positive feedback regarding our discussion of the relationship of our work to that of Eastman et al. 2018, and for bringing the related work to our attention. We included discussions in our related work section.\\n\\n\\nThanks again for all your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.\"}",
"{\"title\": \"Detailed replies 1/3\", \"comment\": \"Thanks for your positive feedback regarding our motivation and general writing, the characterization of our paper as a good application paper and for your comments, questions and helpful suggestions. In the following we reply to your comments and clarify some of the points:\\n\\n\\n\\u201c1a. The methodological contributions are limited. Performing hyper-parameter optimization is in my eyes not novel, but common practice in the field.\\u201d\\n\\n--> We agree that hyperparameter optimization is clearly standard in RL, but our work goes much further than that. A joint optimization over neural architectures and hyperparameters, to the best of our knowledge, is novel in the field of RL (and is also not common in supervised learning). We would also like to repeat point (2) from our general reply to all reviewers concerning novelty, copied here for convenience:\\n\\n<\\u201cTo the best of our knowledge, our paper is the first case study on the joint optimization of the architecture of the policy network (including both recurrent connections and convolutions in a single search space), the state representation, and the hyperparameters of an RL algorithm. In fact, we are not even aware of *any* other previous work on neural architecture search (NAS) for RL. Also, while there is of course a lot of work on NAS for CNNs and NAS for RNNs individually, we are not aware of any other previous NAS work that tackles a search space including both convolutions and recurrent units at the same time (i.e., with NAS choosing the best combination of the two). Finally, we are not aware of any previous work on NAS for meta-learning (other than learning a cell architecture and transferring that cell to a different dataset). We do believe that these are clear points in favor of our paper\\u2019s novelty, and we should have made these clearer in the submitted version of our paper; we\\u2019ve fixed this now in Section 5 and in the introduction.\\u201d>\\n\\n\\n\\u201c1b. Related work; It would me more informative if the authors compared reinforcement learning to other approaches for (conditional) sequence generations, e.g. RNNs, autoregressive models, VAEs, or GANs, which have been previously reported for biological sequence generation (e.g. http://arxiv.org/abs/1804.01694).\\u201d\\n\\n--> Thanks for the helpful comment on the interesting work in the fields of protein design and biological sequence generation. In our revised related work section we did include a discussion on the general field of matter engineering and reference a very recent review on generative approaches for this field. We did not experiment with VAEs or GANs (with appendix, our paper is already 30 pages...) but consider that future work. However, concerning RNNs, as described in Section 5, these were in fact part of our design space and were selected by the joint optimization process for two out of three final configurations used in our experiments (see Table 4 in Appendix A of our initial submission; in the revised version this is Table 5 in Appendix E).\\n\\n\\n\\u201c2. [Training/Validation/Test split of the data sets]\\u201d and \\n\\u201c12. The authors should more clearly motivate in the introduction why they created a new dataset.\\u201d\\n\\n--> The benchmarks used in the recent RNA Design literature Eterna100 (100 datapoints) and Rfam-Taneda (29 datapoints) do not have a train/validation/test split associated with them. (As ML researchers, we were surprised about this, too...) 
Hence, the need for a training and validation set of adequate size and diversity motivated us to introduce Rfam-Learn, which to the best of our knowledge is the first RNA Design benchmark with an explicit training/validation/test split.\\n\\nWe optimized each of our approaches using only our own validation set (Rfam-Learn-Validation) and for our meta learning approach only used our own training set (Rfam-Learn-Train). To measure the final performance, as well as the transferability of the found architecture, hyperparameters, and the trained policy (Meta-LEARNA), the best configuration of each of our methods was then tested on Eterna100, Rfam-Taneda and Rfam-Learn-Test, and they achieve state-of-the-art results on all of them.\\n\\nWe incorporated changes to clarify the above points and we thank you for the suggestion to use a table to display benchmark information as it indeed conveys the information more clearly.\"}",
"{\"title\": \"Detailed replies 2/3\", \"comment\": \"\\u201c3. Hyperparameter optimization of other methods; Did the authors also optimize the most important hyperparameters of RL-LS and other methods? Otherwise it is unclear if the performance gain is due to hyperparameter optimization or the method itself.\\u201d\\n\\n--> We assess the performance of all methods on three test sets, where our method was trained and optimized using a single designated dataset for training and validation. The other methods we compare to either do not have clear/exposed hyperparameters (RNAinverse), were optimized by the original authors either also on a subset of the Rfam database (AntaRNA, and MCTS), or optimized on a non-disclosed dataset (RL-LS).\\nAdditionally, the authors of RL-LS, state in their paper: \\u201dA more rigorous hyperparameter search might improve our results somewhat, but would probably not dramatically change the model's performance.\\u201d.\\n\\nOur empirical evaluation focuses more on generalization rather than optimizing the hyperparameters to every dataset. That is why we optimized each of our approaches (LEARNA and Meta-LEARNA) using only our own validation set. For our meta-learning approaches (Meta-LEARNA, Meta-LEARNA-Adapt) the single best configuration was then evaluated on the three test sets without modification and still surpassed the state-of-the-art. Potentially, all methods could be improved by further optimization on each type of dataset, but this was not our focus.\\n\\n\\n\\u201c4. The time measurement (x-axis figure 3) is unclear. Is it the time that methods were given to solve a particular target structure and does figure 3 show the average number of solved structures in the test for the time shown on the x-axis?\\u201d and\\n\\u201c6. The term \\u2018run\\u2019 (\\u201cunreliable outcomes in single runs\\u201d, section 4) is unclear. Is it a single sample from the model (one rollout), a particular hyperparameter configuration, or training the model once for a single target structure? This must be clarified for understanding the evaluation.\\u201d\\n\\n--> Thanks, you are right to point out that these two points were unclear. We believe this was due to an inconsistent usage of the term \\u201crun\\u201d. In Section 4 of our initial submission (joint architecture and hyperparameter optimization) we referred with \\u201crun\\u201d to a full optimization of the policy and in Section 5 of our initial submission (experiments) we referred with \\u201crun\\u201d to an \\u201cevaluation run\\u201d which consists of evaluating a given method once on each target structure in the corresponding benchmark. An evaluation run can be visualized by plotting the number of solved target structures across the time spent on each particular target structure. Existing benchmarks for RNA Design consider a number of evaluation runs and use the total number of target structures that were solved in at least one of these evaluation runs as the objective. Hence, Figure 3 visualizes aggregates of all evaluation runs: On the left side of Figure 3 we plot the total number of target structures that were solved in at least one evaluation run across time spent on each particular target structure, and similarly, the right side of Figure 3 shows the average number of solved target structures. Thank you very much for pointing out this issue, we disambiguated the terms and worked on clarity.\\n\\n\\n\\u201c5. 
Were all methods compared on the same hardware (section 5; 20 cores; Broadwell E5-2630v4 2.2 GHz CPUs) and can they be parallelized over multiple CPU or GPU cores? This is essential for a fair runtime comparison.\\u201d\\n\\n--> We agree that this is essential for a fair comparison and, as we noted in the header of our experiments section in our initial submission, all computations were done on the same listed CPU model. As mentioned in our initial submission, the training stage of Meta-LEARNA uses 20 cores (we use parallel PPO), but at validation/test time all methods were only allowed a single core (using core binding).\\n\\n\\n\\u201c7. How does the accuracy and runtime scale depending on the sequence (structure) length?\\u201d\\n\\n--> Thank you for asking this important question. We have now included plots for solution times across sequence lengths (Appendix J), which clearly indicate that our approaches scale very well and are not affected much by increasing sequence length.\"}",
"{\"title\": \"Detailed replies 3/3\", \"comment\": \"\\u201c8. How sensitive is the model performance depending on the context size \\\\kappa for representing the current state? Did the authors try to encode the entire target structure with, e.g. recurrent models, instead of using a window centered on the current position?\\u201d\\n\\n--> Thanks for the suggestion. An RNN is already included in our search space, and was indeed selected by our joint architecture search and hyperparameter optimization. We have not yet experimented with encoding the entire target structure with an RNN, since having to backpropagate through that RNN at each time step of our agent would lead to a substantial increase of computational cost, be harder to train and increase the number of hyperparameters. Having said that, we do think this is a good idea if it can be made computationally efficient, e.g., by learning the embedding offline (although the training signal for that would need to be defined first); since this is not straightforward we leave it to future work.\\n\\nIn terms of the importance of the context size, our new hyperparameter importance in Appendix I indicates that the context size (state space radius) \\\\kappa does not appear to be very important.\\n\\n\\n\\u201c9. The authors should more clearly describe the local optimization step (section 3.1; reward). Were all nucleotides that differ mutated independently, or enumerated exhaustively? The latter would have a high runtime of O(3^d), where d is the number of nucleotides that differ. When do the authors start with the local optimization?\\u201d\\n\\n--> We agree that the local improvement step should be described more clearly: we revised the reward paragraph and included pseudocode for computing the reward using the local improvement step (Appendix A). It works as follows: After the policy rollout we fold the candidate solution and compare it to the target structure, if less than \\\\xi sites differ we perform this local improvement step in order to compute the reward. The value of \\\\xi is not part of the hyperparameter optimization and based on the runtime costs and preliminary experiments we set xi=5, i.e., we used the local improvement step if the number of differing sites was at most 4. Keeping this number low was indeed important because of the computational complexity mentioned by the reviewer (it\\u2019s actually O(4^d), with d<=4). \\n\\n\\n\\u201c10. The authors should replace \\u2018450x\\u2019 faster in the abstract by \\u2018clearly\\u2019 faster since the evaluation does not show that LEARNA is 450x faster than all other methods.\\u201d\\n\\n--> Thank you for the comment, we changed the abstract to say that our approach achieves new state-of-the-art performance on all benchmarks while also being orders of magnitudes faster in reaching the previous state-of-the-art performance. We note that these speedups (including the 450x one on the Eterna100 benchmark, Figure 3 (top)) can clearly be seen in the evaluation plots.\\n\\n\\n\\u201c11. Does \\u201cAt its most basic form\\u201d (introduction) mean that alternative RNA nucleotides exist? If so, this should be cited.\\u201d\\n\\n--> Thanks for this question. With \\u201cAt its most basic form\\u201d we refer to the most basic structural form of RNA, which is a sequence of nucleotides. We have since clarified the phrasing to \\u201cAt its most basic structural form\\u201d.\\n\\n\\n\\u201c13. 
The authors should mention in section 2.1 that the dot-bracket notation is not the only notation for representing RNA structures (https://www.tbi.univie.ac.at/RNA/ViennaRNA/doc/html/rna_structure_notations.html).\\u201d and\\n\\u201c14a. The authors should define the Hamming distance (section 2.1).\\u201d\\n\\n--> We have included references; thank you for your comments.\\n\\n\\n\\u201c14b. Do other distance metrics [than the Hamming distance] exist?\\u201d\\n\\n--> While not formally metrics, we have experimented with the paired-unpaired-ratio and derivatives of the Hamming distance. While also not a metric, the GC-content (which is the ratio of G and C nucleotides to the U and A nucleotides) has been used in the RNA Design literature (e.g. by antaRNA) as an additional objective.\\n\\n\\n\\u201c15. For the Traveling Salesman Problem (section 2.2) should the reward be the *negative* tour length?\\u201d\\n\\n--> You are of course right; thank you for reading our paper carefully and bringing this to our attention; we fixed it.\\n\\n\\n\\u201c16. The authors should more clearly describe the embedding layer (section 4). Are nucleotides one-hot encoded or represented as integers (0, 1 for \\u2018(\\u2018 and \\u2018.\\u2019)?\\u201d\\n\\n--> Thank you for this comment; we agree and have included a clearer description. For representing nucleotides, our automated reinforcement learning approach includes the choice between: 1) a binary encoding differentiating between paired and unpaired sites, and 2) a learned embedding layer whose dimension is a hyperparameter (only active if the learned embedding is selected). \\n\\n\\nThanks again for all your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.\"}",
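Two ingredients discussed in these replies, the kappa-centered state window (point 8) and the Hamming distance between dot-bracket strings (points 14a/14b), are simple enough to sketch; the padding symbol and exact slicing below are assumptions, not the authors' code.

```python
# Illustrative sketches of the windowed state and the Hamming distance;
# padding symbol and slicing conventions are assumptions.

def hamming_distance(a, b):
    """Number of positions at which two equal-length structures differ."""
    return sum(x != y for x, y in zip(a, b))

def state_window(target, t, kappa, pad="="):
    """The 2*kappa+1 symbols of the dot-bracket target centered at t."""
    padded = pad * kappa + target + pad * kappa
    return padded[t:t + 2 * kappa + 1]

target = "((((...))))"
print(hamming_distance(target, "((.(...).))"))  # 2
print(state_window(target, t=0, kappa=2))       # '==((('
```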
"{\"title\": \"Aspect of being an application paper and novelty of our methods\", \"comment\": \"We would like to thank all reviewers for their helpful comments! In response to them we performed additional analysis, updated the paper and now reply to all reviews at the same time in order to limit the overhead for the reviewers.\\n\\nSince these were comments several reviewers had, we would like to comment on (1) the aspect of being an application paper and (2) the novelty of our methods.\\n\\n(1) We are glad that several reviewers found the application we are tackling interesting. We would like to note that applications are specifically listed as relevant in the ICLR call for papers (https://iclr.cc/Conferences/2019/CallForPapers), including applications in computational biology and others. We believe that a strong application paper takes existing methods and applies them to an interesting and difficult problem of a certain significance. In the process, the formulation of the problem, and technical details need to be adjusted to make it work. Additionally, a thorough evaluation comparing the method to other state-of-the-art approaches from the field and analyzing the importance of components (e.g. via ablation) is vital. We feel that we accomplished these in our work, and our reviews also indicate that the reviewers agree.\\n\\n(2) Having said that, we in fact also believe that our work is novel in many ways other than this application. While hyperparameter optimization is clearly standard in RL, to the best of our knowledge, our paper is the first case study on the joint optimization of the architecture of the policy network, the state representation, and the hyperparameters of an RL algorithm. In fact, we are not even aware of *any* other previous work on neural architecture search (NAS) for RL. Also, while there is of course a lot of work on NAS for CNNs and NAS for RNNs individually, we are not aware of any other previous NAS work that tackles a search space including both convolutions and recurrent units at the same time (i.e., with NAS choosing the best combination of the two). Finally, we are not aware of any previous work on NAS for meta-learning (other than learning a cell architecture and transferring that cell to a different dataset). We do believe that these are clear points in favor of our paper\\u2019s novelty, and we agree with the reviewers that we should have made these much clearer in the submitted version of our paper; we are thankful to the reviewers\\u2019 comments and have fixed this now.\\n\\nWe would also like to note that, e.g., the popular population-based-training (PBT) method for tuning RL hyperparameters, is limited to optimizing hyperparameters that can be adapted during the optimization trajectory, while our approach also handles the tuning of other choices, such as the neural architecture and the state representation. 
As such, our paper can be viewed as an important step towards \\u201cautomated reinforcement learning\\u201d, applied to a real-world problem (which we also believe to be novel).\", \"we_made_the_following_changes_to_the_paper_in_response_to_the_reviews\": [\"Relating to (2) above, we clarified the novelty of our joint and automated architecture and hyperparameter search and added subsections to distinguish between the search space and the search procedure in the corresponding section.\", \"We added a parameter importance analysis to Section 6 (experiments) which supports the importance of the joint optimization of the policy network\\u2019s architecture, the environment parameters and the training hyperparameters.\", \"We explained our experimental protocol better, including more details on the used datasets from the literature and the dataset we compiled ourselves.\", \"We split our previous background section into two distinct sections, one for explaining the RNA-Design problem and one for discussing related work.\", \"We restructured the appendix, included plots that compare the performance of all approaches across different sequence lengths (Appendix J) and show the strong scaling of our approaches with sequence length, and added more analysis regarding our joint architecture and hyperparameter optimization.\", \"We incorporated clarification and discussion where indicated by the reviewers. We detail these changes in our responses to the individual reviewers.\"]}",
"{\"title\": \"Interesting application of RL to DNA, new SotA perf, some theoretical novelty\", \"review\": \"I'm happy with the revisions the authors have made, as I find that they call out the novel contributions a bit more explicitly. Specifically I see some novel work in the area of simultaneous multi-task/meta-RL and black box optimization of the policy net architectures. I don't think calling this NAS is justified; calling it bayesopt or black box opt is fair. NAS uses a neural net to propose experiments over structured graphs of computation nodes. This work appears to be simpler hyperparameter optimization.\\n\\n====\", \"quality\": \"The work is well done, and the experiments are reasonable/competitive, showcasing other recent work and outperforming.\", \"clarity\": \"I thought the presentation was tolerable. I was a bit confused by Table 1 until reading the prose at the bottom of page 7 indicated Table 1 is presenting percentages, not integer quantities. The local improvement step is not very clearly explained. Are all combos tried across all mismatched positions, or do we try each mismatched position independently holding the others to their predicted values? What value of zeta did you end up using? It seems like this is essential to getting good performance. It is completely unclear to me what the 'restart option' does.\", \"originality\": \"Using RL in this specific application setting seems relatively new (though also explored by RL-LS in https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6029810/). On the other hand, the approach used doesn't seem to be substantially different than anything else typically used for policy gradient RL. The meta-learning approach is interesting, though again not too different from multi-task approaches (though these are perhaps less common in RL than in general deep learning).\", \"significance\": \"Likely to be of practical utility in the inverse design space, specifically therapeutics, CRISPR guide RNA design, etc. Interesting to ICLR as an application area but probably not much theory/methods interest.\\n\\n\\nOn balance I lean slightly against accepting and think this is a better fit to either a workshop or a more domain-specific venue (MLHC http://mucmd.org/ for example).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"RNA sequence design with deep reinforcement learning\", \"review\": [\"This work tackles the difficult RNA design problem, i.e. that of finding a RNA primary sequence that is going to fold into a secondary/tertiary structure able to perform a desired biological function. More specifically, it used Reinforcement Learning (RL) to find the best sequence that will fold into a target secondary structure, using the Zuker algorithm and designing a new primary sequence 'from scratch'. A new benchmark data set is also introduced in the paper along .\", \"Questions/remarks:\", \"I struggle with your notations as soon as section 2.1. What is the star (*) superscript for? Was expecting the length of the RNA sequence instead. Same on p4, when introducing the notation of your decision process $ D_w $, explicitly introduce all the ingredients.\", \"in Equation (2) on p4, maybe clarify the notation with '.', '(' and ')' for example as the reader could really struggle.\", \"I didn't really understand the message in Section 4, not being an expert in the field. Could you clarify your contribution here?\", \"your 'Ablation study' in Section 5.2; does it correspond to true uncertainty/noise that could be observed in real data?\", \"why a new benchmark data set, when there exist good ones to compare your method to, e.g. in competitions like CASP for proteins?\", \"do you make your implementation available?\", \"quite like the clarification of the relationship of your work to that of Eastman et al. 2018. Could you also include discussions to other papers, e.g. Chuai et al. 2018 Genome Biol and Shi et al. 2018 SentRNA on arXiv?\", \"Altogether the paper reads well, seems to have adequate references, motivates and proposes 3 variations of a new algorithm for a difficult learning problem. Not being an expert in the field, I just can't judge about the novelty of the appraoch.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Partially unclear and minor methodological contributions, but good application paper overall\", \"review\": \"General comment\\n==============\\nThe authors used policy gradient optimization for generating RNA sequences that fold into a target secondary structure, reporting clear accuracy and runtime improvements over the previous state-of-the-art. The authors used BOHR for optimizing hyper-parameters and present a new dataset for evaluating RNA design methods. The paper is well motivated and mostly clearly written. However, the methodological contributions are limited and I have some important concerns about their evaluation. Overall, I feel it\\u2019s a good paper for an ICLR workshop or biological journal if the authors address the outstanding comments.\\n\\nMajor comments\\n=============\\n1. The methodological contributions are limited. The authors used existing approaches (policy gradient optimization and BOHR for hyperparameter optimization) but do not report new methods, e.g. for sequence modeling. Performing hyper-parameter optimization is in my eyes not novel, but common practice in the field. It would me more informative if the authors compared reinforcement learning to other approaches for (conditional) sequence generations, e.g. RNNs, autoregressive models, VAEs, or GANs, which have been previously reported for biological sequence generation (e.g. http://arxiv.org/abs/1804.01694).\\n\\n2. Did the authors split all three datasets (Eterna, Rfam-Taneda, Rfam-learn-test) into train, eval, and test set, trained their method on the training set, optimized hyper-parameters on the eval set, and measured generalization and runtime on the test set? This is not described clearly enough in section 5. I suggest to summarize the number of sequences for each dataset and split in a table.\\n\\n3. Did the authors also optimize the most important hyperparameters of RL-LS and other methods? Otherwise it is unclear if the performance gain is due to hyperparameter optimization or the method itself.\\n\\n4. The time measurement (x-axis figure 3) is unclear. Is it the time that methods were given to solve a particular target structure and does figure 3 show the average number of solved structures in the test for a the time shown on the x-axis? \\n\\n5. Were all methods compared on the same hardware (section 5; 20 cores; Broadwell E5-2630v4 2.2 GHz CPUs) and can they be parallelized over multiple CPU or GPU cores? This is essential for a fair runtime comparison.\\n\\n6. The term \\u2018run\\u2019 (\\u201cunreliable outcomes in single runs\\u201d, section 4) is unclear. Is it a single sample from the model (one rollout), a particular hyperparameter configuration, or training the model once for a single target structure? This must be clarified for understanding the evaluation.\\n\\n7. How does the accuracy and runtime or LEARNA scale depending on the sequence (structure) length?\\n\\n8. How sensitive is the model performance depending on the context size k for representing the current state? Did the authors try to encode the entire target structure with, e.g. recurrent models, instead of using a window centered on the current position?\\n\\n9. The authors should more clearly describe the local optimization step (section 3.1; reward). Were all nucleotides that differ mutated independently, or enumerated exhaustively? The latter would have a high runtime of O(3^d), where d is the number of nucleotides that differ. When do the authors start with the local optimization? 
\\n\\nMinor comments\\n=============\\n10. The authors should replace \\u2018450x\\u2019 faster in the abstract by \\u2018clearly\\u2019 faster since the evaluation does not show that LEARNA is 450x faster than all other methods.\\n\\n11. Does \\u201cAt its most basic form\\u201d (introduction) mean that alternative RNA nucleotides exist? If so, this should be cited.\\n\\n12. The authors should more clearly motive in the introduction why they created a new dataset.\\n\\n13. The authors should mention in section 2.1 that the dot-bracket notation is not the only notation for representing RNA structures (https://www.tbi.univie.ac.at/RNA/ViennaRNA/doc/html/rna_structure_notations.html).\\n\\n14. The authors should define the hamming distance (section 2.1). Do other distance metrics exist?\\n\\n15. For the Traveling Salesman Problem (section 2.2) should the reward be the *negative* tour length?\\n\\n16. The authors should more clearly describe the embedding layer (section 4). Are nucleotides one-hot encoded or represented as integers (0, 1 for \\u2018(\\u2018 and \\u2018.\\u2019)?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
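The local-improvement step that the LEARNA reviews above repeatedly ask about can be made concrete with a small sketch. The reading below, which exhaustively enumerates nucleotide assignments at the sites where the folded candidate mismatches the target dot-bracket structure, is only one plausible interpretation rather than the authors' confirmed implementation; the `fold` callable is a hypothetical stand-in for a secondary-structure predictor such as the Zuker algorithm.

```python
from itertools import product

NUCLEOTIDES = "GCAU"

def hamming(s1, s2):
    """Hamming distance between two equal-length dot-bracket strings."""
    return sum(a != b for a, b in zip(s1, s2))

def local_improvement(seq, target, fold):
    """Exhaustive local search over the mismatched sites of a candidate.

    `fold` maps a sequence to its predicted dot-bracket structure.
    Enumerating all 4**d assignments of the d mismatched sites is what
    drives the cost concern raised in the reviews (3^d if only strict
    alternatives per site are tried, 4^d as written here).
    """
    mismatched = [i for i, (a, b) in enumerate(zip(fold(seq), target)) if a != b]
    best_seq, best_dist = seq, hamming(fold(seq), target)
    for combo in product(NUCLEOTIDES, repeat=len(mismatched)):
        cand = list(seq)
        for i, nt in zip(mismatched, combo):
            cand[i] = nt
        cand = "".join(cand)
        dist = hamming(fold(cand), target)
        if dist < best_dist:
            best_seq, best_dist = cand, dist
        if best_dist == 0:  # candidate folds exactly into the target
            break
    return best_seq, best_dist
```

Whether the actual method enumerates sites jointly, mutates them independently, or triggers this step only below some distance threshold is exactly what the reviews ask the authors to specify.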
ryxyHnR5tX | Accelerated Sparse Recovery Under Structured Measurements | [
"Ke Li",
"Jitendra Malik"
] | Extensive work on compressed sensing has yielded a rich collection of sparse recovery algorithms, each making different tradeoffs between recovery condition and computational efficiency. In this paper, we propose a unified framework for accelerating various existing sparse recovery algorithms without sacrificing recovery guarantees by exploiting structure in the measurement matrix. Unlike fast algorithms that are specific to particular choices of measurement matrices where the columns are Fourier or wavelet filters for example, the proposed approach works on a broad range of measurement matrices that satisfy a particular property. We precisely characterize this property, which quantifies how easy it is to accelerate sparse recovery for the measurement matrix in question. We also derive the time complexity of the accelerated algorithm, which is sublinear in the signal length in each iteration. Moreover, we present experimental results on real world data that demonstrate the effectiveness of the proposed approach in practice. | [
"sparse recovery"
] | https://openreview.net/pdf?id=ryxyHnR5tX | https://openreview.net/forum?id=ryxyHnR5tX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"r1l9_rhEgN",
"HyeWSuL527",
"Hylg2MTYhQ",
"SJxbH3C4h7"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1545024881786,
1541199929025,
1541161640301,
1540840505089
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1503/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1503/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1503/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1503/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"The main idea of this paper is to use nearest neighbor search to to accelerate iterative thresholding based sparse recovery algorithms. All reviewers were underwhelmed by somewhat straightforward combination of existing results in sparse recovery and nearest-neighbor search. While the proposed method seems effective in practice, the paper has the feel of not being a fully publishable unit yet. Several technical questions were asked but no author feedback was provided to potentially lift this paper up.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Effective acceleration technique for sparsity regularized regression, but not complete enough\"}",
"{\"title\": \"This paper shows how to accelerate certain popular sparse recovery approaches under certain conditions. However, the contributions seem to be incremental and it is unclear how the technique significantly advance the state of the art.\", \"review\": \"Clarity: Paper is generally well written; however, certain theoretical statements (e.g. Theorem 1) are not very precise.\", \"originality\": \"Contribution seems to be incremental; the proposed method seems to be a straightforward concatenation of well-known existing results in sparse recovery and nearest-neighbor search.\", \"significance\": \"Unclear whether the techniques significantly advance the state of the art.\", \"quality\": \"Overall, I think this is a promising direction but the idea might not have fully fleshed out.\\n\\n----\", \"summary\": \"the paper proposes a scheme to accelerate popular sparse recovery methods that rely on hard thresholding (specifically, CoSaMP and IHT, but presumably other similar methods can also be used here). The key idea is that if the measurement matrix is normalized, then the k-sparse thresholding of the gradient update can be viewed as solving a k-nearest neighbor problem. Therefore, one can presumably use fast k-NN methods instead of exact NN methods. Specifically the authors propose to use the prioritized DCI method of Li and Malik.\", \"pros\": \"reasonable idea to use fast (sublinear) NN techniques in the k-sparse projection step.\", \"cons\": [\"It appears that the running time improvement over the baseline IHT (which has Otilde(mn) complexity) heavily depends on the intrinsic dimensionality of A. However, the authors do not characterize this.\", \"The authors neglect to mention in the paper that prioritized DCI has a pre-processing time of O(mn), so the final algorithm isn't really asymptotically faster.\", \"I cannot parse Theorem 1 (especially, the second sentence). Is epsilon the failure probability of DCI?\", \"Experimental results are far too synthetic. In real-life problems k itself is big, so there may be other bottlenecks (least squares, gradient updates, etc) and not necessarily the hard thresholding step.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"This paper proposes a unified framework for speeding up sparse regression algorithms by adapting fast nearest-neighbour search algorithms for updating the support.\", \"review\": \"The paper is very well-written, readable, with the ideas and derivations clearly explained.\\n\\nThe literature review is comprehensive and informative. I do feel however that the review could be improved, for example, by discussing the recent papers by Chinmay Hegde and Piotr Indyk on \\\"head\\\" and \\\"tail\\\" approximate projections to speed up recovery algorithms. The problem under study is indeed important and the contribution is interesting. \\n\\nMy biggest concern is that the technical contribution is too modest. Theorem 1 serves more as a decorative technical result (the assumption \\\"And for any vector v...\\\" seems out of the blue and too convenient) and the paper does not answer the many questions that come to mind here. For example, what is the intrinsic dimension of common random measurement matrices? Or how do any wrongly detected nearest neighbours propagate through the iterations of the algorithm? How does the measurement noise change the intrinsic dimension? We should intuitively lose stability in return for faster recovery. How would this be quantified in what you've proposed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Accelerated Greedy Sparse Recovery Review\", \"review\": \"The paper proposes a greedy-like algorithm for sparse recovery that uses nearest neighbors algorithms to efficiently identify candidates for the support estimates obtained at each iteration of a greedy algorithm. It assumes that the norms of the columns of the matrix A are one to be able to change the project-and-sort step into a nearest neighbors search.\\n\\nIt is not clear what the value of Fact 1 is, given that none of the sparse recovery algorithms discussed here actually performs ell0 norm minimization. Additionally, it is common in theoretical analysis of sparse recovery to assume that the columns of the matrix A have unit norm. In fact, the RIP implies that the columns of the matrix must have norm within delta of 1. Nonetheless, it would be useful to have a discussion of the effect that having non-unit column norms would have on the proposed approach.\\n\\nSimilarly, Fact 2 is almost self-evident; I suggest to discard the proof.\\n\\nThe equivalence of Definition 1 and the statement involving ps and qs needs to be shown more clearly. The statement in Definition 1 is given in terms of distances (ball radiuses), not counts of neighbors.\\n\\nI suggest swapping the use of CoSaMP and AIHT - the theoretical results of the paper refer to AIHT, so it is not clear why the algorithm itself is relegated to the supplementary material.\\n\\nIt is not clear how d0 is to be computed to implement Accelerated AIHT.\\n\\nFor Theorem 1, the authors should comment on when the assumption \\\"xtilde(t) converges linearly to a k-sparse signal with rate c\\\".\\n\\nIn Figures 1 and 2, does \\\"residual\\\" refer to the difference between x and xtilde, or b and Axtilde?\", \"minor_comments\": \"Typo in page 5 \\\"\\u00bf\\\"\\nGrammar error in page 6 \\\"characterizing of the difficulty\\\".\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
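The pivotal observation in the reviews above, that with unit-norm columns the hard-thresholding step reduces to a k-nearest-neighbour query, is easy to verify numerically. The sketch below is a minimal illustration using brute-force distances, not the paper's method: per the reviews, the point of the paper is to replace this brute-force query with a sublinear index such as prioritized DCI, and the handling of the current support inside a full IHT/CoSaMP iteration is a detail deliberately omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 64, 512, 8

# Measurement matrix with unit-norm columns (the normalisation the NN
# reformulation needs; RIP matrices satisfy it up to delta).
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)

v = rng.standard_normal(m)  # stands in for the residual b - Ax inside IHT

# Hard-thresholding view: keep the k columns with largest |<a_j, v>|.
top_by_corr = set(np.argsort(-np.abs(A.T @ v))[:k])

# k-NN view: ||a_j - v||^2 = 1 + ||v||^2 - 2<a_j, v>, so a large positive
# correlation means a_j is close to v and a large negative correlation
# means a_j is close to -v; large |<a_j, v>| is a k-NN query on {v, -v}.
d_pos = np.linalg.norm(A - v[:, None], axis=0)
d_neg = np.linalg.norm(A + v[:, None], axis=0)
top_by_nn = set(np.argsort(np.minimum(d_pos, d_neg))[:k])

assert top_by_corr == top_by_nn
```

Reviewer 1's runtime caveat follows directly: only this selection step speeds up, while the gradient update and any least-squares solve inside CoSaMP remain untouched.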
HJgJS30qtm | REVISTING NEGATIVE TRANSFER USING ADVERSARIAL LEARNING | [
"Saneem Ahmed Chemmengath",
"Samarth Bharadwaj",
"Suranjana Samanta",
"Karthik Sankaranarayanan"
] | An unintended consequence of feature sharing is the model fitting to correlated tasks within the dataset, termed negative transfer. In this paper, we revisit the problem of negative transfer in a multitask setting and find that its corrosive effects are applicable to a wide range of linear and non-linear models, including neural networks. We first study the effects of negative transfer in a principled way and show that previously proposed counter-measures are insufficient, particularly for trainable features. We propose an adversarial training approach to mitigate the effects of negative transfer by viewing the problem in a domain adaptation setting. Finally, empirical results on multi-task attribute prediction on the AWA and CUB datasets further validate the need for correcting negative sharing in an end-to-end manner. | [
"Negative Transfer",
"Adversarial Learning"
] | https://openreview.net/pdf?id=HJgJS30qtm | https://openreview.net/forum?id=HJgJS30qtm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"rygMyJRTyV",
"rylr-32cA7",
"HyelBNg92m",
"rklt7bDY2X",
"SJlOAEjunQ"
],
"note_type": [
"meta_review",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544572634278,
1543322620602,
1541174328185,
1541136672829,
1541088463585
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1502/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1502/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1502/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1502/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1502/AnonReviewer1"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes reducing so called \\\"negative transfer\\\" through adversarial feature learning. The application of DANN for this task is new. However, the problem setting and particular assumptions are not sufficiently justified. As commented by the reviewers and acknowledged by the authors there is miscommunication about the basic premise of negative transfer and the main assumptions about the target distribution and it's label distribution need further justification. The authors are advised to restructure their manuscript so as to clarify the main contribution, assumptions, and motivation for their problem statement.\\n\\nIn addition, the paper in it's current form is lacking sufficient experimental evidence to conclude that the proposed approach is preferable compared to prior work (such as Li 2018 and Zhang 2018) and lacks the proper ablation to conclude that the elimination of negative transfer is the main source of improvements. \\n\\nWe encourage the authors to improve these aspects of the work and resubmit to a future venue.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Refinement of objective and comparison against prior work needed\"}",
"{\"title\": \"Thanks for your comments\", \"comment\": \"We thank the reviewers for the constructive remarks on our idea and pointing out some relevant literature. Based on your comments, we will revamp the presentation and articulation of the paper for a future venue.\\n\\nThe concerns of the reviewers can be summarized as follows with our brief response (to foster a discussion):\\n\\n-- Why we chose to use the term \\\"negative transfer\\\"?\\nWe used the term \\\"negative transfer\\\" to describe the problem emerging from correlated tasks in multi-task learning which has been looked at in earlier papers, especially in [1]. These papers propose using regularization [1,5] to prevent sharing features among tasks which are not related to each other. Further, we find this problem is not confined to multi-task learning but can extend to any supervised learning approach. This problem setting was previously addressed as \\\"negative transfer\\\" in [2,3,4].\\u00a0\\n\\nIt seems \\\"negative transfer\\\" term has different meaning in domain adaptation. We agree with the reviewers that there is an ambiguity in the meaning of \\\"negative transfer\\\" in the community. We shall explicitly address this in a future submission.\\u00a0\\n\\n-- Differentiate from DANN (Ganin & Lempitsky '15)\\nDANN is a domain adaptation technique that uses gradient reversal to explicitly prevent encoding of domain information in the feature representation. As discussed in Section 3.2 (paragraph 2), we do not have access to unlabeled instances of a \\\"target domain\\\" in this work. Further, we deal with large number of tasks and corresponding adversarial tasks (here AwA dataset: 85 attributes, CUB dataset: 312 attributes) by using a novel adversarial task weighting scheme together with gradient reversal proposed by Ganin and Lempitsky'15.\\n\\nTo reiterate the contributions of this work,\\u00a0\\n - We draw the attention of the community to \\\"negative transfer\\\" problem that was previously looked at before the deep learning era in all supervised learning problems. Further, we show the limitation of previously proposed approaches to tackle negative transfer using various regularization methods.\\u00a0\\n - We pose the negative transfer problem as an instance of domain adaptation with strong assumptions. We show that DANN can then be used to prevent negative transfer in this setting. We then propose a feature selection variant of adversarial learning, that can also tackles negative sharing.\\u00a0\\n - We empirically show improvement in attribute prediction (on unknown classes)\\u00a0 on two public datasets over a known protocols [1].\\u00a0\\n\\n-- Clarification on the experimental protocol\\n - our models use ResNet101 as a base model with one trainable layer (of size 500).\\n - baseline model is multitask learning without adversarial arms.\\n - attribute prediction using correlation does not generalize (body-color and wing-color correlation may not be applicable to a different bird)\\n - we shall have illustrations with visualization of activation patterns in a future draft.\", \"references\": \"[1] Dinesh\\u00a0 Jayaraman,\\u00a0 Fei\\u00a0 Sha,\\u00a0 and\\u00a0 Kristen\\u00a0 Grauman. Decorrelating\\u00a0 semantic\\u00a0 visual\\u00a0 attributes\\u00a0 by resisting\\u00a0 the\\u00a0 urge\\u00a0 to\\u00a0 share.\\u00a0 CVPR 2014\\n[2] Lee, Giwoong, Eunho Yang, and Sung Hwang. Asymmetric multi-task learning based on task relatedness and loss. International Conference on Machine Learning. 
2016.\\n[3] Hae Beom Lee, Eunho Yang, Sung Ju Hwang. Deep Asymmetric Multi-task Feature Learning. International Conference on Machine Learning. 2018.\\n[4]\\u00a0 Ruder, Sebastian. An overview of multi-task learning in deep neural networks. arXiv preprint arXiv:1706.05098 (2017).\\n[5] Yang Zhou, Rong Jin, and Steven Chu-Hong Hoi.\\u00a0 Exclusive lasso for multi-task feature selection. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, (AISTATS) 2010\"}",
"{\"title\": \"The problem setting is strange, and the assumptions used in the proposed algorithms are too strong\", \"review\": \"The term \\\"negative transfer\\\" is quite confusing, especially when it is used together with the term \\\"domain adaptation\\\". In domain adaptation, negative transfer means transferring knowledge from a source domain to a target domain in a brute-force manner may result in worse performance compared with that obtained by only using the target domain data.\\nIn this paper, the negative transfer problem is different from that in domain adaptation. The authors just tried to model the proposed negative transfer learning problem as a domain adaptation problem. However, the defined problem setting of negative transfer is quite strange, where for the target dataset, neither instances nor labels are available expect for the probability of P_T(Y_p, Y_a), and there is relationship between Y_p and Y_a, which is different from that of the source dataset. It is not convincing that why the proposed problem setting is important in practice.\", \"the_proposed_algorithm_is_designed_based_on_two_strong_assumptions\": \"1. D_T is drawn from a distribution that is nearest to that of D_S, and\\n2. P_T(Y) is given in advance.\\nRegarding the first assumption, it is not reasonable, and it is hard to be satisfied in practice. For the second assumption, it is also too strong to be satisfied in practice. Though the authors mentioned that when P_T(Y) is not given in advance, P_T(Y) can be further assumed to be of the uniform distribution or the classes are uncorrelated. However, these are just ad-hoc solutions. In practice, if P_T(Y) is unknown, and it is very different from the uniform distribution, or labels are highly correlated, the proposed algorithm may perform very poorly.\\n\\nRegarding the details of the algorithm, it just simply applies an existing model, DANN. In addition, the theoretical part is a well-known theorem.\", \"there_are_some_typos\": \"on Page 3, Figure 3(a) --> Figure 2(a); on Page 4, Figure 3(b) --> Figure 2(b).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting formulation; lack of mention and comparison to related work, terminology issue, and other flaws\", \"review\": \"Pros:\\n- Provides illustration and math formulation for the problem of generalization beyond the correlation of labels and correlated but irrelevant attributes. Forming the issue as a domain adaptation problem (or specifically, a special kind of probability shift) is a clever idea.\", \"cons\": \"- Lacks comparison to existing work. Making features invariant to attributes to improve generalization is not a new idea, cf. :\\n(1) Xie, Qizhe, et al. \\\"Controllable invariance through adversarial feature learning.\\\" Advances in Neural Information Processing Systems. 2017.\\n(2) If you consider the domain difference between various domains to be similar to attribute, then this is also related: Li, Haoliang, et al. \\\"Domain generalization with adversarial feature learning.\\\" Proc. IEEE Conf. Comput. Vis. Pattern Recognit.(CVPR). 2018.\\n(3) There are other works that, although do not aim at improving generalization, use very similar formulation to decouple attribute from features: e.g. (a) Lample, Guillaume, et al. \\\"Fader networks: Manipulating images by sliding attributes.\\\" Advances in Neural Information Processing Systems. 2017. (b) Mitigating Unwanted Biases with Adversarial Learning (which the authors cite, but do not offer any comparison or differentiation)\\nTo improve the paper, these related work should be discussed in related work section, and (if applicable) compared to the proposed method in the experiments, rather than a very brief mention of one of them in Section 3.3 and no comparison.\\n\\n- Use of the term \\\"negative transfer\\\" is problematic. This is a more important shortcoming, but people may disagree with me. As far as I know, this term is used to describe a *source task* being used to help a *different target task* but result in a negative gain in performance (Torrey, Lisa, and Jude Shavlik. \\\"Transfer learning.\\\"), which is inherently a multi-task learning setting. However, in this paper it refers to the effect of unrelated features being used in classifier, resulting in a worse generalization. The existence of this issue does not involve a second task at all. If this is not intended, please use another phrase. If the authors think that these are one and the same, I would strongly argue against this proposition.\\nAlso, there is no \\\"negative transfer technique\\\" as implied by page 2, end of the first paragraph.\\n\\n- Section 3.2 and 3.3's analysis is somewhat disjoint from the method. The analysis boils down to \\\"given a different correlation between primary and aux tasks, you can compute the distribution of inputs, which will be different from the source, so let's make the aux task unpredictable to get domain invariance.\\\" And the method goes on to remove auxiliary task information from the shared feature space. This is disjoint from either eq. (1) picking a target domain closest to source, and Theorem 1 the bound for domain adaptation. One way to improve the paper is to analyze how these analysis are affected by the adversarial training.\\n\\n- One of the selling points is that the method can adapt to trainable features in deep learning. However, in the experiment, fixed extracted features from pre-trained ResNet is used anyway. 
If so, a way to improve the paper is to compare to the traditional methods cited in page 2 paragraph 1, by applying them on fixed extracted ResNet features.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting application of adversarial learning to tackle negative transfer, but further analysis on source of performance improvement required\", \"review\": [\"The authors study the problem of negative transfer in representation learning, and propose to use the formulation proposed by Ganin & Lempitsky '15 for domain adaptation to reduce negative transfer. Instead of defining the domain classification as the adversarial task to learn a domain-independent representation, they collect a set of classification problems irrelevant to the main task as the adversarial tasks, and aim to learn a representation that focuses only on the primary task. There are very little changes compared to the proposal by Ganin & Lempitsky '15, but the application to solve the problem of negative transfer is interesting.\", \"My main concern on the whole argument of the paper is whether the benefits we see in the experiments come from the elimination of negative transfer, or just come from having more training labels from different tasks available. In the main formulation of the approach (equation 7), the authors try to learn a feature representation that works well for the primary task but works poorly for the auxiliary(irrelevant) tasks. If we switch the sign for lambda, then it becomes very similar to traditional multi-task learning. I wonder how the multi-task formulation would compare against the adversarial formulation proposed by the authors. There are reasons to suspect the multi-task formulation will also work better than the logistic regression baseline, since more labels from different tasks are available to learn a better joint representation. It is not clear whether the improvements come from modeling the auxiliary tasks using negative transfer (where the adversarial approach should beat the baseline and multi-task approach), or just come from having more information (where both the adversarial approach and the multi-task approach beat the baseline, but have similar performance).\", \"From a practical point of view, it is not easy to decide what prediction tasks are irrelevant. For example, in the birds dataset, I would expect the color and patterns in the body parts to have some correlations (primary_color, upperparts_color, underparts_color, wing_color, etc). In the case of occlusion of the relevant body parts, I could make a guess on the color based on the colors on other parts of the bird. In the ideal case for the current method, I would expect the adversarial approach proposed to learn a representation that mask out all the irrelevant parts of the animal or irrelevant contextual information. Apart from showing improved prediction performance, have the authors perform analysis on the image activation patterns similar to the motivation example in Figure 1 to see if the new approach actually focus on the relevant body parts of the animals?\", \"The definition of auxiliary tasks are described in the second last paragraph of 3.3, but it would be clearer if it is also mentioned how they are defined in the experiments section. 
I went through the whole experiments section having trouble interpreting the results because I could not find the definition of adversarial tasks.\", \"Overall I like this paper since it attempts to solve an interesting problem in computer vision, but I would like to see the above question on comparison with multi-task learning answered, or some image activation pattern analysis to provide a more solid argument that the improvements come from elimination of negative transfer.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
|
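Since much of the discussion in the record above turns on how the DANN gradient reversal of Ganin & Lempitsky '15 is repurposed for auxiliary attribute heads, a generic PyTorch sketch may help. It is not the authors' exact model: the 2048-dimensional input assumes ResNet101 pooled features and the 500-unit trainable layer follows the authors' own clarification, but the binary heads, the fixed lambda, and the omission of their adversarial task weighting scheme are simplifications.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; multiplies the gradient by -lambda
    on the backward pass (Ganin & Lempitsky, 2015)."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

class AdversarialMultiHead(nn.Module):
    """Shared encoder with a primary head and adversarial auxiliary heads.

    The reversed gradients from the auxiliary (correlated-but-irrelevant)
    heads push the encoder to discard their information while it still
    serves the primary attribute.
    """
    def __init__(self, feat_dim=2048, hidden=500, n_aux=10, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU())
        self.primary = nn.Linear(hidden, 2)
        self.aux = nn.ModuleList([nn.Linear(hidden, 2) for _ in range(n_aux)])

    def forward(self, x):
        h = self.encoder(x)
        y_primary = self.primary(h)
        y_aux = [head(grad_reverse(h, self.lam)) for head in self.aux]
        return y_primary, y_aux
```

Training would minimise the primary cross-entropy plus the auxiliary cross-entropies; because of the reversal, the encoder effectively ascends the auxiliary losses, which is also why R1's question is sharp: flipping the sign of lambda turns this exact architecture into conventional multi-task learning.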
BJxRVnC5Fm | Mean Replacement Pruning | [
"Utku Evci",
"Nicolas Le Roux",
"Pablo Castro",
"Leon Bottou"
] | Pruning units in a deep network can help speed up inference and training as well as reduce the size of the model. We show that bias propagation is a pruning technique which consistently outperforms the common approach of merely removing units, regardless of the architecture and the dataset. We also show how a simple adaptation to an existing scoring function allows us to select the best units to prune. Finally, we show that the units selected by the best performing scoring functions are somewhat consistent over the course of training, implying the dead parts of the network appear during the stages of training. | [
"pruning",
"saliency",
"neural networks",
"optimization",
"redundancy",
"model compression"
] | https://openreview.net/pdf?id=BJxRVnC5Fm | https://openreview.net/forum?id=BJxRVnC5Fm | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1l0Y7BklN",
"HklPw2PaJE",
"HyeTYxm3yN",
"rJxMPRvD14",
"BJxNc_wv14",
"H1eEkaWDJE",
"Hkggp3CrkE",
"SkxaUZ0rA7",
"SygQNZASAQ",
"S1xub-0H0m",
"SJxKlwK6hQ",
"rylsv4pjhX",
"r1xlzq9inm"
],
"note_type": [
"meta_review",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544668038143,
1544547422547,
1544462469429,
1544154713562,
1544153227916,
1544129755587,
1544051895518,
1543000404761,
1543000362986,
1543000320208,
1541408496575,
1541293155499,
1541282312375
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1501/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1501/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1501/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1501/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1501/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1501/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1501/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1501/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1501/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1501/Authors"
],
[
"ICLR.cc/2019/Conference/Paper1501/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1501/AnonReviewer3"
],
[
"ICLR.cc/2019/Conference/Paper1501/AnonReviewer2"
]
],
"structured_content_str": [
"{\"metareview\": \"This paper proposes an approach to pruning units in a deep neural network while training is in progress. The idea is to (1) use a specific \\\"scoring function\\\" (the absolute-valued Taylor expansion of the loss) to identify the best units to prune, (2) computing the mean activations of the units to be pruned on a small sample of training data, (3) adding the mean activations multiplied by the outgoing weights into the biases of the next layer's units, and (4) removing the pruned units from the network. Extensive experiments show that this approach to pruning does less immediate damage than the more common zero-replacement approach, that this advantage remains (but is much smaller) after fine-tuning, and that the importance of units tends not to change much during training. The reviewers liked the quality of the writing and the extensive experimentation, but even after discussion and revision had concerns about the limited novelty of the approach, the fact that the proposed approach is incompatible with batch normalization (which severely limits the range of architectures to which the method may be applied), and were concerned that the proposed method has limited impact after fine-tuning.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Good writing and experiments, but limited novelty and applicability\"}",
"{\"title\": \"Fine-tuning\", \"comment\": \"Thank you for your comment, although we believe one should not underestimate the impact that reducing the number of required fine-tuning steps might have.\\n\\nIndeed, in the case of incremental pruning, fine-tuning steps can take a significant portion of the total training time and reducing them is desirable.\\n\\nAnother result worth emphasizing once more is the fact that, based on the metric chosen, pruning methods can be found to be nearly equivalent. In a field where there are many new methods proposed, we believe it is important to show the sensitivity of the result to the particular metric of choice, which emphasizes the importance of defining ahead of time what the goal of pruning is.\"}",
"{\"title\": \"Follow-up\", \"comment\": \"Thanks for the additional response. Overall, I'm somewhat conflicted on this paper. The revisions have made the paper stronger. I generally like the thorough experiments, and, were the idea to be entirely novel, the scope of the analysis and experiments would be reasonably compelling.\\n\\nBut (1) the basic idea is already known in a couple of publications. And regardless of novelty, (2) the significance is limited: (a) the method seems applicable only to networks without batch normalization, and (b) the initial advantage of pruning with this method is matched by instead using some number of fine-tuning steps (though number of additional steps isn't estimated in the paper). As such, I'm not sure who really needs to know about this method or what follow-up work it could inspire. Unfortunately, while the paper may be a no-brainer to accept at a workshop, but I don't think it meets the bar for an ICLR conference publication.\"}",
"{\"title\": \"General framework\", \"comment\": \"Thank you for your comment. We actually considered casting all existing methods as special cases of a general framework but we felt the added layer of abstraction might be confusing to those already familiar with existing methods.\\n\\nHowever, should the paper be accepted and the consensus among reviewers be that this would improve clarity, it would be straightforward for us to include such a framework in the final version.\\n\\nOnce again, thank you for taking the time to reply to our updates.\"}",
"{\"title\": \"Thanks for the response!\", \"comment\": \"I appreciate the authors' detailed responses and the modifications in the new draft. In particular, table 1 in the new draft makes the legends in the plots much clearer. However, I still have two concerns:\\n\\n1. regarding the \\\"winning ticket\\\" hypothesis, do you want to emphasize that the pruning can be done after a short pre-training of the large network, followed by re-training from the same initialization of the pruned network, which would result in both faster training and smaller network size (and faster inference)? \\n\\n2. Another thing is about the novelty and significance of the work, and its relationship to other methods. I would suggest the authors to add an algorithm frame describing the prototype of a general pruning approach in neural networks, with consistent mathematical notations about scoring functions, approximated penalty, and the mathematical formulation of mean replacing/back-propagation, and how they interact and combine with the other components. The examples of different scoring functions and approximated penalties can then be mathematically listed as special cases of the abstractions in the algorithm frame. For now, although the draft has been largely improved to show difference and connections between different methods, for a non-expert about neural network pruning like me, it is still very unclear how different the methodologies are and how the performance metrics are exactly (mathematically) defined. This also prevents me from accurately judging the difference and connection between the current work and the existing literature, as well as understanding the performance of different approaches given the diversity of experimental settings presented in the draft. This is especially the case given some remaining consistency in the notations, for example, what is the loss degradation \\\\Delta L? Is it just the pruning penalty? Similarly, what do \\\\nabla_a L(a) and \\\\Delta a stand for? And how does the i sampled from D_s come into play with them (as i does not even show up)? Also, how is the approximated penalty involved in the entire pruning approach? And I believe that all these can be largely solved with a general algorithm frame added.\\n\\nBut anyway, thanks for the responses and updates on the draft!\"}",
"{\"title\": \"Further Clarifications\", \"comment\": \"(1) Yes, it can be fair to say so. Propagating constant values to the next layer (Ye, 2018) and ablating units with their mean values (Morcos, 2018) were used/mentioned in recent work in different contexts. In addition to running a wide set of experiments, our work shows that replacing units with the mean activation value is the optimal bias update that minimizes the reconstruction loss in the next layer. We also introduce a new scoring function (i.e the first order approximation of the absolute change in the loss when mean replacement pruning is used).\\n\\n(4) In Figure 10 (networks trained on Cifar-10), we report measurements taken at the end of training. In other words, there are almost 30k fine-tuning steps after pruning. In Figs. 4 and 11 the pruning penalty is calculated after a small number of fine tuning steps (10-500). These initial results (Fig. 10) suggest that mean replacement reduces the number of fine-tuning steps needed, however the network converges to the same energy level if trained long enough. \\n\\nWhen we generate Fig. 4 with test loss, we see a similar picture (almost all points are under diagonal).\"}",
"{\"title\": \"Thanks and a couple more questions\", \"comment\": \"Thanks for your detailed response. Regarding (1), it seems that a reasonable summary would be that the mean replacement idea is described in both (Ye,2018) and (Morcos,2018), albeit in conjunction with other methods in both cases. So, the contribution of this paper is thoroughly experimenting on mean replacement in isolation. Is that fair?\\n\\nRegarding (3), thanks for providing some results with test accuracy. Fig. 10 shows on VGG-11 (with which dataset?) that mean replacement makes little/no difference with 100 fine tuning steps between pruning iterations. But presumably, based on the other plots like Figs. 4 and 11, test error is worse without mean replacement at lower levels of fine-tuning. You show this for training error, but do you have any results in the paper indicating this effect for test error?\"}",
"{\"title\": \"Thanks for your extensive review and comments.\", \"comment\": \"Thank you very much for your thorough comments and valuable suggestions. We hope the responses/clarifications below would help.\\n\\n(1) Ye et al. (2018), proposes a pruning method that penalizes the variance of the output distribution of a unit. When units have low variance (generating almost constant values), they remove the unit by propagating the mean values to the next layer. In our method we show that such a trick can be utilized in other pruning scenarios that don\\u2019t involve batch normalization or variance regularization. In particular, we show that mean replacement is the optimal update minimizing the reconstruction error on the next layer and empirically show that it does indeed reduce the pruning penalty in a variety of settings. Morcos et al. (2018) briefly mentions the possibility of using mean replacement in the ablation setting. In the single result shared in Section A1, they prune/ablate the last layer of their convolutional network, since that would be the only layer without batch normalization. \\n\\n(2) We agree that there is a strong trend with using batch normalization in neural network training. However there is no guarantee that this trend will last or/and become a standard. There are also cases where batch normalization is not practical/preferred, like small batch training (due to memory limitations of big networks) and RNN training. We would also like to point out that our method is applicable in networks with instance or layer normalization.\\n\\n(3) We updated the table of scoring functions in Section 3.4 with related references. We hope that it will guide the reader to understand differences between our work and previous work. Following the common feedback we got from the reviewers, we performed some additional experiments. In our experiments on Cifar-10, we didn't find any significant difference among different pruning methods despite the reported differences in others works. We share this surprising initial results in Section 6.6 along with a discussion about possible explanations. \\n\\n(4) We use the same notation as Morcos et al. 2018 in our work. Given a convolutional kernel W with dimensions (input channels, H, W, output channels), a unit i corresponds to the parameters W[:,:,:,i]. In the case of a dense layer with parameters W of size (input channels, output channels) it corresponds to W[:,i].\\n(5) Updated. \\n(6) We used the average change in the loss to evaluate the performance. We expect the two measures(absolute and average change in the loss) to be very similar, especially when the pruning fraction is high.\\n(7) We forgot to share these numbers, thanks for the note. It is 1000 and 10000 for cifar-10 and imagenet. We updated Section 3.4 to include these numbers.\\n(8) Thanks for the feedback, we updated the part and had a second pass on the whole paper.\\n(9) In Figures 2-6 we report pruning penalties (cross entropy loss) using a set of 1000 (Cifar-10) and 10000 (Imagenet2012) samples from the training set. Our motivation is that reducing the pruning penalty (training loss) would help the optimization by both/either reducing number of fine tuning steps needed and/or improving final performance. Our results support the first and undermine the second. \\n(10) In practice we would suggest using a running average of the mean outputs, which would require constant number of FLOPS per sample (linear in number of units in the network). 
Since our initial set of experiments don't have end-2-end pruning experiments we haven't implemented such a feature and measure its effectiveness.\"}",
"{\"title\": \"Updates and Clarifications\", \"comment\": \"Thank you for your review.\\n\\n(1) We updated the table of scoring functions in Section 3.4 with related references. We hope that it will guide the reader to understand differences between our work and previous work. We also updated the last paragraph of Section 2 to highlight the difference between our work and Ye et al, (2018), Morcos et al., (2018). \\n\\n(2) Even though pruning is widely used for model compression, recent work highlights a promising direction where pruning is used during training to reduce training time and improve final performance (Han et al. (2016), Frankle & Carbin (2018)). Therefore we focused on pruning experiments sampled from the entire length of a training job. Our motivation is that reducing the pruning penalty helps optimization by reducing the number of fine tuning steps needed for reaching the same level. However, following the common feedback we got from the reviewers, we performed some additional experiments. In our experimental setting, we didn't find any significant difference among different pruning methods. We share this surprising results in Section 6.6 along with a discussion about possible explanations. \\n\\n(3) Replacing a unit with zeros indeed corresponds to removing it along with all the outgoing weights (it represents the same function). In our experiments we use masking to emulate this behaviour, however as you suggest, one can establish a smaller network in practice immediately if needed. \\n\\n(4) Our method replaces units with their mean output values. In practice, constant values are propagated to the next layer and units are removed as normal (see Figure 1). Mean output values can be aggregated during the training in an online fashion and bias propagation is a very cheap operation (single matrix multiplication). Therefore, it is a practical method. \\n\\n(5) We are sorry about the mistakes slipped and we did further proofread the paper.\\n\\nHan, S., Pool, J., Narang, S., Mao, H., Gong, E., Tang, S., \\u2026 Dally, W. J. (2016). DSD: Dense-Sparse-Dense Training for Deep Neural Networks. Retrieved from http://arxiv.org/abs/1607.04381\"}",
"{\"title\": \"Framing our work and contributions better\", \"comment\": \"Thank you for your review and valuable comments. We would like to provide clarifications/updates about the questions raised.\\n\\n(1) Our work extends the setting in Ye et al, (2018) (networks with batch normalization and units with very small variance) to networks without batch-normalization, un-regularized training and shows that replacing a unit with its mean value is a better at reducing the immediate damage compared to mere removal. Morcos et al., (2018) compare mere removal with mean replacement in the context of ablation studies using a network with batch normalization. Since they use layers with batch normalization they are able to mean replace the final layer of the network only (Batch normalization should remove any constant signal coming from the previous layer and therefore we shouldn't see any difference between the two methods in their setting, except the output layer of the network.). Our experiments involve a much greater variety of settings and show that Mean Replacement indeed works better in general. \\n\\n(2) Our scoring function and Molchanov et al. (2016) are first order approximations of two different values. We updated the table of scoring functions in Section 3.4 with related references. We hope that it will guide the reader to understand differences between our work and previous work.\\n\\n(3) We intentionally avoided running end to end pruning experiments in our work. Our assumption was that reducing the change in the training loss should help any pruning strategy that employs any of the saliency scores used. However, we agree that this connection is not clear and needs further investigation. Therefore we ran additional experiments where we perform iterative pruning with fix training budget. In our experiments, surprisingly, we didn't find any significant difference among `non-random` methods despite the reported differences in others works. Even though the results don't support our case, we like to share these initial results in the appendix (Section 6.6) with a discussion on possible reasons.\\n\\n(3) And finally regarding minor suggestions, (1) we updated with 'Pruning Penalty` following your suggestion (2) Detecting lottery ticket early in the training is important, since one could reduce the size of the network and the training time. This result, if general enough, promises a whole new direction for pruning research. We updated our introduction indicating this point. (3) spell-checked.\"}",
"{\"title\": \"Some clarity issues\", \"review\": \"This paper presents a mean-replacement pruning strategy and utilizes the absolute-valued Taylor expansion as the scoring function for the pruning. Some computer vision problems are used as test beds to empirically show the effect of the employment of bias-propagation and different scoring functions. The empirical results validates the effectiveness of bias-propagation and absolute-valued Taylor expansion scoring functions.\\n\\nThe work is generally well-written and the results are promising, and the theoretical explanation in 3.3 is intriguing. However, I think the following issues need some further clarifications:\\n1. What's the exact difference and connection between the mean-replacement pruning technique, and the bias-propagation technique in Ye et al., (2018) and the mean activation technique in Morcos et al. (2018)? The authors only mention that mean replacement pruning extends the idea in Ye et al. (2018) to the non-constrained training setting, but it is very unclear what \\\"constraints\\\" are talked about. Some more detailed and formal comparisons should be added, together with potential empirical comparisons.\\n2. In the abstract, the authors claim that they \\\"adapt an existing score function ...\\\", but from the main text it seems that absolute-valued Taylor expansion score is exactly the same one in Molchanov et al. (2016). Is this a typo (or misleading claim) in the abstract?\\n3. There are no comparisons of the approach proposed in this paper with some existing state-of-the-art, apart from some simple comparisons between whether bias-propagation is adopted and some inner comparisons among different scoring functions.\\n\\nIt would also be much better if some charts/tables with certain metrics for improvement apart from pruning penalties (e.g., compression rates, or inference speed, etc.) instead of simply illustrative figures are shown. \\n\\n### some smaller suggestions/typos ###\\n1. The plot legends/labels are kind of inconsistent with the description before the figures. For example, in the main text the authors mainly use \\\"pruning penalty\\\", while in the figures the y-axes are typically labelled as \\\"\\\\Delta-loss after pruning\\\", and the plot tag at page 5 bottom is different from those used in the plots, which introduces some unnecessary confusion.\\n2. It is very unclear how the authors arrive at the conclusion \\\"This results suggests ... the training process\\\" from the \\\"winning ticket\\\" hypothesis.\\n3. Several typos that can be easily spell-checked (e.g., \\\"the effect or pruning\\\" -> \\\"the effect of pruning\\\", etc.).\\n\\nI hope the authors can address these issues. Thanks!\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Overall score: 4\", \"review\": \"1. Pruning neurons in pre-trained CNNs is a very important issue in deep learning, and there are a number of related works have been investigated in Section 2. However, it is very strange that, I did not see any comparison experiments to these related works in this paper.\\n\\n2. The presentation of the experiment part is also wired, to report compression rates, speed-up rates, and accuracy might have a more explicit demonstration.\\n\\n3. ''This is often done by replacing the these units with zeroes\\\". However, in previous works, we can directly establish a compact network with fewer neurons after pruning some unimportant neurons. Thus, some considerations and motivations in Section 3.2 seem wrong. \\n\\n4. It seems that the neural network after using the proposed method has the same architecture as that of the original network, but some of it neurons are represented as mean replacement. Therefore, the compression and speed-up rates of the proposed method would be hard to implement in practice.\\n\\n5. The paper should be further proofread. There are considerable grammar mistakes and unclear sentences.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting and simple method, but needs clarification w.r.t. related methods and results\", \"review\": \"This paper proposes a simple improvement to methods for unit pruning. After identifying a unit to remove (selected by the experimenter\\u2019s pruning heuristic of choice), the activation of that unit is approximately incorporated into the subsequent unit by \\u201cmean replacement\\u201d. The mean unit activation (computed on a small subset of the training set) is multiplied by each outgoing weight (or convolutional filter) and added to each corresponding bias instead. Experiments show this method is generally better than the typical method of zero-replacement before fine-tuning, though the advantage is smaller after several epochs of fine-tuning.\\n\\nWhile I find this paper intriguing and applaud the extensive experimentation and documentation, I have some concerns as well:\\n\\t1. There are unanswered questions about how this method relates to existing work. It is not clear from the paper how the \\u201cmean replacement\\u201d method differs from the two most related works (Ye, 2018) and (Morcos, 2018), which propose variations on replacing units with constant values or mean activations, respectively. Also, why does the method in this paper seem to yield good results, while the related method (Morcos, 2018) yields \\u201cinferior performance\\u201d?\\n\\t2. The results are stated to only apply to networks \\u201cwithout batch normalization\\u201d. The reason seems intuitive: any change that can be merely rolled into the bias will be lost after normalization (depending perhaps on the ordering of normalization and the non-linearities). This leaves an annually decreasing fraction of networks to which this method is applicable, given the widespread use of batch norm.\\n\\t3. Critically, it\\u2019s difficult to compare this work against other pruning works given the lack of results reported in terms of final test error and the lack of the ubiquitous \\u201cerror vs. %-pruned\\u201d plot.\\n\\t\\nOverall, this paper is lacking some clarity, may be limited in originality, may be helpful for some common networks and composable with other pruning methods (significance), but has a good quality evaluation (subject to the clarity issues). I\\u2019m rating this paper below the threshold given the limitations, but I\\u2019m willing to consider an upgrade to the score if these questions are addressed.\", \"other_notes\": \"4. What is your definition of a convolutional \\u201cpruning unit\\u201d? (From context, I\\u2019d presume it corresponds to an output activation map.)\\n\\t5. In Section 3.1: replace \\u201cin practice, people \\u2026\\u201d with something like \\u201cin practice, it is common to\\u201d.\\n\\t6. In Equation 3, is the absolute value of the pruning penalty used in the evaluation?\\n\\t7. In the footnote in Section 3.2, how many training samples are needed for a good approximation? How many are used in the experiments?\\n\\t8. There are a couple typos in Section 3.2: \\u201creplacing -the- these units with zeroes\\u201d and \\u201ceach of these output*s*\\u201d.\\n\\t9. Presumably the \\u201c\\\\Delta Loss after pruning\\u201d in Figures 2-6 is validation or test loss, not training loss? Is this the cross-entropy loss? Also, it would be much easier to compare to other papers if test accuracy were reported instead or in addition.\\n\\t10. In Figure 4, the cost to recover using fine-tuning seems to be only roughly 2% of the original training time. 
How much time is lost to the process of computing the average unit activation?\", \"update\": \"I've raised the score slightly to 5 after the rebuttals and revisions.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
|
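The review above describes the paper's "mean replacement" step concretely: the pruned unit's mean activation, estimated on a small subset of the training data, is multiplied by each outgoing weight and added to the corresponding bias. A minimal NumPy sketch of that mechanism for a fully connected layer follows; the function name, shapes, and the NumPy setting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mean_replace_unit(W_next, b_next, acts, unit):
    """Fold a pruned unit's mean activation into the next layer's bias.

    W_next : (n_out, n_in) weights of the layer consuming the pruned unit.
    b_next : (n_out,) biases of that layer.
    acts   : (n_samples, n_in) activations of the pruned layer, collected
             on a small subset of the training set.
    unit   : column index of the unit being pruned.
    """
    mu = acts[:, unit].mean()                 # mean activation of the unit
    b_next = b_next + W_next[:, unit] * mu    # absorb its average contribution
    W_next = np.delete(W_next, unit, axis=1)  # then drop the incoming column
    return W_next, b_next
```

Zero replacement would simply delete the column; folding the mean into the bias instead preserves each downstream unit's expected pre-activation, which matches the review's observation that the advantage over zero replacement is largest before fine-tuning. It also makes the reviewer's second concern concrete: anything absorbed into a bias is normalized away by batch norm. For a convolutional filter, the analogous update (an assumption here, not spelled out in the review) adds the mean activation times the spatial sum of the outgoing kernel slice to the output-channel bias.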
BygRNn0qYX | P^2IR: Universal Deep Node Representation via Partial Permutation Invariant Set Functions | [
"Shupeng Gui",
"Xiangliang Zhang",
"Shuang Qiu",
"Mingrui Wu",
"Jieping Ye",
"Ji Liu"
] | Graph node representation learning is a central problem in social network analysis, aiming to learn the vector representation for each node in a graph. The key problem is how to model the dependence of each node on its neighbor nodes, since the neighborhood can uniquely characterize a graph. Most existing approaches rely on defining a specific neighborhood dependence as the computation mechanism of representations, which may exclude important subtle structures within the graph and dependence among neighbors. Instead, we propose a novel graph node embedding method (namely P^2IR) by developing a novel notion, the partial permutation invariant set function, to learn those subtle structures. Our method can 1) learn an arbitrary form of the representation function from the neighborhood, without losing any potential dependence structures, 2) automatically decide the significance of neighbors at different distances, and 3) be applicable to both homogeneous and heterogeneous graph embedding, which may contain multiple types of nodes. Theoretical guarantees for the representation capability of our method have been proved for general homogeneous and heterogeneous graphs. Evaluation results on benchmark data sets show that the proposed P^2IR outperforms the state-of-the-art approaches in producing node vectors for classification tasks. | [
"graph embedding",
"set function",
"representation learning"
] | https://openreview.net/pdf?id=BygRNn0qYX | https://openreview.net/forum?id=BygRNn0qYX | ICLR.cc/2019/Conference | 2019 | {
"note_id": [
"B1xE5TLZlN",
"BJxVho0Jam",
"B1eIs99237",
"HkxD5X1s3m",
"H1lkbOAUjm"
],
"note_type": [
"meta_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1544805771703,
1541561259755,
1541347997695,
1541235599343,
1539921910809
],
"note_signatures": [
[
"ICLR.cc/2019/Conference/Paper1500/Area_Chair1"
],
[
"ICLR.cc/2019/Conference/Paper1500/AnonReviewer4"
],
[
"ICLR.cc/2019/Conference/Paper1500/AnonReviewer2"
],
[
"ICLR.cc/2019/Conference/Paper1500/AnonReviewer1"
],
[
"ICLR.cc/2019/Conference/Paper1500/AnonReviewer3"
]
],
"structured_content_str": [
"{\"metareview\": \"AR1 is concerned with the presentation of the paper and the complexity as well as missing discussion on recent embedding methods. AR2 is concerned about comparison to recent methods and the small size of datasets. AR3 is also concerned about limited comparisons and evaluations. Lastly, AR4 again points out the poor complexity due to the spectral decomposition. While authors argue that the sparsity can be exploited to speed up computations, AR4 still asks for results of the exact model with/without any approximation, effect of clipping spectrum, time complexity versus GCN, and more empirical results covering all these aspects. On balance, all reviewers seem to voice similar concerns which need to be resolved. However, this requires more than just a minor revision of the manuscript. Thus, at this time, the proposed paper cannot be accepted.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Evaluations, complexity, comparisons to the most recent methods.\"}",
"{\"title\": \"Computational costly heterogeneous graph embedding\", \"review\": \"This paper proposed a heterogeneous graph embedding method P^2IR. The author(s) first\\nargued that such an embedding should be invariant to partial permutations of nodes.\\nThen the authors gave a general formulation of such an embedding in theorem 3.1.\\nThen the authors instantiated this general formulation by a neural network\\nparametrization, which can be optimized based on the L^2 loss and a supervised\\nregularizer. The method is tested against graph embedding methods that do not\\nneed node attributes (in GCN the authors \\\"eliminated\\\" the node attributes) on\\nsemi-supervised node classification tasks, showing a significant improvement.\\n\\nMy main criticism is that the authors did not clarify or put any efforts on\\nsolve the high computational complexity.\\nThe proposed method needs to perform a spectral decomposition of\\nthe adjacency matrix, which has cubic complexity. This is unacceptable,\\nmaking the method less useful for real networks.\\nFurthermore, to optimize the embedding using SGD requires graph\\nFourier transformations that have quadratic complexity.\\nIn section 3.3, the exact complexity should be given, without which\\nthe technique is incomplete.\\n\\nAn important reference \\\"Graph Attention Networks. P. Veli\\u010dkovi\\u0107 et al. 2018.\\\"\\nis missing, which has the similar idea to automatically learn the neighborhood\\nproximities. It should be cited as this is a key idea to motivate the paper.\\n\\nThe presentation quality is not satisfactory. For example, in page 3, f() has K\\nmatrix arguments, then in page 4 theorem 3.1, f() takes KN arguments.\\nPlease make it consistent.\\n\\nIn page 5, the formulations from eq.(3) to eq.(5) can be further unified\\nand simplified. From eq.(3) to eq.(4) is not straightforward and need more\\nexplanations. If you use \\\\mathcal{R} as the embedding, it should appear in\\neq.(4) to be consistent.\\n\\nTable 1 has no contents.\\n\\nIn the experimental results, the performance of GCN with node attributes\\nshould be given for completeness (although the comparison is less fair).\\nA related question is how to incorporate node attributes in your framework?\\n\\nIn the heterogeneous experiments, for completeness, the authors are suggested\\nto compare against a heterogeneous version of GCN (again, with and without node\\nattributes) such as \\\"Modeling Relational Data with Graph Convolutional Networks.\\nSchlichtkrull et al. 2017.\\\"\\n\\nThe paper is longer than the recommended length.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A unified way to incorporate high-order proximity information for graph embedding\", \"review\": \"The paper proposes a formulation for taking care of neighborhood of different distances for graph embedding. It makes use of a notion called permutation invariant function which defined as a function where if we swap any features in the inputs, the function value remains the same. Given this, they make two contributions to make the consideration of neighborhood of different distances for graph embedding possible. First, they make the assumption that the contribution of neighbours of same steps should be the same and thus permutable in defining how the embedding function of a node is depending on this neighbours. Another one is the use of 1-d NN for estimating the contribution from 1-step, 2-step and up to infinite-step. Then, the overall problem formulation is defined and can be learned using SDG.\\n\\n+ve:\\n1. The paper is well organized and clearly presented.\\n2. The technique proposed can handle neighborhood of different distances while the existing methods make explicit or implicit assumptions (and thus restrictions) about the neighborhood to be considered.\\n3. The proposed method performs consistently better than a number of representative deep graph models based on a number of benchmark datasets.\\n4. The method is applicable to both homogeneous and heterogeneous graphs.\\n\\n-ve:\\n1. The part after Eq.(4) and before Section 3.3 is important but a bit hard to read as compared to the other parts of the paper.\\n2. The graphs tested are not particular large. Larger ones should be tested.\\n3. The methods being compared are not the most recent ones (all published in 2016 or before).\\n4. Something wrong with Table 1?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, missing related work, missing results discussion and overall poor presentation\", \"review\": \"The paper explores the very interesting and relevant problem of universal node representation. It points out that although powerful models for representation learning on graphs exists, most existing works require to pre-define a pairwise node similarity or to specify model parameters. Hence, the authors propose a novel model that doesn\\u2019t require to pre-define neighbors nor to specify the dependence form between each node and its neighbors.\", \"pros\": [\"This work studies the important question of universal node embedding model that require minimal user-defined specifications.\", \"It proposes an original and novel solution to achieve universal node embedding based on partially permutation invariant function.\", \"Provides theoretical guarantee.\"], \"cons\": [\"Some recent works on structural node embedding are directly related to this work but missing in the related work section: struc2vec [1] and GraphWave [2].\", \"In the experiment section, it would be necessary to provide the values of the tuned hyper-parameters for each model for reproducibility.\", \"The results are not really analysed nor discussed beyond noticing that P^2IR performs better than other models in most cases. For instance, the authors don't discuss the complexity of the different models, or don\\u2019t give intuition as to whether the improvements are significant.\", \"It would be relevant to include some node embeddings models (such as [1,2]) in the baseline methods as they have been shown to outperform node2vec/deepwalk in some classification tasks.\"], \"minor_comments_on_the_text\": [\"on page 2, WFS instead of BFS\", \"on page 5, please spell out 'NN function'\", \"on page 6, in the last equation characterizing the mapping of node v, it is not clear why the subscript k in phi_k is there. (similarly for eq. (3) and (4) and subsequent mention of phi).\", \"on page 7, Table 1 is useless.\", \"1. Ribeiro, L. F., Saverese, P. H., and Figueiredo, D. R. (2017). Struc2vec: Learning node representations from structural identity\", \"2. Donnat, C., Zitnik, M., Hallac, D., and Leskovec, J. (2018). Learning structural node embeddings via diffusion wavelets.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper is good but quality of experiments can be further improved.\", \"review\": \"The authors introduce the idea of a partial permutation invariant set function and use this to learn node embeddings. The paper is well-written and the discussion is quite easy to follow along. The paper also introduces some interesting concepts like a partial permutation invariant set function. However, I find that some of the paper's main claims can be further backed up by more experiments.\\n\\nThe paper is well-written. The authors evaluate their approach on a large number of real-world homogeneous and heterogeneous graphs. Furthermore, they show the stability of their method by showing consistently good results even when training set size is varied. Defined the notion of a partial permutation invariant set function and provided theoretical guarantees pertaining to this.\\n\\nOne of the strengths of the proposed method is its ability to \\\"automatically decide the significance of nodes at different distances.\\\" The authors devote a good portion of their paper to talk about this. However, a recent paper published in NIPS '18 [1] with a pre-print available much earlier solves this problem by applying attention over powers of a transition matrix. The authors should talk about [1] and ideally compare against them.\\n\\nI feel that a lot of the authors main claims can be strengthened further if more experiments were shown to back these up. What I mean to say is, it would be nice to see some other experiments apart from just classification performance. To compare with [1], for instance, they show link prediction/classification results but on top of these they also show that their method may choose very different \\\"neighborhoods\\\" to attend to.\\n\\nThe authors gave a fairly comprehensive review of related literature (which was good!) and they mentioned that \\\"most existing methods either explicitly or implicitly restrict the dependence form of each node to its neighbors and also the depth of neighbors.\\\" I feel another approach they should compare against which does not seem to have this problem is [2] since the method learns \\\"role-based\\\" embeddings which are more dependent on structure rather than proximity.\\n\\nThe paper in its current form is fairly good. If comparison can be made against [1] & [2] and some additional experiments can be added, the quality of the paper can be improved further.\\n[1] Watch Your Step: Learning Node Embeddings via Graph Attention. Abu-El-Haija et al. In Proc. of NIPS 2018.\\n[2] Higher-Order Network Representation Learning. Rossi et al. In Proc. of WWW 2018.\", \"there_are_some_minor_errors_in_the_paper\": \"Table 1 in page 7 seems to be an error. It's an empty table and it is not referred to anywhere in the paper.\\n\\nThe format of some references needs double-checking. For example,\\n\\n\\\"Jian Tang, Meng Qu, and Qiaozhu Mei. Pte: Predictive text embedding through large-scale heterogeneous text networks. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1165\\u20131174. ACM, 2015a.\\\"\\n\\n(1) \\\"21th\\\" should be \\\"21st\\\".\\n\\n\\\"Shiyu Chang, Wei Han, Jiliang Tang, Guo-Jun Qi, Charu C Aggarwal, and Thomas S Huang. Heterogeneous network embedding via deep architectures. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 119\\u2013128. 
ACM, 2015.\\\"\\n\\n(2) \\\"21th\\\" should be \\\"21st\\\".\\n\\n\\\"Mingdong Ou, Peng Cui, Jian Pei, Ziwei Zhang, and Wenwu Zhu. Asymmetric transitivity preserving graph embedding. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD \\u201916, pp. 1105\\u20131114, 2016.\\\"\\n\\n(3) \\\"22Nd\\\" should be \\\"22nd\\\"\\n\\n\\\"Daixin Wang, Peng Cui, and Wenwu Zhu. Structural deep network embedding. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD\\u201916, pp. 1225\\u20131234, 2016.\\\"\\n\\n(4) \\\"22Nd\\\" should be \\\"22nd\\\"\\n\\n\\\"Bryan Perozzi, Rami Al-Rfou, and Steven Skiena. Deepwalk: Online learning of social representations. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pp. 701\\u2013710. ACM, 2014\\\"\\n\\n(5) \\\"international conference on Knowledge discovery and data mining\\\" should be \\\"International Conference on Knowledge Discovery and Data Mining\\\"\\n\\n\\\"Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pp. 3111\\u20133119, 2013.\\\"\\n\\n(6) \\\"Advances in neural information processing systems\\\" should be \\\"Advances in Neural Information Processing Systems\\\"\\n\\n\\\"Jian Tang, Meng Qu, Mingzhe Wang, Ming Zhang, Jun Yan, and Qiaozhu Mei. Line: Large-scale information network embedding. In Proceedings of the 24th International Conference on World Wide Web, pp. 1067\\u20131077. International World Wide Web Conferences Steering Committee, 2015b.\\\"\\n\\n(7) \\\"International World Wide Web Conferences Steering Committee\\\" should be removed.\\n\\nThe 3rd sentence in the abstract is a little bit too long. It would be better if the authors could break the sentence into shorter ones. Here is the sentence: \\\"While most existing approaches rely on defining the specific neighborhood dependence as the computation mechanism of representations, which may exclude important subtle structures within the graph and dependence among neighbors, we propose a novel graph node embedding method (namely P2IR) via developing a novel notion, namely partial permutation invariant set function.\\\"\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
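The abstract and reviews above all hinge on one notion: a set function over a node's neighborhood that is invariant to reordering neighbors within the same hop distance but not across distances, so the model can weight 1-hop and 2-hop neighbors differently. The sketch below illustrates that "partial" invariance with simple sum pooling per hop group; every name, shape, and the scalar per-distance weights are assumptions made for illustration, not the paper's actual P^2IR parametrization.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, W):
    """Per-neighbor feature map (one linear layer + ReLU); shapes hypothetical."""
    return np.maximum(W @ x, 0.0)

def partial_perm_invariant_embed(neighbor_groups, Ws, alphas, rho):
    """Sketch of a partially permutation invariant node embedding.

    neighbor_groups[k] holds the feature vectors of (k+1)-hop neighbors.
    Summing within a group makes the output invariant to permutations
    inside each hop distance, while the scalars alphas[k] weight the
    distances differently, so the groups themselves are NOT interchangeable.
    """
    pooled = [a * sum(phi(x, W) for x in grp)
              for grp, W, a in zip(neighbor_groups, Ws, alphas)]
    return rho(np.concatenate(pooled))

# Toy usage: two hop groups of 3-dim features, 4-dim maps, identity readout.
groups = [[rng.normal(size=3) for _ in range(2)],   # 1-hop neighbors
          [rng.normal(size=3) for _ in range(5)]]   # 2-hop neighbors
Ws = [rng.normal(size=(4, 3)) for _ in range(2)]
emb = partial_perm_invariant_embed(groups, Ws, [1.0, 0.5], rho=lambda z: z)
```

Swapping any two vectors inside groups[1] leaves emb unchanged, while moving a vector from groups[0] to groups[1] generally changes it, which is precisely the partial invariance that distinguishes this construction from fully permutation invariant set functions such as DeepSets, and the hook for the reviewers' pointers to learned per-distance attention.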