Dataset fields (with observed value ranges):

forum_id: string (length 9–20)
forum_title: string (length 3–179)
forum_authors: sequence (length 0–82)
forum_abstract: string (length 1–3.52k)
forum_keywords: sequence (length 1–29)
forum_decision: string (22 classes)
forum_pdf_url: string (length 39–50)
forum_url: string (length 41–52)
venue: string (46 classes)
year: date (2013-01-01 to 2025-01-01)
reviews: sequence
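Each record below stores its review thread in the reviews field as parallel lists of note metadata, with each note's body JSON-encoded in structured_content_str. The following is a minimal sketch of how one might parse a single record's thread; it assumes the record is available as a Python dict with the field names listed above, and is not itself part of the dataset.

```python
import json

def parse_review_thread(record):
    """Flatten one record's review thread into a list of note dicts.

    Assumes record["reviews"] holds parallel lists keyed by "note_id",
    "note_type", and "structured_content_str", where each
    structured_content_str entry is a JSON-encoded note body
    (e.g. {"title": ..., "review": ..., "rating": ...}).
    """
    reviews = record["reviews"]
    notes = []
    for note_id, note_type, content_str in zip(
        reviews["note_id"], reviews["note_type"], reviews["structured_content_str"]
    ):
        content = json.loads(content_str)  # decode the note body
        notes.append({"note_id": note_id, "note_type": note_type, **content})
    return notes
```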
Byg5QhR5FQ
Top-Down Neural Model For Formulae
[ "Karel Chvalovský" ]
We present a simple neural model that given a formula and a property tries to answer the question whether the formula has the given property, for example whether a propositional formula is always true. The structure of the formula is captured by a feedforward neural network recursively built for the given formula in a top-down manner. The results of this network are then processed by two recurrent neural networks. One of the interesting aspects of our model is how propositional atoms are treated. For example, the model is insensitive to their names, it only matters whether they are the same or distinct.
[ "logic", "formula", "recursive neural networks", "recurrent neural networks" ]
https://openreview.net/pdf?id=Byg5QhR5FQ
https://openreview.net/forum?id=Byg5QhR5FQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SkeI3hUxxN", "B1lJNSp7JV", "SJlJykpRAQ", "rkxMYQi9A7", "HyeZl-j507", "H1gP6yjq0m", "H1xTZE693X", "HkeTjz3wnQ", "BJexSD_PnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544740013886, 1543914791337, 1543585494720, 1543316345556, 1543315688813, 1543315390519, 1541227524912, 1541026469245, 1541011256421 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1384/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1384/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1384/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1384/Authors" ], [ "ICLR.cc/2019/Conference/Paper1384/Authors" ], [ "ICLR.cc/2019/Conference/Paper1384/Authors" ], [ "ICLR.cc/2019/Conference/Paper1384/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1384/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1384/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper presents a method for building representations of logical formulae not by propagating information upwards from leaves to root and making decisions (e.g. as to whether one formula entails another) based on the root representation, but rather by propagating information down from root to leaves.\\n\\nIt is a somewhat curious approach, and it is interesting to see that it works so well, especially on the \\\"massive\\\" train/test split of Evans et al. (2018). This paper certainly piques my interest, and I was disappointed to see a complete absence of discussion from reviewers during the rebuttal period despite author responses. The reviewer scores are all middle-of-the-road scores lightly leaning towards accepting, so the paper is rather borderline. It would have been most helpful to hear what the reviewers thought of the rebuttal and revisions made to the paper.\\n\\nHaving read through the paper myself, and through the reviews and rebuttal, I am hesitantly casting an extra vote in favour of acceptance: the sort of work discussed in this paper is important and under-represented in the conference, and the results are convincing. I however, share the concerns outlined by the reviewers in their first (and only) set of comments, and invite the authors to take particular heed of the points made by AnonReviewer3, although all make excellent points. There needs to be some further analysis and explanation of these results. If not in this paper, then at least in follow up work. For now, I will recommend with medium confidence that the paper be accepted.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Probably acceptable\"}", "{\"title\": \"Updated Review\", \"comment\": \"I've read the new version of the paper and the comments of other reviewers and I've decided to increase my score.\"}", "{\"title\": \"Please clarify position\", \"comment\": \"Thank you, reviewer 1, for your review. I appreciate and understand your position regarding the lack of explanation for the model's performance. However, our field is primarily empirical, and it is common for engineering-oriented papers to produce such results which will only be properly understood and explained by further work. The literature is rife with examples, from GANs to regularization tricks for RNNs. You must ask yourself: are the results sufficiently believable? is the study conducted rigorously? 
and have the authors attempted to explain and discuss them to a reasonable extent? Please read the author response, revisions to the paper, and be prepared to reconsider your assessment or provide further justification as to why you stand by your current score, if that is what you choose to do.\"}", "{\"title\": \"Reply to your review\", \"comment\": \"Thank you for your comments. You are right that the inner working of the model is unclear. A brief Section 3.1 was added to a revised version of the paper, where the produced models are shortly analyzed. Hopefully, it sheds some light on the model.\\n\\nConcerning your second point, RNNs are used because they fit nicely into the model. It makes it possible to have potentially unlimited number of occurrences of an atom and the number of distinct atoms in a formula, which is a nice feature of the model. Hence the model can evaluate formulae that contain more atoms than formulae used for training. It is possible to use feedforward NNs, but it seems that we then mimic the unfolding of RNNs.\\n\\nBoth your minor comments were incorporated into a revised version of the paper.\"}", "{\"title\": \"Reply to your review\", \"comment\": \"Thank you for your comments. A new Section 3.1 was added to a revised version of the paper, where the inner working of the model is briefly discussed. Although it is definitely far from being conclusive, it, hopefully, sheds some light on the model.\\n\\nYour description (point 7) of how the model can possible work corresponds to the idea behind the model as described in Section 2 and discussed in new Section 3.1. An interesting point in your text is that values may change their positions in lists of truth values. In fact, something like that can actually happen, but so far, it is really unclear how to do this, because such changes have to be (almost) consistent through the whole model. Moreover, to make things even more complicated, different atoms occur at different levels (their depth) in a formula.\\n\\nYou are right (point 2) that the model, in its current form, cannot produce suitable vector encodings of propositions. For example, the model is invariant to the renaming of atoms. However, for formulae where this is no longer an issue, e.g., sentences in FOL, it is possible to imagine such interpretations even using a top-down approach.\"}", "{\"title\": \"Reply to your review\", \"comment\": \"Thank you for your comments. Indeed, the question why and when a top-down model outperforms a bottom-up model is crucial. However, as you have pointed out, it is likely a difficult question to answer. A new Section 3.1 was added to a revised version of the paper, where the inner working of the model is briefly analyzed. A top-down model was also tested on formulae from another dataset. Although the results are hard to compare directly, it seems that the model does not exploit just one particular dataset. Similarly, we can reformulate a TAUT-problem as a SAT-problem by taking the negation of formula. The results remain similar on the dataset from Evans et al., however, this is hardly surprising, because the problem remains essentially the same from the point of view of a top-down approach.\\n\\nAll your minor comments were incorporated into a revised version of the paper.\"}", "{\"title\": \"Simple interesting neural-net model of logical formulae\", \"review\": [\"In this paper, the authors provide a new neural-net model of logical formulae. 
The key feature of the model is that it gathers information about a given formula by traversing its parse tree top-down. One neural net of the model traverses the parse tree of the formula from the root all the down toward the leaves, and generates vectors for the leaves of the tree. Then, another RNN-based neural net collects these generated vectors, and answers a query asked for the formula, such as logical entailment. When experimented with Evans et al.'s data set for logical entailment queries, the authors' model outperforms existing models that encode formulae by traversing their parse trees bottom-up.\", \"I found the idea of traversing a parse tree of a formula top-down and converting it to a vector very interesting. It is also good to know that the idea leads to a competitive model for at least one dataset.\", \"However, I am hesitant to be a strong supporter for this paper. I feel that the cons and pros of the model and its design decisions are not fully analyzed or explained in the paper; when reading this paper, I wanted to learn a rule of thumb for deciding when (and why if so) a top-down model of logical formulae works better than a bottom-up model. I understand that what I ask for is very difficult to answer, but experiments with more datasets and different types of queries (such as satisfiability) might have made me happier.\", \"Here are some minor comments.\", \"Abstract: I couldn't quite understand your point about atoms. According to Figure 1, there is a neural net for each propositional symbol, and this means that your model tracks information about which occurrences of propositional symbols are about the same one. Is your point about the insensitivity of your model to a specific name given to each symbol?\", \"p1: this future ===> this feature\", \"p2: these constrains ===> these constraints\", \"p2: recursively build model ===> recursively built model\", \"p2: Change the font of R in the codomain of ci.\", \"p3: p1 at the position of ===> p1 is at the position of\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good but under-explored performance of a semi-original approach to an important problem in neural-symbolic computing\", \"review\": \"Cons\\n\\n1.\\tThere is no study of the representations developed by the model, which is unfortunate because this is a conference on learning representations and because there is little light shed on how the network achieves its rather high level of performance.\\n2.\\tIt seems less generally useful to have such a special-purpose network for computing global properties like tautologicality than to have a network that produces actual vector encodings of propositions, as typical of the bottom-up tree-structured models.\\n\\nPros\\n\\n3.\\tThe paper is quite clear.\\n4.\\tThe problem is important.\\n5.\\tThe paper pursues the familiar path of a tree-structured network isomorphic to the parse tree of a propositional-calculus formula, but with the original twist of passing information top-down rather than bottom-up.\\n6.\\tThe results are impressively strong. 
In particular, it improves by 10% absolute over the special-purpose and highly performant PossibleWorldNet on the most difficult category of problems, the \\u2018massive\\u2019 category, achieving 83.6% accuracy.\\n\\nPro/Con mix\\n\\n7.\\tAlthough the paper did not provide much insight into what was going on in the network to allow it to perform well (point 1 in \\u2018Cons\\u2019), I was able to convince myself I could understand a way the architecture *could* succeed (whether this possible approach matches the actual processing in the model I have no way of assessing). In brief, the vector that is passed down the network can be thought of as a list of truth values across multiple possible worlds of the tree node at which the vector resides. To search for a counterexample to tautologicalhood, the original input vector to the root node could be the zero (false) vector. If the kth value in the vector at a parent node labeled \\u2018or\\u2019 is 0 (the disjunction is false in world k) then in the two children the kth value must also be 0. If the kth value of the vector at an XOR node is 0, the kth value of the two children must both be 0 or both be 1; actually these values need not reside in position k so the children could both have value 0 at some position i and both have value 1 at another position j. Then in the RNN-Var component of the network, which checks for consistency across multiple tokens of the same proposition variable, each position k in all vectors for the same variable can be checked for equality, producing a value 1 in the output vector if all have value 1, producing 0 if all have value 0, and producing value -1 if the values do not all agree. Then RNN-All checks across all vectors for proposition variable types to see if there\\u2019s a position k in which no value -1 occurs; if so, the values of the variable vectors at position k give the truth values for all variables such that the overall proposition has the desired value 0: a counterexample exists. If no such position k exists, the proposition is a tautology. This seems roughly right, at least.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"In this paper the authors propose a neural model that, given a logical formula as input, predicts whether the formula is a tautology or not. Showing that a formula is a tautology is important because if we can classify a formula A -> B as a tautology then we can say that B is a logical consequence of A. The structure of the formula is a feedforward neural network built in a top-down manner. The leaves of this network are vectors (each of them represents a particular occurrence of an atom) which, after the construction of the formula, are processed by some recurrent neural networks.\\n\\nThe proposed approach seems interesting. However, my main doubt concerns the model. It seems to outperform the state-of-the-art, but the authors do not give any explanations why. There is no theoretical or intuitive explanation of why the model works. Why we need RNNs and not feedforward NNs? I think this is an big issue.\\nIn conclusion, I think that the paper is a bit borderline. The model should be better explained. However, I think that the approach is compelling and, after a minor revision, the paper could be considered for acceptance.\\n\\n[Minor comments]\\nPage 4. 
\\n\\u201cThe dataset contains train (99876 pairs)\\u201d, pairs of what?\\n\\nPage 5. \\nWhat is the measure of the values reported in Table 1? Precision?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
H1e572A5tQ
TarMAC: Targeted Multi-Agent Communication
[ "Abhishek Das", "Theophile Gervet", "Joshua Romoff", "Dhruv Batra", "Devi Parikh", "Mike Rabbat", "Joelle Pineau" ]
We explore the collaborative multi-agent setting where a team of deep reinforcement learning agents attempt to solve a shared task in partially observable environments. In this scenario, learning an effective communication protocol is key. We propose a communication protocol that allows for targeted communication, where agents learn \emph{what} messages to send and \emph{who} to send them to. Additionally, we introduce a multi-stage communication approach where the agents co-ordinate via several rounds of communication before taking an action in the environment. We evaluate our approach on several cooperative multi-agent tasks, of varying difficulties with varying number of agents, in a variety of environments ranging from 2D grid layouts of shapes and simulated traffic junctions to complex 3D indoor environments. We demonstrate the benefits of targeted as well as multi-stage communication. Moreover, we show that the targeted communication strategies learned by the agents are quite interpretable and intuitive.
[ "agents", "communication", "tarmac", "communication tarmac", "collaborative", "setting", "team", "deep reinforcement", "task", "observable environments" ]
https://openreview.net/pdf?id=H1e572A5tQ
https://openreview.net/forum?id=H1e572A5tQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HkeK6o3wg4", "HyxYwslR0X", "H1ezrqgRCm", "rygy0x6aRm", "Bklshep6RX", "H1xbixp6Rm", "S1e-enr50Q", "B1g8KsS5Am", "HklSMiBqA7", "SJlVpcB50m", "r1lZZ5r5RQ", "HklTK_ChTX", "BJgU41z93X", "S1lmskpu27", "S1xvVXP_nm" ], "note_type": [ "meta_review", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545223105186, 1543535457017, 1543535161723, 1543520454803, 1543520434631, 1543520409084, 1543293928644, 1543293821781, 1543293708928, 1543293628504, 1543293433040, 1542412420803, 1541181230474, 1541095323109, 1541071662824 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1383/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1383/Authors" ], [ "~Akshat_Agarwal1" ], [ "ICLR.cc/2019/Conference/Paper1383/Authors" ], [ "ICLR.cc/2019/Conference/Paper1383/Authors" ], [ "ICLR.cc/2019/Conference/Paper1383/Authors" ], [ "ICLR.cc/2019/Conference/Paper1383/Authors" ], [ "ICLR.cc/2019/Conference/Paper1383/Authors" ], [ "ICLR.cc/2019/Conference/Paper1383/Authors" ], [ "ICLR.cc/2019/Conference/Paper1383/Authors" ], [ "ICLR.cc/2019/Conference/Paper1383/Authors" ], [ "~Akshat_Agarwal1" ], [ "ICLR.cc/2019/Conference/Paper1383/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1383/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1383/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers raised a number of concerns including the lack of clarity of various parts of the paper, lack of explanation, incremental novelty, and insufficiently demonstrated significance of the proposed. The authors\\u2019 rebuttal addressed some of the reviewers\\u2019 concerns but not fully. Overall, I believe that the paper presents some interesting extensions for multi-agent communication but in its current form the paper lacks explanations, comparisons and discussions. Hence, I cannot recommend this paper for presentation at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Metareview\"}", "{\"title\": \"Response to comment on targeted communication\", \"comment\": \"Yes, we think targeted communication implies targeting in both directions. Just the receiver deciding who to listen to would be targeted listening. Just the sender deciding who to send messages to would be targeted speaking/broadcasting. What we have is targeted two-way communication.\"}", "{\"comment\": \"1) To the best of my understanding, this still means that all agents are sending a message to each other, and the attention weights depend both on a \\\"signature\\\" produced by the sender, and a \\\"query\\\" produced by the receiver. In my opinion, targeted communication would generally mean that the sender chooses a subset of agents to send a message to at each time step, while you seem to mean that the sender has an impact on how much attention the receiver chooses to pay to its message. I think that was the primary confusion behind my question.\\n2) Agreed.\\n\\nThanks for your reply!\", \"title\": \"thanks for your response\"}", "{\"title\": \"Request for feedback\", \"comment\": \"Hi Reviewer1 \\u2014 thank you once again for your feedback on our work! We were wondering if you had any updated thoughts / feedback / questions following our response. We'd be happy to address additional concerns (if any). 
Please let us know either way. Thanks!\"}", "{\"title\": \"Request for feedback\", \"comment\": \"Hi Reviewer2 \\u2014 thank you once again for your feedback on our work! We were wondering if you had any updated thoughts / feedback / questions following our response. We'd be happy to address additional concerns (if any). Please let us know either way. Thanks!\"}", "{\"title\": \"Request for feedback\", \"comment\": \"Hi Reviewer3 \\u2014 thank you once again for your feedback on our work! We were wondering if you had any updated thoughts / feedback / questions following our response. We'd be happy to address additional concerns (if any). Please let us know either way. Thanks!\"}", "{\"title\": \"Response to Reviewer 1 (part 2)\", \"comment\": \"> \\u201cThe messages is factorized into two parts k and u leading to a vector of size D -- what happens should we have one message of size D (rather than factorizing into 2), something like this would control for any improvements obtained from increases the parameters of the model.\\u201d\\n\\nSee figure 3, where we compare effect of increasing message size (i.e. adding more parameters) vs. multiple rounds of communication. Although this is still with the factorization into two parts, it captures change in performance with increase in model parameters. We find that simply increasing message size has little change in performance, and most of the gains come from multiple rounds of communication.\\n\\n> \\u201cFinally, if the premises of the paper is to define more effective communication protocols, evident in the use of continuous communication, (rather than studying what form can multi-agent communication etc etc), a necessary baseline (especially in cases where agents share reward), is to communicate the full observation (rather than a function of it). This baseline is not presented here and it's absolutely necessary.\\u201d\\n\\nOn all 3 tasks studied in this work, a setting where each agent communicates its complete observation as the message, performs as well as TarMAC, and both outperform no attention (i.e. mean pooling messages). This is expected, since our environments are perceptually less complex than real-world scenarios. In principle, learning to communicate a function of the observation as in TarMAC allows compact representations of the observation to be transmitted, which is desirable in high-dimensional real-world observation spaces, where it would be infeasible and/or expensive to communicate complete observations, for example, a network of cars perceiving through a host of sensors, or a team of robots playing soccer.\"}", "{\"title\": \"Responses to Reviewer 1 (part 1)\", \"comment\": \"We thank the reviewer for their insightful feedback!\\n\\n> \\u201cMy main concern is the following: the method is not about targeting, but about selectively hearing. If agents are sharing the reward then why should targeted communication be beneficial at all? Isn't the optimal strategy to just communicate everything to everyone? I understand that they should be selective at the listening side to properly integrate only the relevant information (so, attend over all received messages), but why should we expect the speaker to apriori know who this message should go to? 
Moreover, I don't really understand how targeted communication can even work (in the way the authors explain it) since the agents have partial information (e.g., in shapes they only see 5x5 around them), so they don't really know who is where -- but I could potentially see this working should the agents put information about their own identity and location. So, given the positive results that the authors get, my understanding is that the signature doesn't have information about who should the recipient of the information be but more about what where the properties of the sender of this information. So, based on my understanding, I don't feel that the flow of the story quite matches what is really happening and this might be very confusing for prospective readers. Can the authors elaborate on this, aim i getting things wrong?\\u201d\\n\\nIn the SHAPES environment, in addition to a 5x5 image observation as input, the agents also get as input -- 1) an embedding of the goal they are supposed to navigate to, and 2) their own coordinates. 2 was missing from the description in the paper, we have added it in the revised version (apologies for this). So yes, agents do know where they are, and each agent is free to communicate any of this information with other agents.\\n\\nEach agent predicts three vectors for communication \\u2014 signature, query, and value (Fig 1 and Eq 1). The communication is targeted because the attention probabilities are a function of both the sender\\u2019s signature and receiver's query vectors. So it is not just the receiver deciding how much of each message to listen to. That is, it is not just targeted listening. The sender also sends out signatures that affects how much of each message is sent to each receiver. \\n\\nThe sender's signature could encode parts of its observation most relevant to other agents' goals (for example, it would be futile to convey coordinates in the signature). And the message value could contain the agent's own location. For example, in Fig 2a, we see that when agent 2 passes by blue, agent 4 starts attending to agent 2. Here, agent 2's signature encodes the color it observes (which is blue), and agent 4's query encodes its goal (which is also blue) leading to high attention probability. And agent 2's message value likely encodes coordinates for agent 4 to navigate to. We have included a discussion on this in Section 5.1.\\n\\n> \\u201cThere is literally no information about model size (or at least I wasn't able to find any). Is there any weight-sharing across agents? Do you obtain CommNets by using the implementations of the authors or by ablating the signature-part of your model? \\u201c\\n\\nEach agent's GRU hidden state is 128-d, message signature/query is 16-d, and message value is 32-d (unless specified otherwise). We have updated section 5 with model size details (in orange). And yes, as mentioned in section 4, all agents share the same set of parameters.\\n\\nResults for CommNets are from their paper (https://arxiv.org/abs/1605.07736), and we benchmark our models on the same environment configurations as their paper using code obtained from the authors.\\n\\n> \\u201cMoreover, why do agents have a limited view window on the SHAPES -- is (targeted) communication redundant when agents have full observability?\\u201d\\n\\nYes, communication is not needed when agents have full observability. 
In SHAPES and House3D, agents would know where the goal is, and in Traffic Junction, they would know the position of every other car, so they can navigate and maximize reward without having to communicate.\\n\\n> \\u201cThe part about how multi-staged communication is implemented is quite cryptic at the moment -- is multi-staged the fact that the message is outputted by processing with a recurrent unit?\\u201d\\n\\nMulti-stage communication refers to the fact that agents are allowed to aggregate and exchange messages multiple times before taking one action in the environment. Concretely, Eq 4 is used to compute an updated hidden state for each agent from the aggregated message at previous timestep, followed by repeating Eq 1-3 to perform the next round of exchange of messages.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their insightful feedback!\\n\\n> \\u201cEqn (4) looks like a vanilla RNN. Did you experience any issues around exploding or vanishing gradients when doing multiple rounds of communication? Why not use a gated architecture here?\\u201d\\n\\nYes, for a fair comparison to CommNets (Sukhbaatar et al., 2016) we use a formulation similar to a vanilla RNN. Indeed, gated recurrent units and other techniques can be employed to stabilize training in RNNs in multi-stage communication and would be interesting to explore in the future. This is orthogonal to the goal of this work though -- which is to develop a simple inter-agent targeting mechanism through attention. Moreover, our vanilla network trains fairly reliably for the 3 tasks we studied in our work.\\n\\n> \\u201c\\\"Centralized Critic\\\" section: This equation is from the COMA paper, ie. a centralised critic with policy gradients rather than DDPG. What did you use for the variance reduction baseline to estimate the advantage? Also, did you try conditioning the critic on the central state rather than the concat of observations? Formally this is required for the algorithm to be convergent.\\u201d\\n\\nThanks for the pointer, we\\u2019ve cited both in the revised version. The equation corresponds to equation 4 from Lowe et al., 2017 (https://arxiv.org/abs/1706.02275) as well. Following Lowe et al., 2017, we do not condition the critic on the global state, but only on joint observations of all agents, and we do not use a variance reduction baseline. We will experiment with conditioning the critic on global state in future.\\n\\n> \\u201cHow many independent seeds are the results averaged over? Did you check if any of these numbers are significant? This is my single biggest concern with the paper. Currently it's unclear whether attention is required at all in the settings presented.\\u201d\\n\\nAll results are averaged over 5 independent runs with different seeds. The revised version has standard errors for all results. And yes, all the discussed trends are significant, i.e. wherever our submission claimed superior performance over no-attention across all 3 tasks (Table 2-4), they still hold.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their insightful feedback!\\n\\n> \\u201c1) The idea of multi-stage communication is great, but the paper doesn't have a strong point to support this contribution. Could the authors illustrate the benefit of multi-stage e.g. vs. the communication channel width?\\u201d\\n\\nWe evaluated TarMAC on the hard variant of the traffic junction task with 1-stage and 2-stage communication and varying message value sizes. 
As can be seen in figure 3 in the revised draft, multiple rounds of communication leads to significantly higher performance than simply increasing message size, demonstrating the advantage of multi-stage communication. In fact, decreasing message size to a single scalar performs almost as well as when the message is 64-d (note that signature and query sizes were fixed at 32-d while we changed message value size), perhaps because even a single real number can be sufficiently partitioned to cover the space of meanings/messages that need to be conveyed for this task. We have added this discussion at the end of section 5.2.\\n\\n> \\u201c2) In DIAL, the authors introduce a \\\"null\\\" action, what is the difference of that and multi-stage?\\u201d\\n\\nOur understanding is that the reviewer is referring to the \\\"None\\\" action in the switch riddle game in DIAL. If that's the case, then the main difference is that the \\\"None\\\" action is an environment action that has an impact on the environment itself - whereas during our multi-stage communication, no environment actions are taken, but rather the agents are deliberating internally, sending back-and-forth messages multiple times before taking an environment action.\\n\\n> \\u201c3) It is not clear to the reader what is the contribution of targeted communication vs. non-targeted as it looks a solution to the mean-pooling. Could the authors include at least one more experiment with on an architecture that doesn't use mean pooling. From an architecture perspective there is a scalability benefit of using pooling, but if that's the only one it has to be made more clear. 4) Following (3) based on Reddit there was a recent code release in python https://github.com/minqi/learning-to-communicate-pytorch. An alternative would be to evaluate TarMAC to one of the test beds, but the paper misses baselines.\\u201d\\n\\nThe \\\"No attention\\\" baselines in tables 2 and 4, and CommNets in table 3 all rely on mean-pooling, as opposed to TarMAC, which makes use of attentional pooling. TarMAC outperforms all mean-pooling baselines across SHAPES, Traffic Junction, and House3D. Results for CommNets are from their paper (https://arxiv.org/abs/1605.07736), and we benchmark our models on the same environment configurations as their paper using code obtained from the authors.\\n\\nThe learnt communication is targeted because the attention probabilities are a function of both the sender\\u2019s signature and receiver's query vectors. So it is not just the receiver deciding how much of each message to listen to. That is, it is not just targeted listening. The sender also sends out signatures that affects how much of each message is sent to each receiver. For example in SHAPES, the sender can direct a message to \\u201cthose looking for red objects\\u201d by encoding this in the signature. We have included a detailed discussion on this at the end of section 5.1 in the revised version.\\n\\nAn architecture with no message pooling mechanism (attentional, mean, etc.) and with message concatenation instead has several crucial limitations -- 1) number of parameters scale linearly with number of agents, 2) no support for variable number of agents at training/test time -- both severely limiting scalability. 
For instance, in the traffic junction environment, the number of active cars in the system keeps changing across timesteps (violet curve in Fig 4c), so this experiment just cannot be run in this environment.\\n\\nSo yes, TarMAC provides scalability benefits owing to attentional pooling -- by supporting a compact model size while allowing variable team sizes -- but also imparts intermediate interpretability to the communication channel through predicted attention probabilities, and outperforms mean-pooling across experiments.\"}", "{\"title\": \"Response to question about 1) targeted communication, 2) comparison to VAIN\", \"comment\": \"Thanks for your comments!\\n\\n> \\u201cThis paper claims that the agents choose who to send messages to, however from Figure 1 and Section 4 it appears that each agent outputs a message=<signature,value> which is sent to ALL agents, who then use dot-product attention to give more or less weight (or importance) to messages from certain agents. So, each agent receives a message from all other agents, and then uses attention to aggregate these messages (instead of just taking a mean as done in CommNet, Sukhbaatar et al. 2016). Can you please elaborate on how the communication is targeted?\\u201d\\n\\nThe learnt communication is targeted because the attention probabilities are a function of both the sender\\u2019s signature and receiver's query vectors. So it is not just the receiver deciding how much of each message to listen to. That is, it is not just targeted listening. The sender also sends out signatures that affects how much of each message is sent to each receiver. \\n\\nFor example in SHAPES, the sender can direct a message to \\u201cthose looking for red objects\\u201d by encoding this in the signature. We have included a detailed discussion on this at the end of section 5.1 in the revised version.\\n\\n> \\u201cAnother point: in 'VAIN: Attentional Multi-agent Predictive Modeling' (Hoshen 2017, published in NIPS 2017), each agent uses a similar attention mechanism to aggregate messages from other agents (however it uses an exponential kernel function instead of dot-product). Apart from the particular form of attention, can you please elaborate on the difference between your work and VAIN?\\u201d\\n\\nYes, VAIN proposes to replace averaging by a similar attentional mechanism to allow targeted interactions between agents. While closely related to our communication architecture, their work only considers fully supervised one-next-step prediction tasks, while we tackle the full reinforcement learning problem with tasks requiring planning over time horizons. Our submission already includes a discussion on this in section 2.\"}", "{\"comment\": \"This paper claims that the agents choose who to send messages to, however from Figure 1 and Section 4 it appears that each agent outputs a message=<signature,value> which is sent to ALL agents, who then use dot-product attention to give more or less weight (or importance) to messages from certain agents. So, each agent receives a message from all other agents, and then uses attention to aggregate these messages (instead of just taking a mean as done in CommNet, Sukhbaatar et al. 2016). 
Can you please elaborate on how the communication is targeted?\", \"another_point\": \"in 'VAIN: Attentional Multi-agent Predictive Modeling' (Hoshen 2017, published in NIPS 2017), each agent uses a similar attention mechanism to aggregate messages from other agents (however it uses an exponential kernel function instead of dot-product). Apart from the particular form of attention, can you please elaborate on the difference between your work and VAIN?\", \"title\": \"How is communication targeted when each agent is receiving messages sent by all other agents and then deciding how much importance to give to each agent's message?\"}", "{\"title\": \"Interesting extensions for multi-agent communication, it misses some baselines to illustrate the benefits of the contribution.\", \"review\": \"The authors present a multi-agent communication architecture where, agents can use targeted communication and can perform multiple communication steps. The paper is well written and easy to follow.\", \"comments\": \"1) The idea of multi-stage communication is great, but the paper doesn't have a strong point to support this contribution. Could the authors illustrate the benefit of multi-stage e.g. vs. the communication channel width?\\n\\n2) In DIAL, the authors introduce a \\\"null\\\" action, what is the difference of that and multi-stage?\\n\\n3) It is not clear to the reader what is the contribution of targeted communication vs. non-targeted as it looks a solution to the mean-pooling. Could the authors include at least one more experiment with on an architecture that doesn't use mean pooling. From an architecture perspective there is a scalability benefit of using pooling, but if that's the only one it has to be made more clear.\\n\\n4) Following (3) based on Reddit there was a recent code release in python https://github.com/minqi/learning-to-communicate-pytorch. An alternative would be to evaluate TarMAC to one of the test beds, but the paper misses baselines.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"An interesting extension of the 'learning to communicate' work using targeted messages and multiple rounds of communication.\", \"review\": \"The authors propose a new architecture for learning communication protocols. In this architecture each message consists of a key and a value. When receiving the message the listener produces an attention key that is used to selectively attend to some messages more than other using soft attention. This differs from the typical 'broadcasting' protocols learned in literature.\\n\\nQuestions / Comments: \\n- Eqn (4) looks like a vanilla RNN. Did you experience any issues around exploding or vanishing gradients when doing multiple rounds of communication? Why not use a gated architecture here? \\n- \\\"Centralized Critic\\\" section: This equation is from the COMA paper, ie. a centralised critic with policy gradients rather than DDPG. What did you use for the variance reduction baseline to estimate the advantage? Also, did you try conditioning the critic on the central state rather than the concat of observations? Formally this is required for the algorithm to be convergent. \\n- How many independent seeds are the results averaged over? 
\\n- The attention mechanism seems to provide very little value across all experiments: \\n-- 84.9% vs 82.7% \\n-- 89.5% vs 89.6% \\n-- 64.3% vs 68.9% \\nDid you check if any of these numbers are significant? This is my single biggest concern with the paper. Currently it's unclear whether attention is required at all in the settings presented. It would be good to see eg. the TarMAC 2-stage on the traffic junction (97.1%) ablated without attention.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Paper review\", \"review\": \"The authors present a study on multi-agent communication.\\nSpecifically, they adapt communication to be targeted and multi-staged.\\nExperiments on 2 synthetic datasets and 1 3D visual dataset confirm that both additions are beneficial\\n\\nOverall, this paper was somewhat clear and more importantly includes experiments on House3D, a more realistic dataset.\", \"my_main_concern_is_the_following\": \"the method is not about targeting, but about selectively hearing.\\nIf agents are sharing the reward then why should targeted communication be beneficial at all? Isn't the optimal strategy to just communicate everything to everyone? I understand that they should be selective at the listening side to properly integrate only the relevant information (so, attend over all received messages), but why should we expect the speaker to apriori know who this message should go to? Moreover, I don't really understand how targeted communication can even work (in the way the authors explain it) since the agents have partial information (e.g., in shapes they only see 5x5 around them), so they don't really know who is where -- but I could potentially see this working should the agents put information about their own identity and location. So, given the positive results that the authors get, my understanding is that the signature doesn't have information about who should the recipient of the information be but more about what where the properties of the sender of this information. So, based on my understanding, I don't feel that the flow of the story quite matches what is really happening and this might be very confusing for prospective readers. Can the authors elaborate on this, aim i getting things wrong?\\n\\nThere is literally no information about model size (or at least I wasn't able to find any). Is there any weight-sharing across agents? Do you obtain CommNets by using the implementations of the authors or by ablating the signature-part of your model? Moreover, why do agents have a limited view window on the SHAPES -- is (targeted) communication redundant when agents have full observability? The part about how multi-staged communication is implemented is quite cryptic at the moment -- is multi-staged the fact that the message is out-putted by processing with a recurrent unit? 
The messages is factorized into two parts k and u leading to a vector of size D -- what happens should we have one message of size D (rather than factorizing into 2), something like this would control for any improvements obtained from increases the parameters of the model.\\n\\nFinally, if the premises of the paper is to define more effective communication protocols, evident in the use of continuous communication, (rather than studying what form can multi-agent communication etc etc), a necessary baseline (especially in cases where agents share reward), is to communicate the full observation (rather than a function of it). This baseline is not presented here and it's absolutely necessary.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
S1ecm2C9K7
Feature-Wise Bias Amplification
[ "Klas Leino", "Emily Black", "Matt Fredrikson", "Shayak Sen", "Anupam Datta" ]
We study the phenomenon of bias amplification in classifiers, wherein a machine learning model learns to predict classes with a greater disparity than the underlying ground truth. We demonstrate that bias amplification can arise via inductive bias in gradient descent methods resulting in overestimation of importance of moderately-predictive ``weak'' features if insufficient training data is available. This overestimation gives rise to feature-wise bias amplification -- a previously unreported form of bias that can be traced back to the features of a trained model. Through analysis and experiments, we show that while some bias cannot be mitigated without sacrificing accuracy, feature-wise bias amplification can be mitigated through targeted feature selection. We present two new feature selection algorithms for mitigating bias amplification in linear models, and show how they can be adapted to convolutional neural networks efficiently. Our experiments on synthetic and real data demonstrate that these algorithms consistently lead to reduced bias without harming accuracy, in some cases eliminating predictive bias altogether while providing modest gains in accuracy.
[ "bias", "bias amplification", "classification" ]
https://openreview.net/pdf?id=S1ecm2C9K7
https://openreview.net/forum?id=S1ecm2C9K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SylHw4RWlE", "Hyxcbl9SyV", "SJlKHfoNJN", "SJl3QeBz1E", "SkeSFyHzyN", "B1l3c_5e14", "rkek5d6R0Q", "Syx3fvaAAm", "r1ei7VroCX", "S1x3UjW7RQ", "ByelNylQ0m", "S1xctjvOaQ", "SJxsEivOaQ", "S1g-livup7", "H1lSw16ch7", "BkldIeZq27", "r1xgFKumnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544836189315, 1544032258292, 1543971393098, 1543815203807, 1543815037457, 1543706772338, 1543587974657, 1543587604121, 1543357474659, 1542818644340, 1542811431775, 1542122370055, 1542122291148, 1542122217197, 1541226333094, 1541177424443, 1540749688461 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1382/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1382/Authors" ], [ "ICLR.cc/2019/Conference/Paper1382/Authors" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1382/Authors" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1382/Authors" ], [ "ICLR.cc/2019/Conference/Paper1382/Authors" ], [ "ICLR.cc/2019/Conference/Paper1382/Authors" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1382/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The authors identify a source of bias that occurs when a model overestimates the importance of weak features in the regime where sufficient training data is not available. The bias is characterized theoretically, and demonstrated on synthetic and real datasets. The authors then present two algorithms to mitigate this bias, and demonstrate that they are effective in experimental evaluations.\\nAs noted by the reviewers, the work is well-motivated and clearly presented. Given the generally positive reviews, the AC recommends that the work be accepted. The authors should consider adding additional text describing the details concerning Figure 3 in the appendix.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"well written paper with theoretical and experimental validation\"}", "{\"title\": \"Thank you for following up\", \"comment\": \"Thank you for your continued feedback. We ran the experiment suggested, where \\\\mu_1 = (1,0,1,0,1,...,0), and this results in no systematic bias (with a setup similar to that of Figure 2(a), but with 200 weak features - 100 per class - and N=1000, the average bias over 100 trials was 0.00031, which would round to 0.0% using the same precision as in Table 1). We believe this result makes sense: since permuting the order of the features does not affect the result, it would not be possible to have bias when the features are entirely symmetric, because the orientation of the features can be reversed simply by permuting them when there are the same number of features oriented in each direction. 
We show empirically that the weaker features are more likely to be overestimated by SGD, but without any asymmetry, we would expect that this would affect both classes equally in expectation.\\n\\nWe agree that the asymmetry need not be precisely in the *number* of weak features, as it was in the synthetic data. For example, some weak features may be weaker than others, and there may be a disparity in the total strength of the features for each class. Thus, more complicated cases may be slightly harder to analyze. In this vein, on the real data, feature parity is likely often overly simple, as it doesn\\u2019t necessarily balance the total strength of the features for each class. Experts are more targeted towards balancing the strength of the features rather than only the number, which was likely more appropriate for most of the real datasets.\\n\\nThank you for your suggestion regarding the additional information on the datasets. In Section 5 we briefly note that we selected datasets based on high feature dimensionality, but we can also include a table with further details in the appendix.\\n\\nWe agree that overfitting likely happens to some extent on these datasets during training. What is interesting is that our techniques are post-hoc, meaning that models are not retrained following feature selection, they are simply pruned. Intuitively, this could perhaps be interpreted to mean the strong features were learned well and the overfitting happens primarily with the weak features. Aside from being interesting from the perspective of understanding the bias/overfitting, our techniques are specifically targeted towards removing bias when improving accuracy, while we see, e.g., L1 cannot typically accomplish both of these goals. Comparison to some of the methods in Li et al. would be interesting in the context of bias. However, even if the performance of other methods were the same, our methods could still be preferable in some contexts because they are easily and quickly applied post-hoc, and are easily extended to deep networks.\\n\\nWe apologize, the equation bias + accuracy <= 100 is correct; upon reviewing our data it appears we rounded 0.0980 incorrectly to 0.0100 when writing it into Table 1 (the bias and accuracy were 0.0980 -> 9.8 and 0.9019 -> 90.2 respectively). We will update Table 1 with this correction.\"}", "{\"title\": \"Thank you for following up\", \"comment\": \"Apologies; you are correct that the direction of the bias is not immediately clear from Table 1, as Table 1 reports absolute bias (since this is what we would like to minimize). In our experiments we observed that the bias was in fact in the same direction as the feature asymmetry in prostate (i.e., bias with sign is -47.3); while we do not highlight this fact in the paper specifically, we will update the table to include the sign of the bias so the direction agreement is also clear.\"}", "{\"title\": \"thanks\", \"comment\": \"Thanks for your suggestions. I made my criticisms more clear and left a comment for the authors.\"}", "{\"title\": \"Thanks for clarifying\", \"comment\": \"Thanks for your answer and the revision. The writing and the structure of the paper are much better now. 
I still have two issues with the paper.\\n\\n1) You\\u2019ve suggested asymmetry of the features is one of the reasons that SGD leads to systematic bias (e.g., you have written: \\u201cWhen the data is distributed asymmetrically with respect to features\\u2019 orientation towards a class, gradient descent may lead to systematic bias\\u201d). I\\u2019m wondering what the reason behind this claim is?\\n\\nIn your synthetic dataset (Figure 2) all the features are asymmetric; and, you did not study presence of bias when there are lots of symmetric weak features (e.g., instead of \\\\miu_1 = (1,0,1,1,...1); assuming \\\\miu_1 = (1,0,1,-1,1,-1,\\u2026, 1)\\n\\nIn the experiment section building upon this claim, you introduced feature parity to mitigate bias; however, feature parity does not have a very good performance in comparison to the other method (Experts). So, I\\u2019m not sure how much I can believe that asymmetry causes bias.\\n\\n\\n2) I took a look at the statistics of some of the datasets in your experiment (datasets from Li et al., 2016), and I realized in some datasets there are 10X to 100X more features than instances. E.g., the prostate has 100 instances while 50K features (I would suggest having a small table about statistics of the datasets).\\nGiven these statistics, it is somehow clear that overfitting happens in the training; therefore, improvement of the accuracy is not surprising (note that in the prostate increment in accuracy causes the reduction in bias; all the error (all the 10%) are still toward one of the classes).\\n\\nI\\u2019m wondering if there is anything special about your feature selection methods. I mean if I use other feature selection methods how do they perform regarding the bias reduction? As I checked (in Li et al., 2016), some other feature selection methods increase the accuracy comparable or sometimes better than your methods.\\n\\nAgain, I would like to mention that I really liked the idea of showing weak features cause systematic bias; and I liked that you experimentally showed even with p*=0.5, SGD leads to systematic bias.\", \"minor\": \"Is the below equation right?\\nBias <= 100 \\u2013 accuracy\\nwhy does this not hold for prostate dataset?\"}", "{\"title\": \"Thank you for clarifying\", \"comment\": \"The rewritten sections are much clearer. The comparison between LR/SVM & L-BFGS/ SGD is really impressive. The comparison between LR/SVM without SGD makes it even more interesting to identify when bias asymmetry will be linked to bias, and so that when feature re-balancing helps.\\n\\n\\\"namely, the bias is typically in the direction of the feature imbalance, even when this is at odds with the prior bias (as is the case in prostate).\\\" I am confused. prostate has asymm<0.5 and bias>0. Is it the same direction?\"}", "{\"title\": \"Very good clarification\", \"comment\": \"I really like the statement: \\\"Rather, we aim to point out that in the case of \\u201cavoidable\\u201d bias, there is no such trade-off, as bias and accuracy are not in conflict.\\\" It's entirely possible that I just missed it, but I think a statement of this type and some discussion of the broader trade-offs would go very well in the introduction. People are thinking a lot about this issue and I think this paper makes a good argument that, in fact, there may be some low hanging fruit where there is basically no trade-off at all.\"}", "{\"title\": \"Please be specific in these criticisms\", \"comment\": \"I think I disagree on some of these criticisms. 
In particular:\\n\\n1) \\\"they did not show this experimentally or theoretically. ( for example, by making synthetic datasets with different amount of asymmetry)\\\": Figure 2 shows a number of synthetic experiments examining the source of the bias. If there is a specific experiment that you think the authors should run, please give details so that the authors can improve their work.\\n\\n2) Can you be specific about why you found their results unsurprising? Why does the composition of the dataset make it unsurprising that the proposed method works? As I see it, they proposed a method to solve a problem and it successfully solved that problem, surprise seems irrelevant. If you think that the solution has appeared somewhere else or that there is an existing method that would correct this bias, please specify it so that the authors can add the appropriate citation or comparison.\"}", "{\"title\": \"Thank you for following up\", \"comment\": \"Thank you for your further feedback on the story of the paper. To answer your specific questions: we do not believe that the form of bias amplification identified in this paper as \\u201cavoidable\\u201d occurs whenever unbalanced features are present, as we observed that linear SVM models trained using SMO do not exhibit it (Figure 3 in the appendix); this form of bias amplification does not just occur in linear models, as we observed it in the two deep convolutional networks presented in our evaluation (Table 1). We agree that a more general result that pinpoints why SGD overestimates weak features is interesting and an avenue of future work. We see the contributions in this paper as a necessary first step towards answering these more general \\u201cwhy\\u201d questions, and look forward to further analysis of this phenomenon as future work.\\n\\nWe appreciate your suggestions on framing our claims, and will revise the writing accordingly prior to future submission or publication to ensure that our precise claims are clear and not overstated.\\n\\nWe certainly agree that in some cases we may reasonably want to sacrifice accuracy for bias. In these cases we might, e.g., use a notion of fairness to guide how we handle the trade-off. It was not our intention to take a specific position on this trade-off, or to weigh in on defining fairness. Rather, we aim to point out that in the case of \\u201cavoidable\\u201d bias, there is no such trade-off, as bias and accuracy are not in conflict. Mitigating feature-wise bias may be used in conjunction with other techniques in the context of fairness.\"}", "{\"title\": \"Clarification\", \"comment\": \"I think I should clarify a bit what I mean when I say \\\"this would be a much more general result\\\" and why I think it would make the paper better. As I see it, the main contribution of the paper is an observation that weak features lead to bias amplification in logistic regression models when the parameters are estimated using SGD. To be clear, I think this is a valuable observation in and of itself, and the authors are rigorous in confirming and describing this observation (section 3.2 of the updated paper); however, the scope of this observation is unclear. For example, does bias amplification occur in any setting with weak features regardless of the model and optimization method used (I assume not, but this is not evaluated)? Does bias amplification occur in any classification model trained using SGD or only linear models? 
At the core of these questions is the \\\"why\\\" question: \\\"what are the properties of LR, SGD, or their combination that lead to bias amplification in the presence of weak features?\\\" Answering this question would be a more general result because it would let us identify the problem in other settings without the need for experimentation and would allow us to propose fixes that are based on addressing the root cause rather than heuristics.\"}", "{\"title\": \"Thank you for clarifying\", \"comment\": \"RE: The source of bias - In light of this comment, I think you need to be *very* careful about how you describe the sources of bias in the paper. For example, the second paragraph of section 3.2 in the updated paper says \\\"Logistic regression models make fewer assumptions about the data and are therefore more widely-applicable, but as we demonstrate in this section, this flexibility comes at the expense of an inductive bias that can lead to systematic bias in predictions.\\\" I read this as implying that LR is the source of the bias which your experiments seems to suggest it isn't. As another example, the last paragraph of section 3.2.1 in the updated paper says \\\"Figure 2c suggests that overestimation of weak features is precisely the form of inductive bias exhibited by gradient descent when learning logistic classifiers.\\\" Your analysis suggests that it is largely due to SGD rather than general gradient descent so I would replace any mention of \\\"gradient descent\\\" with \\\"SGD\\\". In light of the updates, I think this paper would be a lot stronger if it focused on identifying and describing the source of the bias (this would be a much more general result), but is still worth publishing if the authors are careful about the scope of their claims.\", \"re\": \"\\\"it can be considered equally problematic to sabotage accuracy in order to reduce bias\\\" - I would argue that this is exactly what we want to do in many settings where we care about bias. For example, we should be willing to sacrifice accuracy in recidivism prediction in order to avoid racial bias. A focus on accuracy first is exactly the mindset that has led to algorithmic fairness becoming a serious issue.\"}", "{\"title\": \"Thank you for your thoughtful feedback\", \"comment\": \"While logistic regression is often on the logit scale, we tried to consistently use the probability scale in our analysis and experiments. If the paper contains any inconsistencies on this matter, we would appreciate knowing where they appeared so that we can address them. However, we would like to better understand the reviewer\\u2019s concern about unbiasedness failing to be invariant under transformation, and how we could have otherwise targeted our approach to better address the problem. With additional details, we hope to be able to address your concern.\\n\\nIn (7) (formerly 6), we are minimizing the bias of the model over the choices of alpha and beta subject to not harming accuracy. It is true that when optimizing, the bias and accuracy of the model are necessarily obtained via an empirical estimation, so it is possible that the alpha and beta chosen wouldn\\u2019t generalize well to the test data. We treated these as normal hyperparameters in our experiments. 
The numbers reported in Table 1 report the bias and accuracy on the test data, while the optimization problem from (7) was solved on the training data, so we are reasonably confident that in practice the optimal alpha and beta generalize well, even in high-dimensional settings.\\n\\nOur aim was to identify the phenomenon of feature-wise bias on a class of problems that are sufficiently controlled so that we can make reasonable conclusions about the source of the bias. In the general case, beyond mean-field Gaussian, it may be harder to identify the source of the bias, as many sources may be interacting at once (e.g., feature-wise, class-imbalance, correlated features, etc.). We believe the results in Table 1 shed some light on the general case, namely, the bias is typically in the direction of the feature imbalance, even when this is at odds with the prior bias (as is the case in prostate). Furthermore, on some of the datasets (arcene in particular), balancing the number of features was quite effective at removing bias while improving accuracy, suggesting that a reasonable portion of the bias was caused by feature asymmetry.\"}", "{\"title\": \"Thank you for your thoughtful feedback\", \"comment\": \"We agree that the results we have presented do not indicate that SGD is the exclusive cause of the bias-inducing behavior examined in the paper. We note that LR will, given enough data, converge to the Bayes-optimal classifier, and because the data used in Figure 2 has an unbiased prior, we would expect no bias in the predictions according to Thm. 1. However, we posit that feature-wise bias occurs when the learner has not seen enough data to converge. While we observed this consistently with models trained using SGD, it may indeed happen when other methods are used to learn the coefficients from insufficient data. On the other hand, different methods may yield different models when training ends prior to convergence.\\n\\nWe have updated the paper with additional results that shed more light on the sources of bias in linear models. Figure 3 in the appendix depicts the bias of classifiers trained using the same data as in Figure 2, including LR trained with either L-BFGS or SGD, linear SVM trained with either SMO or SGD, and SGD using modified Huber and squared hinge losses. In short, while LR trained with L-BFGS does exhibit some bias, it is not as pronounced or consistent as it is in models trained with SGD, whereas all the models trained with SGD exhibited nearly identical bias trends. In slightly more detail, LR trained without SGD was less sensitive to the number of weak features, i.e., there was less bias than LR trained with SGD until there was a sufficiently high number of weak features, and even then, the effect was not as strong. Furthermore, SVM trained without SGD exhibited essentially no such bias, while SVM trained with SGD exhibited the same bias as LR with SGD. These results suggest that while the bias-inducing behavior may occur when other methods are used, they consistently follow from the use of SGD.\\n\\nThank you for your feedback on the related work section, we have moved it to the front of the paper as suggested.\\n\\nThank you for your comment about L1 versus experts method parameters--upon review, the wording in the experiments section is not clear. We did use the same procedure for finding the hyperparameter for L1 regularization as for the experts technique, i.e. 
we optimized for minimizing bias subject to the constraint that accuracy should not decrease from the original model. You may have noticed that on the glioma dataset, the accuracy goes down for L1. We conjecture that this is caused by the hyperparameter not generalizing well to the test data, as we evaluated hyperparameters on the training data. We have updated the writing in Section 4 to clarify this.\\n\\nIt\\u2019s not immediately clear what distinguishes the prostate data from the others, but upon inspection, prostate has a rather high Mahalanobis distance between classes compared to many of the other datasets. This might suggest there was a lot of room for improvement on this dataset (i.e., the bias was largely preventable because the classes are well-separated). Like most of the other datasets, prostate had a huge disparity in the number of data points (small) to features (large), so it is perhaps unsurprising that despite having the classes fairly well-separated in its feature space, a model with no regularization was unable to generalize well on it. Furthermore, prostate was the only dataset for which the feature disparity opposed the prior bias (and moreover the bias went in the direction of the features rather than the prior), so perhaps the feature-wise bias was the most significant source of bias in this example. It may be an interesting avenue for future work to investigate whether, e.g., Mahalanobis distance between classes, is a good predictor for the effectiveness of our techniques on real data.\\n\\nIn Section 3 (previously Section 2), paragraph 2, we state the goal (minimizing 0-1 loss) of the \\u201cstandard binary classification problem,\\u201d not the overall goal of our paper. In fact, our goal is not exactly to generally minimize bias along with loss; we note that there are multiple possible sources of bias, only some of which are avoidable when optimizing accuracy. Namely, as stated in Theorem 1, an optimal classifier may necessarily be biased in some cases. Our goal is to remove bias that is not \\u201cnecessary\\u201d in this way, which is not easily captured by additional terms in the training objective. Our work identifies feature-wise bias as one type of preventable or \\u201cunnecessary\\u201d bias, and attempts to remove it in a targeted fashion with post-hoc feature selection. In other words, we want our model to be no more biased than the most accurate predictor, which may still have some bias according to Theorem 1 (but we consider this bias unavoidable because it can be considered equally problematic to sabotage accuracy in order to reduce bias).\\n\\nThank you for your minor comments as well, we have addressed them in the updated paper.\"}", "{\"title\": \"Thank you for your thoughtful feedback\", \"comment\": \"Thank you for your comments regarding the previous work section. We have included a more in-depth comparison to other work around bias in GNB in our update to the paper.\\n\\nWe have updated Section 2.2 (now Section 3.2) with a more precise description of the data used in that section, which was constructed to exemplify the feature asymmetry we describe. 
We hope that it clears up some of the confusion in that part of the paper, and are willing to revise with additional clarifications if needed.\\n\\nRegarding the claim that bias follows from an inductive bias of SGD, the argument is that because we see bias when we train SGD-LR in a setting where the Bayes-optimal classifier would have no bias, the bias cannot be explained by Theorem 1 (i.e., as bias that is inevitable when optimizing accuracy), hence we conclude the bias must have been caused by the learning rule (SGD-LR). While the inductive bias may not be uniquely attributable to SGD, and instead may be a consequence of using LR regardless of how the coefficients were obtained, we found that LR models trained on the same data using other methods, such as L-BFGS, did not result as much consistent bias as LR trained with SGD. Moreover, training with SGD using other loss functions, such as hinge, modified-Huber, and perceptron, resulted in the same bias characteristics as shown in Figure 2. Thus, linear classifiers trained with SGD consistently show the inductive bias we describe, whereas comparable classifiers trained using other methods may not. We have included an additional figure (Fig. 3 in the appendix) that details these results.\\n\\nIn our experiments we compare our feature selection method targeted at feature-wise bias to L1 regularization. We are not aware of other feature selection methods intended to mitigate the bias we target in the paper, but are willing to include additional comparisons if there are comparable approaches that we missed.\\n\\nWe additionally added results for L1 regularization on CIFAR. In general, L1 is harder to apply to the deep network scenarios because training takes a long time, making the hyperparameters hard to tune.\\n\\nThank you also for your formatting comments; we have addressed them in the updated version of the paper.\"}", "{\"title\": \"Interesting result, need more comparison in the experiment section, need more explaining of related work\", \"review\": \"In this paper, the authors studied bias amplification. They showed in some situations bias is unavoidable; however, there exist some situations in which bias is a consequence of weak features (features with low influence to the classifier and high variance). Therefore, they used some feature selection methods to remove weak features; by removing weak features, they reduced the bias substantially while maintaining accuracy (In many cases they even improved accuracy). Showing that weak features cause bias is very interesting, especially in their real-world dataset in which they improved bias and accuracy simultaneously.\\n\\n\\nMy main concerns about this paper are its related work and its writing.\\nAuthors did a great job in reviewing related work for bias amplification in NLP or vision. \\nHowever, they studied bias amplification in binary classification, in particular, they looked at GNB; and they did not review the related work about bias in GNB. I think it is clear that using MAP causes bias amplification. Therefore, I think changing theorem 1 to a proposition and shifting the focus of the paper to section 2.2 would be better. Right now, I found feature orientation and feature asymmetry section confusing and hard to understand. In the paper, the authors claimed bias is a consequence of gradient descent\\u2019s inductive bias, but they did not expound on the reasoning behind this claim. Although the authors ran their model on many datasets, there is no comparison with previous work. 
So it is hard to understand the significance of their work. It is also not clear why they don\\u2019t compare their model with \\\\ell_1 regularization in CIFAR.\", \"minor\": \"Paper has some typos that can be resolved.\\nCitations have some errors, for example, Some of the references in the text does not have the year, One paper has been cited twice in two different ways, For more than two authors you should use et al., sometimes \\\\citet and \\\\citep are used instead of each other.\\nAuthors sometimes refer to the real-world experiment without first explaining the data which I found confusing.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Well-motivated paper with a good balance of novel insight and practical methods\", \"review\": \"Overall, I found the paper well-written, the problem well-motivated, and the proposed methods clear and reasonable. While I have a few concerns about presentation and experimentation, these are issues that can easily be remedied and I recommend acceptance.\", \"major_comments\": [\"The authors repeatedly say that gradient descent is the cause of the bias amplification (e.g. Section 2.2 title, \\\"...features that are systematically overestimated by gradient descent.\\\", \\\"... i.e., a consequence of gradient descent's inductive bias.\\\", \\\"... gradient descent may lead to systematic bias...\\\"). The inductive bias they describe is coming from the use of logistic regression, not the use of gradient descent. Specifically, a logistic regression model has a convex likelihood, which means that regardless of what algorithm is used to maximize the likelihood, it should converge to the same point. In fact, most off-the-shelf implementations of logistic regression do not use vanilla gradient descent. Further, gradient descent may be used to estimate the parameters of any number of models which may or may not have the same inductive bias the authors describe.\", \"I thought the related work section was well-written and would strongly recommend moving it to the beginning of the paper as it motivates the entire problem. I also think it could be helpful to ground the technical definitions of bias amplification in a meaningful example.\", \"I think that the experimental setup for comparing \\\\ell_1 regularization to the proposed feature selection methods is not quite fair. In particular, the hyperparameters of the \\\"expert\\\" method are selected to minimize bias subject to the constraint that loss not increase. In contrast, the \\\\ell_1 regularization hyperparameter is selected purely to minimize bias. Instead, I would select the \\\\ell_1 regularization hyperparameter in the same way as the expert method, that is, to minimize bias subject to a constraint on loss. In general, I think hyperparameters should be selected using the same criterion for all methods.\", \"The authors make a point of highlighting results on the \\\"prostate\\\" which showed a large increase in accuracy along with a large decrease in bias. I think the paper would benefit from some exploration of why this happened. Specifically, it would be valuable to answer the question: what are the properties of the \\\"prostate\\\" dataset that make this method so effective and are these properties general and identifiable a priori?\", \"Section 2, paragraph 2, line 5: The stated goal in this paragraph is \\\"minimizing 0-1 loss on unknown future i.i.d. 
samples\\\". As stated in the introduction, this is, in fact, not the goal. The goal is to minimize loss while also minimizing bias. A larger criticism that I would have of this work is: if minimizing bias is a first order goal, then why are we using empirical risk minimization in the first place? Put another way, why use post-hoc correction for an objective function that does not match our actual stated goals rather than using an objective function that does?\"], \"minor_comments\": [\"Section 1, paragraph 4, line 2: \\\"Weak\\\" is not clearly defined here. Is it different than \\\"moderately-predictive\\\"?\", \"Section 2.1, last paragraph, line 1: I understand what the authors are saying when they say \\\"Bias amplification is unavoidable\\\", but it is avoidable by changing our objective function. I would consider rewording this statement to something like \\\"Using an ERM objective will lead to bias amplification when the learning rule...\\\"\", \"Equation 4: I believe h should be changed to f in this equation.\", \"Equation 6: L is not defined anywhere.\", \"Table 1: As defined in equation 1, B_D(h_s) should be between 0 and 1. Also, the accuracy results for the glioma dataset have the wrong result in bold.\", \"Section 4, methodology paragraph, line 5: forthe --> for the\", \"Section 5, paragraph 5, lines 5-6: Feature selection is not used \\\"only to improve accuracy\\\". For example, Kim, Shah, and Doshi-Valez (2015) use feature selection to improve interpretability (https://beenkim.github.io/papers/BKim2015NIPS.pdf).\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"some insights on predictive bias and weak features\", \"review\": \"update: The authors' feedback has addressed some of my concerns. I update my rating to 6.\\n=================\", \"original\": \"This paper provides some new insights into classification bias. On top of the well known unbalanced group size, it shows that a large number of weak but asymmetry weak features also leads to bias. This paper also provides a method to reduces bias and remain the prediction accuracy.\\n\\nIn general, the paper is well written, but some description can be clearer. Some notation seems inconsistent. For example, D in equation (1) denotes the joint distribution (x,y), but it also refers to the marginal distribution of x somewhere else. \\n\\nIn the high level, I am not totally convinced of how significant the result is. In particular, the bias this paper defines is on the probability (softmax) scale, but logistic regression is on logit scale-- not even aimed at the unbiasedness in the original scale. So the result in section 2 seems to be expected. Given the fact that unbiasedness is not invariant under transformation, I am wondering why it should be the main target in the first place. \\n\\nIn the bias reduction methods in equation 5 and 6, both the objective function and the constraint are empirical estimations. Will it be too noisy to adapt to the high dimensional setting? On the other hand, adding some sparsity regularization improves prediction seems well known in practice.\\n\\nI would also encourage the authors to have extended work both theoretically and experimentally. The asymmetry feature is only illustrated by a single logistic regression. Is it a problem of weak features, or indeed a problem of logistic regression? What will happen in a more general case beyond mean-field Gaussian? 
I would imagine in this simple case the authors may even derive the closed form expression to verify their heuristics. \\n\\nBased on the evaluations above, I would recommend a weak reject.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
rJxF73R9tX
Knows When it Doesn’t Know: Deep Abstaining Classifiers
[ "Sunil Thulasidasan", "Tanmoy Bhattacharya", "Jeffrey Bilmes", "Gopinath Chennupati", "Jamal Mohd-Yusof" ]
We introduce the deep abstaining classifier -- a deep neural network trained with a novel loss function that provides an abstention option during training. This allows the DNN to abstain on confusing or difficult-to-learn examples while improving performance on the non-abstained samples. We show that such deep abstaining classifiers can: (i) learn representations for structured noise -- where noisy training labels or confusing examples are correlated with underlying features -- and then learn to abstain based on such features; (ii) enable robust learning in the presence of arbitrary or unstructured noise by identifying noisy samples; and (iii) be used as an effective out-of-category detector that learns to reliably abstain when presented with samples from unknown classes. We provide analytical results on loss function behavior that enable automatic tuning of accuracy and coverage, and demonstrate the utility of the deep abstaining classifier using multiple image benchmarks. Results indicate significant improvement in learning in the presence of label noise.
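For concreteness, a minimal sketch of what an abstention-augmented cross-entropy loss of this kind can look like is given below (PyTorch; the extra output index, the weight `alpha`, and all names are illustrative assumptions rather than the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def abstaining_loss(logits, targets, alpha=1.0, eps=1e-7):
    """Cross-entropy over k real classes plus an extra abstention output.

    logits:  shape (batch, k + 1); the last column is the abstention class.
    targets: shape (batch,), integer labels in [0, k).
    alpha:   penalty weight discouraging the model from always abstaining.
    """
    probs = F.softmax(logits, dim=1)
    p_abstain = probs[:, -1].clamp(min=eps, max=1.0 - eps)
    # Probability of the true class, renormalized over the k real classes.
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1).clamp(min=eps)
    ce_term = -(1.0 - p_abstain) * torch.log(p_true / (1.0 - p_abstain))
    # Regularizer that grows as the abstention probability approaches 1.
    abstain_term = alpha * torch.log(1.0 / (1.0 - p_abstain))
    return (ce_term + abstain_term).mean()

# Toy usage: 8 examples, 10 real classes plus 1 abstention output.
logits = torch.randn(8, 11, requires_grad=True)
targets = torch.randint(0, 10, (8,))
loss = abstaining_loss(logits, targets)
loss.backward()
```

The weighting by (1 - p_abstain) lets the network trade off fitting a hard or noisy example against routing it to the abstention output, while the alpha term keeps abstention from becoming the trivial solution.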
[ "deep learning", "robust learning", "abstention", "representation learning", "abstaining classifier", "open-set detection" ]
https://openreview.net/pdf?id=rJxF73R9tX
https://openreview.net/forum?id=rJxF73R9tX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJxomwfNlE", "ryxJqsOcAQ", "H1ga8q_cCX", "rkxlX7u9Am", "SklpLawcAm", "rkgyC8D9Am", "B1l87E4-CQ", "BklSo4p9nm", "Hklv2HpdhX", "rJlVL2PuhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544984354808, 1543306119023, 1543305812856, 1543303959824, 1543302485079, 1543300806856, 1542698013643, 1541227677069, 1541096879098, 1541073995976 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1381/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1381/Authors" ], [ "ICLR.cc/2019/Conference/Paper1381/Authors" ], [ "ICLR.cc/2019/Conference/Paper1381/Authors" ], [ "ICLR.cc/2019/Conference/Paper1381/Authors" ], [ "ICLR.cc/2019/Conference/Paper1381/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1381/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1381/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1381/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers felt that the method was natural and the writing was mostly clear (although could be improved by providing better signposting and fixing typos). However, there was also general agreement that comparison to other methods was weak; one reviewer also points out that the way that the reported numbers compare the methods on different sets of data, which might be an inaccurate measure of performance (this is more minor than the overall issue of lack of comparisons). While the authors provided more comparison experiments during the author response, it is in general the responsibility of authors to have a close-to-final work at the time of submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"natural idea but insufficient comparison to other methods\"}", "{\"title\": \"On baselines and comparisons\", \"comment\": \"Thank you for your suggestions.\\n\\nPlease see updated Section 3.1 and 3.3 for risk coverage curves involving softmax thresholds and the selective guaranteed risk method described in [1]. We use the authors' implementation in [2]. \\n\\nThe updated results in Section 4 on CIFAR_10 and CIFAR-100 report the performance of the DAC on residual and wide residual networks. See Section 4, and Table 1.\\n\\nRegards,\\nDAC Authors.\\n\\n[1] - Geifman, Yonatan, and Ran El-Yaniv. \\\"Selective classification for deep neural networks.\\\" Advances in neural information processing systems. 2017.\\n\\n[2] https://github.com/geifmany/selective_deep_learning\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for the detailed comments and numerous pointers to existing work; these were very helpful. We have taken these into account in the updates to the paper.\\n\\n In particular, based on your suggestion, we have added [1] and [2] as baselines in the updated results in Section 4 (Table 1). We note (as we did in the summary above), that this is the most significant update to the paper. Using these and other comparisons, we present results in Section 4 that illustrate the strong performance benefits of the DAC in the label cleaning scenario. As you suggested., we have also added discussion (Section 4) on the advantages of the DAC compared to numerous other existing works in this field.\\n\\nIn regards to your point 1., structured noise is an occurrence in real-world data in many scenarios. 
See for example the discussions in [3],[4] and [5] (which we have added to the paper). Also, in our own work with cancer data, we have seen correlations between the features of the data and the reliability of the labels. The noise in these cases are seldom i.i.d.\\n\\nRegards,\\nDAC Authors.\\n\\n\\n[1] Z. Zhang and M. Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In NIPS, 2018.\\n\\n[2] L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.\\n\\n[3]Nico G\\u00f6rnitz, Anne Porbadnigk, Alexander Binder, Claudia Sannelli, Mikio Braun, Klaus-Robert\\nM\\u00fcller, and Marius Kloft. Learning and evaluation in presence of non-iid label noise. In Artificial\\nIntelligence and Statistics, pp. 293\\u2013302, 2014.\\n\\n[4]Anne K Porbadnigk, Nico G\\u00f6rnitz, Claudia Sannelli, Alexander Binder, Mikio Braun, Marius Kloft,\\nand Klaus-Robert M\\u00fcller. When brain and behavior disagree: Tackling systematic label noise in\\neeg data with machine learning. In Brain-Computer Interface (BCI), 2014 International Winter\\nWorkshop on, pp. 1\\u20134. IEEE, 2014.\\n\\n[5]Carla E Brodley and Mark A Friedl. Identifying mislabeled training data. Journal of artificial\\nintelligence research, 11:131\\u2013167, 1999.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your comments and suggestions on improving the paper.\\n\\nWe have added new comparisons to abstention mechanisms based on softmax thresholding and selective guaranteed risk (see response above) in Sections 3.1 and 3.2. Results show the performance boost resulting from using the DAC in scenarios with structured noise. These include numerous accuracy-coverage curves for clearer comparisons.\\n\\nNumerous comparisons have also been added to the results in Section 4. Please see Table 1 and accompanying discussion.\\n\\nRegarding the need for smudging in the openset detection task (Section 6), please see caption for figure 5 (this was inadvertently missing in the first submission) and the discussion in Section 6 that illustrates the procedure. The training process of the DAC results in the smudge (or any fixed feature) being strongly associated with the abstention class, except in the presence of features of known classes. In the latter case, the activation of the fixed feature is suppressed and class features are dominant. When class features are not present, the fixed feature is dominant. The filter visualizations in Figure 5 illustrate this phenomenon.\\n\\nDuring inference, the image to be classified is augmented with the fixed feature, and unless known class features suppress the activation of the fixed feature, the classification is always routed to the abstention class. One might think of the fixed feature as a \\\"feature threshold\\\" that needs to be overcome by an object from a known class to be recognized as one.\\n\\nNote, that it is merely a matter of convenience that we chose the same fixed feature (smudge) in the open set detection task as in Section 3.1. The feature can be any pattern that is not expected to occur in the images of interest.\\n\\nFinally, as you suggested, a layout description has been added at the end of Section 1 to better guide the reader.\\n\\nRegards,\\nDAC Authors.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your comments and suggestions on improving the paper.\\n\\n1. 
The loss function introduced in the DAC allows for a new way of learning in the presence of noise. By allowing an abstention option while training (which to the best of our knowledge, has not been explored elsewhere), the DAC is able to very effectively learn signals that are indicative of noise (this is the structured noise scenario) and improve classification performance. We also provide an updated discussion on motivation at the beginning of section 3.\\n\\n2. Please see updated results and risk-coverage plots in section 3.1 and 3.2 where we compare to other abstention mechanisms. In particular, we compare to softmax thresholds as well as the recently proposed selective guaranteed risk in [1]. While the DAC offers improved performance (in terms of accuracy and coverage) in these settings, it can also be used alongside these methods for quantifying uncertainty.\\n\\n3. Typos, citations, formatting errors and missing captions have been fixed.\\n\\nIn addition new results, comparison to multiple baselines and discussion of existing works have been added to Section 4. \\n\\nRegards,\\nDAC Authors.\\n\\n1] - Geifman, Yonatan, and Ran El-Yaniv. \\\"Selective classification for deep neural networks.\\\" Advances in neural information processing systems. 2017.\"}", "{\"title\": \"Summary of Updates to Paper\", \"comment\": \"The authors thank the reviewers and commenters for their feedback and actionable suggestions for the paper. Since the main concern raised was lack of comparisons to existing work, we mainly address that in the update to the paper. The most significant update in this regard is Section 4 \\u2014learning in the presence of unstructured noise \\u2014 as most existing works tackle this kind of noise. We compare to multiple baselines and demonstrate the strong performance of the DAC in this setting (Section 4, Table 1) .\\n\\nThe DAC was originally conceived as a representation learner for structured noise (even though it has proved useful in other scenarios as well, as detailed in the paper). The beginning of section 3 has been updated with discussions on the motivation and citations to relevant work that discuss this type of noise . Even though here are very few works addressing the issue of structured noise in deep learning, as suggested by reviews and comments, we have added comparisons of the DAC to other abstention mechanisms in this setting (Sections 3.1 and 3.2) . We also show how the noise learning property of the DAC can be used in conjunction with such mechanisms to improve predictive performance.\\n\\nIn summary, updated results indicate that the DAC is a very effective booster of performance in the presence of multiple types of noise. The added performance gain as well as the simplicity of implementation makes it a strong contender for being part of a deep learning pipeline that involves learning in the presence of noise.\"}", "{\"comment\": \"The idea presented in this paper is sound. However, I feel that the experimental part is a bit weak in the demonstration of the method performance itself, and it is more focused on presenting the properties and use-cases of the method (such as DAC as a data cleaner).\\n\\n- It would be interesting to see a direct comparison (in the sense of risk coverage curves) to [1] (a post training thresholding method). 
\\n- Some other uncertainty estimation methods such as MC-dropout [2], KNN distance [3] and ensemble [4] can also be compared as a post-training thresholding uncertainty measure.\\n-The VGG network for Cifar-10/100 is over-parameterized. It is interesting to see your results over a top performing architecture for these dataset (e.g., Wide residual networks or Dense-net) or even a modified version of VGG that have been adapted to these datasets.\\n\\n\\n[1] - Geifman, Yonatan, and Ran El-Yaniv. \\\"Selective classification for deep neural networks.\\\" Advances in neural information processing systems. 2017.\\n[2] - Gal, Yarin, and Zoubin Ghahramani. \\\"Dropout as a Bayesian approximation: Representing model uncertainty in deep learning.\\\" international conference on machine learning. 2016.\\n[3] - Mandelbaum, Amit, and Daphna Weinshall. \\\"Distance-based Confidence Score for Neural Network Classifiers.\\\" arXiv preprint arXiv:1709.09844 (2017).\\n[4] - Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. \\\"Simple and scalable predictive uncertainty estimation using deep ensembles.\\\" Advances in Neural Information Processing Systems. 2017.\", \"title\": \"Missing baselines and direct comparison to existing work\"}", "{\"title\": \"Good paper, writing and comparison need to be improved\", \"review\": \"The paper introduces a new loss function for training a deep neural network which can abstain.\\nThe paper was easy to read, and they had thorough experiments and looked at their model performance in different angles (in existence of structured noise, in existence of unstructured noise and open world detection). However, I think this paper has some issues which are listed below:\\n\\n\\n1) Although there are very few works regarding abstaining in DNN, I would like to see what the paper offers that is not addressed by the existing literature. Right now, in the experiment, there is no comparison to the previous work, and in the introduction, the difference is not clear. I think having an extra related work section regarding comparison would be useful.\\n\\n2) The experiment section was thorough, and the authors look at the performance of DAC at different angles; however, as far as I understand one of the significant contributions of the paper is to define abstain class during training instead of post-processing (e.g., abstaining on all examples where the network has low confidence). Therefore, I would like to see a better comparison to a network that has soft-max score cut-off rather than plain DNN. In figure 1-d the comparison is not clear since you did not report the coverage. I think it would be great if you can compare either with related work or tune a softmax-score on a validation set and then compare with your method. \\n\\n3) There are some typos, misuse of \\\\citet instead of \\\\citep spacing between parenthesis; especially in figures, texts overlap, the spacing is not correct, some figures don\\u2019t have a caption, etc.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Re: Abstention classifiers\", \"review\": [\"This manuscript introduces deep abstaining classifiers (DAC) which modifies the multiclass cross-entropy loss with an abstention loss, which is then applied to perturbed image classification tasks. 
The authors report improved classification performance at a number of tasks.\", \"Quality\", \"The formulation, while simple, appears justified, and the authors provide guidance on setting/auto-tuning the hyperparameter.\", \"Several different settings were used to demonstrate their modification.\", \"There are no comparisons against other rejection/abstention classifiers or approaches. Post-learning calibration and abstaining on scores that represent uncertainty are mentioned and it would strengthen the argument of the paper since this is probably the most straightforward altnerative approach, i.e., learn a NN, calibrate predictions, have it abstain where uncertain.\", \"The comparison against the baseline NN should also include the performance of the baseline NN on the samples where DAC chose not to abstain, so that accuracies between NN and DAC are comparable. E.g. in Table 1, (74.81, coverage 1.000) and (80.09, coverage 0.895) have accuracies based on different test sets (partially overlapping).\", \"The last set of experiments adds smudging to the out-of-set (open set) classification tasks. It is somewhat unclear why smudging needs to be combined with this task.\", \"Clarity\", \"The paper could be better organized with additional signposting to guide the reader.\", \"Originality\", \"Material is original to my knowledge.\", \"Significance\", \"The method does appear to work reasonably and the authors provide detail in several use cases.\", \"However, there are no direct comparison against other abstainers and the perturbations are somewhat artificial.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Comparsion with \\\"Generalized cross entropy loss for training deep neural networks with noisy labels\\\".\", \"review\": \"This paper formulates a new deep method called deep abstaining classifer. Their main idea is to introduce a new modified loss function that utilizes an absention output allowing the DNN to learn when abstention is a better option. The core idea resemble KWIK framework [1], which has been theoretical justified.\", \"pros\": \"1. The authors find a new direction for learning with noisy labels. Based on Eq. (1) (the modified loss), the propose \\\\alpha auto-tuning algorithm, which is relatively novel. \\n\\n2. The authors perform numerical experiments to demonstrate the efficacy of their framework. And their experimental result support their previous claims.\\nFor example, they conduct experiments on CIFAR-10 and CIFAR-100. Besides, they conduct experiments on open-world detection dataset.\", \"cons\": \"We have three questions in the following.\\n\\n1. Clarity: in Section 3, the author claim real-world data is corrupted in some non-arbitrary manner. However, in practice, it is really hard to reason the corrpution procedure for agnostic noisy dataset like Clothing1M [2]. The authors are encouraged to explain this point more.\\n\\n2. Related works: In deep learning with noisy labels, there are three main directions, including small-loss trick [3], estimating noise transition matrix [4,5], and explicit and implicit regularization [6]. I would appreciate if the authors can survey and compare more baselines in their paper.\\n\\n3. Experiment: \\n3.1 Baselines: For noisy labels, the author should compare with [7] directly, which is highly related to your work. Namely, designing new loss function can overcome the issue of noisy labels. 
Without this comparison, the reported result has less impact. Moreover, the authors should add MentorNet [2] as a baseline https://github.com/google/mentornet\\n\\n3.2 Datasets: For datasets, I think the author should first compare their methods on symmetric and aysmmetric noisy data. Besides, the authors are encouraged to conduct 1 NLP dataset.\", \"references\": \"[1] L. Li, M. Littman, and T. Walsh. Knows what it knows: a framework for self-aware learning. In ICML, 2008.\\n\\n[2] T. Xiao, T. Xia, Y. Yang, C. Huang, and X. Wang. Learning from massive noisy labeled data for image classification. In CVPR, 2015.\\n\\n[3] L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.\\n\\n[4] G. Patrini, A. Rozza, A. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, 2017.\\n\\n[5] J. Goldberger and E. Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In ICLR, 2017.\\n\\n[6] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. ICLR, 2016.\\n\\n[7] Z. Zhang and M. Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In NIPS, 2018.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
ryxY73AcK7
Sorting out Lipschitz function approximation
[ "Cem Anil", "James Lucas", "Roger B. Grosse" ]
Training neural networks subject to a Lipschitz constraint is useful for generalization bounds, provable adversarial robustness, interpretable gradients, and Wasserstein distance estimation. By the composition property of Lipschitz functions, it suffices to ensure that each individual affine transformation or nonlinear activation function is 1-Lipschitz. The challenge is to do this while maintaining the expressive power. We identify a necessary property for such an architecture: each of the layers must preserve the gradient norm during backpropagation. Based on this, we propose to combine a gradient norm preserving activation function, GroupSort, with norm-constrained weight matrices. We show that norm-constrained GroupSort architectures are universal Lipschitz function approximators. Empirically, we show that norm-constrained GroupSort networks achieve tighter estimates of Wasserstein distance than their ReLU counterparts and can achieve provable adversarial robustness guarantees with little cost to accuracy.
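A minimal sketch of the group-wise sorting activation described above (PyTorch; the contiguous-group reshape and the default group size are illustrative assumptions):

```python
import torch

def group_sort(x, group_size=2):
    """Sort pre-activations within contiguous groups of `group_size` units.

    With group_size == 2 this reduces to MaxMin: each pair (a, b) maps to
    (max(a, b), min(a, b)). Sorting only permutes values, so the activation
    preserves the gradient norm, unlike ReLU, which can zero out coordinates.
    """
    batch, width = x.shape
    assert width % group_size == 0, "width must be divisible by group size"
    grouped = x.view(batch, width // group_size, group_size)
    sorted_groups, _ = torch.sort(grouped, dim=-1, descending=True)
    return sorted_groups.view(batch, width)

# Toy usage on a batch of pre-activations.
x = torch.randn(4, 8)
h_maxmin = group_sort(x, group_size=2)   # MaxMin activation
h_wider = group_sort(x, group_size=4)    # larger sorting groups
```

Used between norm-constrained affine layers, such an activation keeps each layer 1-Lipschitz while avoiding the loss of gradient norm that motivates the paper's analysis.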
[ "deep learning", "lipschitz neural networks", "generalization", "universal approximation", "adversarial examples", "generative models", "optimal transport", "adversarial robustness" ]
https://openreview.net/pdf?id=ryxY73AcK7
https://openreview.net/forum?id=ryxY73AcK7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1x1KpBqfN", "SkxeNOzqfV", "B1geukTDGV", "rylEXpJnl4", "Hkx58AXfgV", "rkeWyGAwp7", "BJx3P-AvpX", "Hkx-jx0v6Q", "HkgwSlAPpQ", "BJgZYz8167", "B1evMmjA2X", "rJxFhEd63Q", "HkgdKwx93Q", "H1gaveTEn7" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1547488630957, 1547474984211, 1547321191756, 1545497883853, 1544859217674, 1542083032620, 1542082915612, 1542082713132, 1542082623186, 1541526136939, 1541481230945, 1541403825187, 1541175168286, 1540833381045 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1380/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1380/Authors" ], [ "ICLR.cc/2019/Conference/Paper1380/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1380/Authors" ], [ "ICLR.cc/2019/Conference/Paper1380/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1380/Authors" ], [ "ICLR.cc/2019/Conference/Paper1380/Authors" ], [ "ICLR.cc/2019/Conference/Paper1380/Authors" ], [ "ICLR.cc/2019/Conference/Paper1380/Authors" ], [ "ICLR.cc/2019/Conference/Paper1380/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1380/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1380/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1380/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Sorry! Did not intend to make statement about your paper\", \"comment\": \"Thanks for raising this issue. My comment was not intended to be a statement about your paper, but I understand your concern. So to clarify, I was not saying that your paper was a \\\"rough draft\\\", etc.; that was meant in reference to what the anonymous commenter's proposed policy would incentivize. I agree with you that it would be inappropriate for an area chair to refer to a submission in that way (especially without providing concrete details).\\n\\nI do think it is absolutely appropriate (and perhaps even necessary) to explain the reasoning behind an accept/reject decision, and so I stand by my decision to respond to the anonymous commenter. For instance, I have now explicitly stated a policy that I follow regarding papers that make substantial changes during the revision phase, and why I follow that policy. This allows the community to discuss the policy and judge whether it is good or bad, which will hopefully allow me and others to improve the review process in the future. The main change I would make in retrospect is to explicitly clarify that my comment was not directed at the present submission.\\n\\nI also do acknowledge that you got less engagement from the reviewers on your submission than would be ideal, and hope that you fare better in the next round of submission. I personally found the paper to be an interesting read.\"}", "{\"title\": \"OpenReview is not the place to debate hypotheticals\", \"comment\": \"Posting publicly because the AC has not responded to our private comment.\\n\\nI get that it's fun to engage in debates about our community's publication standards, and that you want to defend your decision against criticism. But I wish you would be a bit more careful in this context. OpenReview is archival, and you are posting in your official capacity as AC, so any comment you make will be interpreted as referring to our submission. 
Your (I assume inadvertent) implication that our submission was the sort of thing the process needs to disincentivize is careless and misleading.\\n\\nOur original submission was a finished paper, and all of the algorithms and mathematical results were already in more or less their final form. In the revision, we added a bunch of new experiments, and did a global rewrite for clarity (which I think is what the revision period is intended for). Our original submission was not perfect by any means, and two of the three reviewers gave us insightful and constructive feedback that helped us improve the paper. But my students and I simply do not submit half-baked work.\\n\\nThe anonymous commenter does raise an important issue with the review process, namely that R1 did not take the time to read the paper even once. This wasn't because it was \\\"incomplete\\\", but because it had some typos scattered throughout, such as \\\\citet vs. \\\\citep. I would never, as an AC, endorse this as a legitimate reason not to write a proper review. I did have some papers in my AC batch which were genuinely incomplete (e.g. 6 pages), and even then I still insisted the reviewers read the paper and write real reviews.\\n\\n- Roger\"}", "{\"title\": \"Need to think about incentives\", \"comment\": \"The problem with the policy that you suggest is that it creates poor incentives that would further strain an already strained reviewing system. If we allowed substantial revisions past the reviewing deadline, then everyone would be incentivized to submit rough drafts at time of submission and then revise later. In such a world, would there even be a point in reviewers looking at papers at time of submission? This would be equivalent to pushing the submission deadline a month later (the notification deadline would also have to be pushed back since reviewers would need time to review all the revisions). I understand wanting to propagate ideas sooner but it is the author's responsibility to have a finished paper by the submission deadline.\\n\\nIn addition, there are many aspects of reviewing a paper other than deciding whether the paper is interesting (e.g. does the idea actually make sense, are all key claims substantiated, etc.). My judgment was that reviewers would not be able to adequately judge these without essentially performing a second set of reviews.\"}", "{\"title\": \"\\\\citet vs. \\\\citep is no excuse not to write a real review\", \"comment\": \"We understand that R1 is probably very busy and did not want to read our paper twice. But we would have appreciated if they could find the time to read it once.\"}", "{\"metareview\": \"This paper presents an interesting and theoretically motivated approach to imposing Lipschitz constraints on functions learned by neural networks. R2 and R3 found the idea interesting, but R1 and R2 both point out several issues with the submitted version, including some problems with the proof--probably fixable--as well as a number of writing issues. The authors submitted a cleaned-up revised version, but upon checking revisions it appears the paper was almost completely re-written after the deadline. I do not think reviewers should be expected to comment a second time on such large changes, so I am okay with R1's decision to not review the updated version. 
Future reviewers of a more polished version of the paper will be in a better position to assess its merits in detail.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"interesting and potentially impactful idea but needs revisions\"}", "{\"title\": \"Sorry for the poor presentation. Please, take another look!\", \"comment\": \"We are deeply sorry that you felt the paper was not in a position to be given a complete review. We acknowledge that the paper was certainly lacking polish (as also noted by reviewer 2) and accept that this may have made the paper difficult to read in places.\\n\\nWe have uploaded a revised version which is tidier and without so many of the unfortunate errors you spotted previously. The revised version also presents the theoretical results more cleanly with some substantial improvements to the experiments. We hope that you will provide a more complete review at this time.\"}", "{\"title\": \"Feedback integrated into revised version. Thank you!\", \"comment\": \"Thank you for your detailed comments. We have uploaded a revised version of the paper which we believe addresses the majority of your concerns. You can find more detailed responses below.\", \"concern_1\": \"Is GroupSort leading to bad networks? (Integrate the topology of inputs)\\n\\nCould you clarify what you mean by \\u201cintegrate the topology of inputs\\u201d? Interpreting this as \\u201cis GroupSort a niche activation?\\u201d, we respond with the following: GroupSort is able to recover many common activation functions, for example ReLU, MaxOut, Concatenated ReLU, absolute value (now detailed in Appendix A). Importantly, it is often able to do this even with norm constrained weights (note that ReLU cannot recover GroupSort in this case). The main difference then will be how difficult GroupSort networks are to train. We have found practically that GroupSort networks are typically as easy to train as their ReLU counterparts. We trained wide ResNets using MaxMin and achieved comparable performance to ReLU. We also trained CelebA WGANs using MaxMin activations in the critic network without any issues. Importantly, in each case we used the suggested optimization hyperparameters tuned for ReLU and found that MaxMin worked too.\", \"concern_2\": \"Proof of Theorem 1 is incorrect.\\n\\nThank you for taking the time to carefully investigate this result. While we are confident that the result is correct, we have rewritten the proof in an attempt to make it clearer.\", \"to_address_your_points_directly\": \"we have modified the statement to hold almost everywhere, in which case we need not discuss sub differentials and may use differentiability directly. For your comment about the Cauchy-Schwarz inequality, note that the product cannot be larger than 1, as each individual component of the product has to be less than or equal to 1 (by the 1-Lipschitz constraint). Hence, each component must itself be 1. We have made this explicit in the revised proof, by bounding the product of norms above and below by 1. We have also removed the three-line result expressed in the appendix and instead baked it into the proof as part of the induction step. Finally, we have extended this result to hold in the setting of vector-valued inputs. Thank you for pointing out these issues to us. 
We hope that the improvements we\\u2019ve presented will clarify the proof for you but would be happy to discuss this further.\", \"concern_3\": \"Why not use GroupSort only at the end of the network?\\n\\nThe universal construction must use GroupSort for the intermediate layers as well. We construct the final network by taking the max/min of increasingly wide and deep networks (which are themselves max/mins). The final result is a network which uses MaxMin throughout and is able to represent the max/min of arbitrarily complicated Lipschitz functions.\", \"concern_4\": \"Table 3 shows FullSort doing worse\\n\\nThis is true and perhaps not particularly surprising. In Section 4.1 (Section 3.2 in old version) we state that while FullSort and MaxMin are equally expressive, the former leads to a more challenging optimization problem. The full-sort activation sorts the entire activation vector. We were surprised that the network was able to learn anything reasonable at all (especially with dropout!) and presented this column as a surprising observation - we do not suggest that practitioners adopt FullSort for classification as it is harder to optimize and more computationally expensive. We would be happy to clarify this further in the paper.\\n\\nWe hope that our responses above adequately resolve your concerns. Although we believe the current revision does a much better job of presenting these arguments, we warmly encourage you to provide any criticisms that may help us further express these points more clearly.\"}", "{\"title\": \"Thank you for the feedback\", \"comment\": \"Thank you for your kind feedback!\\n\\nWe agree with your comments on the empirical results presented in the original paper. We are pleased to present several improvements in our revised version. We include much improved adversarial robustness results which contain provable robustness guarantees and strong empirical evidence that MaxMin leads to significantly more expressive networks than ReLU (see Fig. 8 in revised version). We also compared MaxMin to ReLU on Wide ResNets and found that MaxMin had comparable performance over the training schemes we explored (we used a limited hyperparameter search around the optimal ReLU settings). Finally, we used MaxMin to train a WGAN-GP model on CelebA and generated images qualitatively on-par with the carefully tuned Leaky-ReLU model. We believe that these new additions show that MaxMin is more than just a niche activation function and in Lipschitz-constrained settings may lead to significant practical gains.\"}", "{\"title\": \"Uploaded revision and individual comments\", \"comment\": \"We thank each of the reviewers for their time and comments. We have uploaded a revised version of our paper which addresses the notes from each reviewer and includes substantial improvements to the writing. The new version provides improved presentation of theoretical content and some new additions to the experiments section. We emphasize that the scope of the paper has not changed at all. Alongside these changes, we have also modified the title of our paper to \\u201cSorting out Lipschitz function approximation\\u201d.\\n\\nWe have also responded to each of the reviewers in kind and welcome further discussion!\"}", "{\"title\": \"Thank you for bringing this to our attention\", \"comment\": \"Thank you for bringing this preprint to our attention! OPLU is indeed identical to GroupSort with a grouping size of 2 (which we call MaxMin). 
We will cite this paper and credit it for proposing MaxMin and observing that it is norm-preserving.\\n\\nThe focus of the OPLU paper is to preserve the norm of gradients during backpropagation to allow the training of extremely deep networks. In our latest revision of the paper, we also discuss this property in terms of dynamical isometry [1]. In our work, our primary focus is on training expressive Lipschitz-constrained architectures and we identify gradient norm preservation as an important condition for which MaxMin is one such suitable activation. We also prove that using MaxMin we are able to recover universal approximation of Lipschitz functions (which other common activations fail to achieve).\\n\\nThe revised version and our response to the reviewers will be posted soon. \\n\\n[1]: Pennington et al. \\u201cResurrecting the sigmoid in deep learning through dynamical isometry: theory and practice\\u201d https://arxiv.org/abs/1711.04735\"}", "{\"comment\": \"MaxMin, GroupSort with a grouping size of 2, looks the same as OPLU (orthogonal permutation linear unit) proposed in https://arxiv.org/abs/1604.02313. The motivation of OPLU was also norm preserving.\", \"title\": \"Related work: OPLU (orthogonal permutation linear unit)\"}", "{\"title\": \"Review of \\\"Universal Lipschitz Functions\\\"\", \"review\": \"This paper introduces GroupSort. The motivation is to find a good way to impose Lipschitz constraint to the learning of neural networks. An easy approach is \\\"atomic construction\\\", which imposes a norm constraint to the weight matrix of every network layer. Although it guarantees the network to be a Lipschitz function, not all Lipschitz functions are representable under this strong constraint. The authors point out that this is because the activation function of the network doesn't satisfy the so called Jacobian norm preserving property.\\n\\nThen the paper proposes the GroupSort activation which satisfies the Jacobian norm preserving property. With this activation, it shows that the network is not only Lipschitz, but is also a universal Lipschitz approximator. This is a very nice theoretical result. To my knowledge, it is the first algorithm for learning a universal Lipschitz function under the architecture of neural network. The Wasserstein distance estimation experiment confirms the theory. The GroupSort network has stronger representation power than the other networks with traditional activation functions.\\n\\nAdmittedly I didn't check the correctness of the proof, but the theoretical argument seems like making sense.\\n\\nDespite the strong theoretical result, it is a little disappointing to see that the GroupSort doesn't exhibit any significant advantage over traditional activation function on image classification and adversarial learning. This is not surprising though.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting paper but missing details and some formal polishing required\", \"review\": \"summary:\\n\\nA paper that states that a new activation function, which sorts coordinates in a vector by groups, is better than ReLU for the approximation of Lipschtiz functions.\", \"pros\": [\"interesting experiments\", \"lots of different problems evaluated with the technique\"], \"cons\": [\"the GroupSort activation is justified from the angle of approximating Lipschitz transformations. 
While references are given why Lip is good for generalisation, I cannot see why GroupSort does not go *against* the ability of deep architectures to integrate the topology of inputs (see below).\", \"the proof of Theorem 1 requires polishing (see below)\", \"experiments require some polishing\"], \"detail\": [\"The proof of Theorem 1 has three problems, first in the main file argument: since ReLU is not differentiable, you cannot use the partial derivative. Maybe a sub differential ? Second, in the RHS after the use of the Cauchy-Schwartz inequality (no equation numbering\\u2026) you claim that the product of all three norms larger than 1 implies *each* of the last two is 1. This is wrong: it tell nothing about the the value of each, only about the *product* of each, which then make the next two identities a sufficient *but not necessary* condition for this to happen and invalidates the last identity. Last, the Theorem uses a three lines appendix result (C) which is absolutely not understandable. Push this in the proof, make it clear.\", \"Section D.1 (proof of Theorem 2) the proof uses group size 2 over a vector of dimension 2. This, unless I am mistaken, is the only place where the group sort activation is used and so the only place where GroupSort can be formally advocated against ReLU. If so, what about just using ReLUs and a single group sort layer somewhere instead of all group sort ? Have the authors tried this experimentally ?\", \"If I strictly follow Algorithm 1, then GroupSort is carried out by *partitioning* the [d] indexes in g groups of the same size. This looks quite arbitrary and for me is susceptible to impair the capacity of deep architectures to progressively integrate the topology of inputs to generalise well. Table 3 tends to display that this is indeed the case as FullSort does much worse than ReLU.\", \"Table 5: replace accuracies by errors, to be consistent with other tables.\", \"in the experiments, you do not always specify the number of groups (Table 4)\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Potentially interesting but unfinished work\", \"review\": \"The paper proposes a new \\\"sorting\\\" layer in neural networks that offers\\nsome theoretical properties to be able to learn network which are 1-Lipschitz\\nfunctions.\\n\\nThe paper contains what seems to be a nice contribution but the manuscript\\nseems to have been written in a rush which makes it full of typos\\nand very hard to read. This unfortunately really feels like unfinished work.\", \"just_to_name_a_few\": [\"Please check the use of \\\\citep and \\\\citet. See eg Szegedy ref on page 3.\", \"Unfinished sentence \\\"In this work ...\\\" page 3.\", \"\\\"]\\\" somewhere at the bottom of page 4.\", \"\\\"Hence, neural network has cannot to lose Jacobian norm... \\\" ???\", \"etc...\", \"Although I would like to offer here a comprehensive review I consider\", \"that the authors have not done their job with this submission.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
rygFmh0cKm
On Difficulties of Probability Distillation
[ "Chin-Wei Huang", "Faruk Ahmed", "Kundan Kumar", "Alexandre Lacoste", "Aaron Courville" ]
Probability distillation has recently been of interest to deep learning practitioners as it presents a practical solution for sampling from autoregressive models for deployment in real-time applications. We identify a pathological optimization issue with the commonly adopted stochastic minimization of the (reverse) KL divergence, owing to a sparse gradient signal from the teacher model caused by the curse of dimensionality. We also explore alternative principles for distillation, and show that one can achieve qualitatively better results than with KL minimization.
[ "Probability distillation", "Autoregressive models", "normalizing flows", "wavenet", "pixelcnn" ]
https://openreview.net/pdf?id=rygFmh0cKm
https://openreview.net/forum?id=rygFmh0cKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJxLAQjbxN", "HkeCRRt7RX", "rkejbRFXA7", "B1lvYTFX0X", "rJllYM08p7", "BJxBtwJqhX", "SygyL6Otnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544823758242, 1542852310067, 1542852098898, 1542851967436, 1542017656361, 1541171068559, 1541143879233 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1379/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1379/Authors" ], [ "ICLR.cc/2019/Conference/Paper1379/Authors" ], [ "ICLR.cc/2019/Conference/Paper1379/Authors" ], [ "ICLR.cc/2019/Conference/Paper1379/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1379/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1379/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes new methods for optimization of optimization of KL(student_model||teacher_model).\\n\\nThe topic is relevant. The paper also contains interesting ideas and the proposed methods are interesting; they are elegant and seems to work reasonably well on the tasks tried.\\n\\nHowever, the reviewers do not all agree that the paper is well written. The reviewers have pointed out several issues that need to be addresses before the paper can be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for the constructive feedback! We address the concerns below:\\n\\n1. Thanks for spotting the typo! We\\u2019ve fixed it.\\n\\n2. We used c initially to express the formula generally, for any vector c. This is a stylistic choice, but we agree that it might be confusing for some readers, and for the sake of readability, we\\u2019ve changed the c to rho.\\n\\n3. We've updated Sec3.1 in the hope to better motivate and explain the analysis. Here's some clarification. \\n a. We\\u2019ve included a derivation and explanation of the path derivative in the appendix; in short, the path derivative consists of the gradient direction wrt the sample x, which directly affects how the parameters of the distribution will be updated to change the shape of p_S. \\n b. p_S is optimal when p_S=p_T (iff KL=0). In practice, what we found is that when the algorithm tries to minimize KL, p_S tends to fit to the mode of p_T and the mass is overly concentrated. This \\u201cmode collapse\\u201d problem, as manifested in practical issues such as the whispering characteristic present in the reverse-KL trained Parallel Wavenet, is indeed the key issue motivating our study!\\n c. p_T being a high dimensional gaussian is simply a \\u201cmodel of the problem\\u201d to demonstrate how stochastic optimization can be inefficient due to the \\u201cunbalanced\\u201d gradient distribution. Assume now p_S is sharper than p_T, the gradient in expectation should point to a direction that expands the probability mass of p_S (along the high density valley under p_T). Our analysis suggests that even when this is true in expectation, one might have exponentially low probability to sample a gradient with the required expansion signal: e.g. more contractive gradients with smaller magnitude and less expansive gradients with larger magnitude. \\n\\n4. Thanks for spotting the typo! We\\u2019ve fixed it to set $\\\\mu=[2,...,2]^\\\\top$ to be a vector of $T$ $2$'s\\n\\n5. 
We\\u2019ve updated our submission to include more details, which we also present here: For the vocoder experiment, we used the L1 loss, Gaussian IAF as in ClariNet. Each flow consists of 10 residual dilated convolution blocks (these are the standard blocks used in Parallel WaveNet) with kernel-width 2 and 64 output channels. We compute the regularized KL in closed form (for the baseline experiments with KL), using vocoder as input.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for your positive review and feedback! We address the comments below:\\n\\n1. Reverse KL minimization has been used in many other contexts other than the recent application of distilling an autoregressive model which we focus on in this paper. Most common applications have been in areas such as variational inference [1], variational continual learning [2], energy based GAN [3], policy based reinforcement learning [4], etc.\\n\\n2. The point of Proposition 2 is to show that z-recon loss behaves like a distance between T^{-1}(z) and S(z). So when z-recon is minimized, it implies S gets closer to T^{-1} in the sense of the induced metric. We\\u2019ve updated the paper to explicitly state this. \\n\\n3. Thanks for the suggestion, but we believe you mean Sec 3.2 (IIUC). We added the pointer there in the new version.\\n\\nThank you again for the feedback and interest. \\n\\n[1] Auto-Encoding Variational Bayes\\n[2] Variational Continual Learning\\n[3] Calibrating Energy-based Generative Adversarial Networks\\n[4] Latent Space Policies for Hierarchical Reinforcement Learning\"}", "{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thank you for the close reading and detailed feedback! We address the comments below:\\n\\n1. Indeed, by \\u201csparsity\\u201c, we mean the probability of effective gradient signals that point away from the mode of the teacher is small, the complement of which are the gradient signals that are either zero or point towards the model of the teacher (when this is viewed binarily: point-toward being 0 and point-away being 1, it means signal 1 is sparse). We have tried to make it clearer in the paper by paraphrasing it as the gradient distribution being skewed, or imbalanced over the orientation of push with respect to the origin.\\n\\n2. Depending on the chosen family of the student, this might have different effects. (we've updated the paper to include this discussion and better motivate the analysis in Sec3.1)\\n a. If p_S is independent gaussian (mean field assumption), the best student will be extremely concentrated at the mode of the teacher in high dimension. Here the optimality according to the reverse KL will more likely sacrifice the norm of the samples drawn from the student (see Fig1.3 of [1]). Even in this case, the probability of point-away signal will still not be the same as the probability of point-toward signal. They are only equal when weighted by the magnitude (which means the expected gradient is zero at optimality). \\n b. If p_S is multivariate gaussian,\\n (i) One can rotate and rescale both p_T and p_S according to the covariance matrix of p_S, such that the latter once again becomes standard normal. Doing this is to show that our analysis is without loss of generality: the probability of receiving a point-away gradient signal is determined by the \\u201crelative\\u201d covariance of p_T (after transformation under covariance of p_S) with respect to the standard normal. 
\\n (ii) As we argue in the paper, as long as there is correlation present in the now transformed p_T, the condition coefficient (eq3) can be extremely small due to the exponential decrease in the volume of hyper-cone (shaded area of Fig1b). The training algorithm can constantly make progress \\u201cin expectation\\u201d, but since we\\u2019re using SGD in practice, the rate at which p_S becomes better now depends on how likely it is to get a point-away signal, assuming p_S fits to the mode of p_T first. The latter assumption and gradient sparsity posing a problem for optimization were also validated by our experiment (Fig 2a). \\n\\n3. The Neural Vocoder experiment used closed-form KL and the rest used monte carlo estimate. \\n\\n4. We\\u2019ve updated our paper to include more experimental details on the neural vocoder experiment. In particular, we have clarified that we use closed form reverse KL proposed in ClariNet. Each of our flow consists of 10 residual dilated convolution block with kernel width of 2 and 64 output channels.\\n\\n5. We\\u2019ve added one more comparison with reconstruction loss + power loss, which is included in the following link: https://soundcloud.com/inverse-matching/sets/samples-for-inverse-matching\\n\\nAgain, we thank you for your constructive feedback. \\n\\n[1] Two problems with variational expectation maximisation for time-series models\"}", "{\"title\": \"Interesting ideas but the paper has some issues\", \"review\": \"This paper proposes new methods for distilling a feed-forward generative model (student) from an autoregressive generative model (teacher) as an alternative to the reverse-KL divergence. The first part of the paper analyses optimization issues with the reverse KL divergence while in the second part of the paper alternatives are proposed (x-reconstruction and z-reconstruction).\", \"detailed_comments\": \"1.\", \"in_abstract_and_other_places\": \"\\\"sparse gradient signal from the teacher\\\".\\nSparsity implies that many of the values are exactly zero, while Section 3.1 seems to imply that some of the values might be small (or pointing towards the origin).\\n\\n2.\\nIn Section 3.1 and 3.2 the authors discuss a potential failure mode of the reverse KL:\\n\\nBut, proposition 1 boils down to the fact that if the student's mass is more spread out than the teacher is some direction, that it should shrink that mass closer to zero as well.\", \"in_the_example_of_the_paper\": \"if an eigenvalue of T is smaller than 1, it would mean that the student which is spherical Gaussian, would adjust its probability mass to also be smaller in that eigenvector's direction.\\n\\nAs training progresses, the students mass would be much closer to the teacher and the probability of 'pointing away' from the origin would be about as likely as pointing towards.\\n\\nSo it's not clear at all that the described property is problematic for optimization, as it could as well be interpreted as the student trying to fit the teacher's distribution better.\\n\\n3.\\nWas the KL between P_S(x_i | z_<i) and P_T(x_i | x_<i) computed analytically? 
If these conditional distributions are Gaussian (which they are in many of the examples) this should be trivial.\\n\\n4.\", \"section_4_about_the_neural_vocoder_needs_to_be_expanded\": \"many details are missing here and although it's one of the more important experiments in the paper it's relatively neglected compared to the other parts of the paper.\\n\\n5.\", \"in_the_section_4\": \"the experiment with reverse-KL is a straw man comparison: For audio the reverse KL was only proposed in combination with the power loss (Oord et al). Two additional experiments would make the result a lot stronger: KL+power-loss and X-recon+power-loss. Because if the x-recon method does not work well together with the power-loss, its practical applicability seems limited.\\n\\n\\nThe proposed methods are interesting, because they are elegant and seems to work reasonably well on the tasks tried. The first part of the paper about gradient sparsity/orientation needs to be addressed. Section 4 should be expanded and an additional comparison should be made.\\n\\nI would change my rating if these issues were addressed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Convincing paper about of a potentially not-too-widespread technical issue.\", \"review\": \"The paper studies the problem of distilling a student probabilistic model (that\\nis easy to sample from) from a complex teacher model (for which sampling is\\nslow). The authors identify a technical issue with a recent distillation\\ntechnique, namely that positive gradient signals become increasingly unlikely\\nas the dimensionality of the teacher model increases. They then propose two\\nalternative technique that sidestep this issue.\\n\\nThe topic is definitely relevant. The paper focus on a single method for\\nprobability distillation, which limits the significance of the contribution.\\n\\nThe paper is very well written and well structured. Section 4 is may be a bit\\ntoo dense for the uninitiated; it may make sense to clarify that calT and calS\\nrefer to the teacher and student models---it is only obvious while reading this\\nsection for the second time around.\\n\\nAll contributions seem novel. The fact that the (reverse) KL can lead to bad\\nmodels is known; the issue identified in this paper, however, seems novel.\\n\\nI could not spot any major flaws with the paper.\\n\\nThe evaluation is satisfactory. The issue of KL-based training is very clear,\\nas is the advantage of the encoder-decoder alternatives.\\n\\nI especially appreciated the link between distillation and encoder-decoder\\narchitectures.\", \"detailed_comments\": \"1 - How widespread is the issue identified in this paper? In other words, is\\nreverse KL realistically used in applications other than probability\\ndistillation?\\n\\n2 - It is unclear to me why Proposition 2 is important. 
This should be\\nexplicitly stated.\\n\\n3 - It would make sense to add a forward pointer to Figure 3c in Section 3.1,\\nto provide another example of mode-seeking.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Interesting idea & fair results\", \"review\": \"This paper analyzes the limitation of probability density distillation with reverse KL divergence, and proposes two practical methods for probability distillation.\", \"detailed_comments\": \"1) Typo: should be WaveNet, not Wavenet.\\n\\n2) In Proposition 1. $c_i$ should be $\\\\rho_i$.\\n\\n3) One may explain \\u201cpath derivative\\u201d with more details. Also, I am really confused by Proposition 1 and its underlying implication. Given p_s and p_t are centered at the origin, isn\\u2019t p_s(x) already the optimal if it\\u2019s just a unit Gaussian. Why do we need a derivative pointing away from the origin? At least, one need parameterize p_s as N(0, \\\\phi)?\\n\\n4) In section 3.2, \\u201cset $\\\\mu = [2, 2]^T$\\u201d? Isn\\u2019t $\\\\mu$ a T dimensional vector?\\n\\n5) A lot of important details are missing in neural vocoder experiment. For x-reconstruction, do you use L1 or L2 loss? For student model, do you use Gaussian IAF with WaveNet architecture as in ClariNet, or Logistic IAF as in Parallel WaveNet? Following this question, do you compute KLD in closed-form? Do you use the regularization term introduced in ClariNet? Student with KL loss and power loss outperforms x-reconstruction. Did you try x-reconstruction along with power loss?\", \"pros\": \"Certainly, there are some interesting ideas in this paper.\", \"cons\": \"The experiment results are not good enough. The paper is poorly written. A lot of important details are missing. \\n\\nHowever, I would like to raise my rating to 6, if these comments can be properly addressed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HkgYmhR9KX
AD-VAT: An Asymmetric Dueling mechanism for learning Visual Active Tracking
[ "Fangwei Zhong", "Peng Sun", "Wenhan Luo", "Tingyun Yan", "Yizhou Wang" ]
Visual Active Tracking (VAT) aims at following a target object by autonomously controlling the motion system of a tracker given visual observations. Previous work has shown that the tracker can be trained in a simulator via reinforcement learning and deployed in real-world scenarios. However, during training, such a method requires manually specifying the moving path of the target object to be tracked, which cannot ensure the tracker’s generalization to unseen object moving patterns. To learn a robust tracker for VAT, in this paper, we propose a novel adversarial RL method which adopts an Asymmetric Dueling mechanism, referred to as AD-VAT. In AD-VAT, both the tracker and the target are approximated by end-to-end neural networks, and are trained via RL in a dueling/competitive manner: i.e., the tracker intends to lock onto the target, while the target tries to escape from the tracker. They are asymmetric in that the target is aware of the tracker, but not vice versa. Specifically, besides its own observation, the target is fed with the tracker’s observation and action, and learns to predict the tracker’s reward as an auxiliary task. We show that such an asymmetric dueling mechanism produces a stronger target, which in turn induces a more robust tracker. To stabilize the training, we also propose a novel partial zero-sum reward for the tracker/target. The experimental results, in both 2D and 3D environments, demonstrate that the proposed method leads to faster convergence in training and yields more robust tracking behaviors in different testing scenarios. For supplementary videos, see: https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS The code is available at https://github.com/zfw1226/active_tracking_rl
[ "Active tracking", "reinforcement learning", "adversarial learning", "multi agent" ]
https://openreview.net/pdf?id=HkgYmhR9KX
https://openreview.net/forum?id=HkgYmhR9KX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJevy0zbeV", "B1leQMqqAQ", "HkefcSaM6m", "rJlb9G2fp7", "ryl__Au-a7", "Hkg8D92gTm", "r1xYj34yp7", "rkxScidq27", "Hyxga8D52m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1544789471166, 1543311896218, 1541752201802, 1541747336939, 1541668464067, 1541618269786, 1541520545296, 1541208972729, 1541203640344 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1378/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1378/Authors" ], [ "ICLR.cc/2019/Conference/Paper1378/Authors" ], [ "ICLR.cc/2019/Conference/Paper1378/Authors" ], [ "ICLR.cc/2019/Conference/Paper1378/Authors" ], [ "ICLR.cc/2019/Conference/Paper1378/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1378/Authors" ], [ "ICLR.cc/2019/Conference/Paper1378/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1378/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents an adversarial learning framework for active visual tracking, a tracking setup where the tracker has camera control in order to follow a target object. The paper builds upon Luo et al. 2018 and proposes jointly learning tracker and target policies (as opposed to tracker policy alone). This automatically creates a curriculum of target trajectory difficulty, as opposed to the engineer designing the target trajectories. The paper further proposes a method for preventing the target to fast outperform the tracker and thus cause his policy to plateau. Experiments presented justify the problem formulation and design choices, and outperform Luo et al. . The task considered is very important, active surveillance with drones is just one sue case.\\n\\nA downside of the paper is that certain sentences have English mistakes, such as this one: \\\"The authors learn a policy that maps raw-pixel observation to control signal straightly with a Conv-LSTM network. Not only can it save\\nthe effort in tuning an extra camera controller, but also does it outperform the...\\\" However, overall the manuscript is well written, well structured, and easy to follow. The authors are encouraged to correct any remaining English mistakes in the manuscript.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"adversarial learning for active visual tracking with interesting components\"}", "{\"title\": \"Irrelevant to our response\", \"comment\": \"We appreciate your feedback, but we feel the comments you just post seem irrelevant to our response. We've addressed your concerns. In the updated manuscript we also add empirical results on real-world scenarios.\\n\\nYour point about the similarity between tracking and navigation is interesting, but the according discussion is out the scope of this paper and deserves a separate study. In this paper we focus on how to learn a robust visual tracker.\"}", "{\"title\": \"Change Logs\", \"comment\": \"We have updated our paper during the rebuttal period, which could be summarized as below:\\n\\na) To emphasize our major contribution and clarify the non-trivial different with Luo et al. (2018), we've rewritten Abstract and modified the Introduction. \\nb) We've modified Section 3.3. The motivation for the tracker-awareness is added. Explanations are given for why we cannot do a target-aware tracker. 
\\nc) Supplementary videos are updated in: https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS\", \"the_videos_contains\": \"1. Training the target and tracker jointly via AD-VAT (2D);\\n 2. Testing the AD-VAT tracker in four testing environments (2D);\\n 3. Using the learned target to attack the baseline trackers (2D);\\n 4. Training the target and tracker via AD-VAT in DR Room (3D);\\n 5. Testing the tracker in Realistic Environments (3D);\\n 6. Passively testing tracker on real-world video clips.\\nd) Appendix.A is modified for better explaining the partial zero-sum reward.\\ne) Appendix.B is added. It visualizes the training process via drawing the position distribution in different training stages.\\nf) Appendix.C is added. It provides evaluation results on video clips to demonstrate the potential of transferring the tracking ability to the real world.\\ng) Table.1 is updated. We add the testing result that the adversarial target is tracked by the three trackers in two different maps, and update the average performance simultaneously. The results demonstrate that the target learned in AD-VAT could effectively challenge the two baseline trackers.\"}", "{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"Thanks for appreciating our partial-zero-sum idea. Our primary contribution is the adversary/dueling RL mechanism for training a robust tracker. To stabilize and accelerates the training, we devised the techniques of the partial-zero-sum and the asymmetrical target model. These two techniques are critical for a successful training, and we hope to see their applications to other domains involving adversary/dueling training.\\n\\nAs for the comments on \\\"real-world test and results\\\", we've taken a qualitative testing on some real-world video clips from VOT dataset [Kristan et al. (2016)]. In this evaluation, we feed the video clips to the tracker and observe the network output actions. In general, the results show that the output action is consistent with the position and scale of the target. For example, when the target moves from the image center to the left until disappearing, the tracker outputs actions ``move forward\\\", ``move forward-left\\\", and ``turn left\\\" sequentially. The testing demonstrates the potential of transferring the tracking ability to real-world. \\n\\nPlease see Appendix.C in our updated submission and watch the demo video here: https://youtu.be/jv-5HVg_Sf4\"}", "{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"Thanks for the review. Our feedback goes below.\", \"q1\": \"\\\"I think the contributions of this work is incremental compared with [Luo et al (2018)] in which the major difference is the partial zero sum reward structure is used and the observations and actions information from the tracker are incorporated into the target network\\\"\", \"a1\": \"Our method is fundamentally different from Luo et al. (2018), please see our reply to R#1 (the Q2-A2) for detailed explanations. In short, the major difference is that we employ Multi-Agent RL to train both the tracker and the target object, while Luo et al. (2018) only train the tracker with Single-Agent RL (where they pre-define/hand-tune the moving path for the target object). Our method turns out better in the sense that it produces a stronger tracker via the proposed asymmetrical dueling training.\\n\\nThe Multi-Agent RL training in our VAT task is unstable and slow to converge. 
To address these issues, we derived the two techniques: the partial zero sum and the asymmetrical target object model.\", \"q2\": \"\\\"In addition, the explanation about importance of the tracker awareness to the target network seems not sufficient. The ancient Chinese proverb is not a good explanation. It would be better if some theoretical support can be provided for such design.\\\"\", \"a2\": \"The tracker awareness mechanism for the target object is \\\"cheating\\\". This way, the target object would appear to be \\\"stronger\\\" than the tracker as it knows what the tracker knows. Such a treatment accelerates the training by inducing a reasonable curriculum to the tracker and finally helps training a much stronger and more generalizable tracker. Note we cannot apply this trick to the tracker as it cannot cheat when deploying. See also our reply to R#1 (Q3-A3).\\n\\nAs for the details of the tracker-aware model, it not only uses the observation and action of the tracker as extra input information but also employs an auxiliary task to predict the tracker's immediate reward. The auxiliary task could help the tracker learn a better representation for the adversarial policy to challenge the tracker.\", \"q3\": \"\\\"For active object tracking in real-world/3D environment, designing the reward function only based on the distance between the expected position and the tracked object position can not well reflect the tracker capacity. The scale changes of the target should also be considered when designing the reward function of the tracker. However, the proposed method does not consider the issue, and the evaluation using the reward function based on the position distance may not be sufficient.\\\"\", \"a3\": \"The scale of a target object showing up in the tracker's image observation will be implied by the distance between tracker and object, which we've considered when designing the reward function.\\n\\nConsider a simple case of projecting a line in 3D space onto a camera plane. The length (l) of the line on the 2D image plane is derived by an equation as below:\\n l = L*f/d, \\nwhere L is the original length in 3D space, f is the distance between the 2D plane and the focal center, and d is the distance between the line and the focal center.\\nIn the VAT problem\\uff0cf depends on the intrinsic parameters of the camera model, which is fixed; L depends on the 3D model of the target object, which also could be regarded as constant. Thus, the scale of the object in the 2D image plane is impacted only by d, the distance between the target and the tracker. It is not difficult to derive that, the farther the distance d is, the smaller the target is observed. This suggests that the designed distance-based reward function has well considered the scale of the object.\\n\\nNote that calculating the scale of the target in an image is of high computational complexity. It requires to extract the object mask and calculate the area of the mask. In contrast, our distance-based reward is computationally cheap, thanks to the simulator's APIs by which we can easily access the tracker's and target's world coordinate in the bird view map.\"}", "{\"title\": \"Incremental contribution and unclear rationales\", \"review\": \"This work aims to address the visual active tracking problem in which the tracker is automatically adjusted to follow the target. A training mechanism in which tracker and the target serve as mutual opponents is derived to learning the active tracker. 
Experimental evaluation in both 2D and 3D environments is conducted.\\n\\nI think the contributions of this work is incremental compared with [Luo et al (2018)] in which the major difference is the partial zero sum reward structure is used and the observations and actions information from the tracker are incorporated into the target network, while the network architecture is quite similar to [Luo et al (2018)].\\nIn addition, the explanation about importance of the tracker awareness to the target network seems not sufficient. The ancient Chinese proverb is not a good explanation. It would be better if some theoretical support can be provided for such design.\\n\\nFor active object tracking in real-world/3D environment, designing the reward function only based on the distance between the expected position and the tracked object position can not well reflect the tracker capacity. The scale changes of the target should also be considered when designing the reward function of the tracker. However, the proposed method does not consider the issue, and the evaluation using the reward function based on the position distance may not be sufficient.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thanks for the review.\", \"comment\": \"Thanks for the review. Our feedback goes below.\", \"q1\": \"\\\"Contrived task\\\"\", \"a1\": \"Visual object tracking is widely recognized as an important task in Computer Vision. In this study, we propose a principled approach of how to train a robust tracker.\", \"q2\": \"\\\"The work is very incremental over Luo et al. (2018) \\\"End-to-end Active Object Tracking and Its Real-world Deployment via Reinforcement Learning\\\", as the only two additions are extra observations o_t^{alpha} for the target, and a reward function that has a fudge factor when the target gets too far away\\\"\", \"a2\": \"Our method is fundamentally different from Luo et al. (2018), as explained below.\\n\\nLuo et al. (2018) adopted pre-defined target object moving path, coded in hand-tuned scripts. Thus, only the tracker is trainable, and the settings are single-agent RL. \\n\\nIn our method, the target object is also implemented by a neural network, learning how to escape the tracker during training. Both the tracker and the target object are trained jointly in an adversary/dueling way, and the settings are multi-agent RL. \\n\\nWe show the advantage of our method over Luo et al. (2018). Note that the pre-defined target object moving path in Luo et al. (2018) can hurt the generalizability of the tracker. In reality, the target object can move in various hard patterns: Z-turn, U-turn, sudden stop, walk-towards-wall-then-turn, etc., which can pose non-trivial difficulties to the tracker during both training and deployment. Moreover, such moving patterns are difficult to be thoroughly covered and coded by the hand-tuning scripts as in Luo et al. (2018).\\n\\nThe trainable target object in our method, however, can learn the proper moving path in order to escape from the tracker solely by the adversary/dueling training, without hand-tuned path. The smart target object, in turn, induces a tracker that well follows the target no matter how wild the target object moves. Eventually, we obtain a much stronger tracker than that of Luo et al. 
(2018), achieving the very purpose of our study: to train a robust tracker for VAT task.\", \"q3\": \"\\\"Should not the asymmetrical relationship work the other way round, with the tracker knowing more about the target?\\\"\", \"a3\": \"We should not do that.\\n\\nNote that the additional \\\"asymmetrical\\\" information is way of \\\"cheating\\\". As our goal is to train a tracker, we don't need to consider deploying a target object. Therefore, we can simply let the target object cheat during training by feeding to it the tracker's observation/reward/action. Such a \\\"peeking\\\" treatment accelerates the training and ultimately improves the tracker's training quality, as is shown in the submitted paper.\\n\\nThe tracker, however, is unable to \\\"cheat\\\" when deployed (e.g., in a real-world robot). It has to predict the action using its own observations. There is no way for the tracker to acquire the information (observation/reward/action) from a target object.\", \"q4\": \"\\\"The paper would have benefitted from a proper analysis of the trajectories taken by the adversarial target as opposed to the heuristic ones, ...\\\"\", \"a4\": \"We have added to Appendix some texts for the analysis, see Appendix.B in the updated submission. The target object does show intriguing behaviors when escaping the tracker, see the supplementary videos available at https://www.youtube.com/playlist?list=PL9rZj4Mea7wOZkdajK1TsprRg8iUf51BS\", \"q5\": \"\\\"...and from comparison with non-RL state-of-the-art on tracking tasks.\\\"\", \"a5\": \"Luo et al. (2018) had done the comparisons and shown their method improves over several representative non-RL trackers in the literature.\\nOur method outperforms that of Luo et al. (2018).\", \"q6\": \"\\\"Citing Sun Tzu's \\\"Art of War\\\" (please use the correct citation format)...\\\"\", \"a6\": \"We have fixed this in the updated submission.\", \"q7\": \"\\\"Further multi-agent tasks could also have been considered, such as capture the flag tasks as in \\\"Human-level performance in first-person multiplayer games with population-based deep reinforcement learning\\\"\\\"\", \"a7\": \"The method developed in that paper is for playing the First Person Shooting game, where it has to ensure the fairness among the intra- and inter-team players. In our study, the primary goal is to train a tracker (player 1), permitting us to leverage the asymmetrical mechanism for the target object (player 2). This technique effectively improves the adversary/dueling training and eventually produces a strong tracker.\"}", "{\"title\": \"Contrived task\", \"review\": \"This paper presents a simple multi-agent Deep RL task where a moving tracker tries to follow a moving target. The tracker receives, from its own perspective, partially observed visual information o_t^{alpha} about the target (e.g., an image that may show the target) and the target receives both observations from its own perspective o_t^{beta} and a copy of the information from the tracker's perspective. Both agents are standard convnet + LSTM neural architectures trained using A3C and are evaluated in 2D and 3D environments. 
The reward function is not completely zero-sum, as the tracked agent's reward vanishes when it gets too far from a reference point in the maze.\\n\\nThe work is very incremental over Luo et al (2018) \\\"End-to-end Active Object Tracking and Its Real-world Deployment via Reinforcement Learning\\\", as the only two additions are extra observations o_t^{alpha} for the target, and a reward function that has a fudge factor when the target gets too far away. Citing Sun Tzu's \\\"Art of War\\\" (please use the correct citation format) is not convincing enough for adding the tracker's observations as inputs for the target agent. Should not the asymmetrical relationship work the other way round, with the tracker knowing more about the target?\\n\\nExperiments are conducted using two baselines for the target agent, one a random walk and another an agent that navigates to a target according to a shortest path planning algorithm. The ablation study shows that the tracker-aware observations and a target's reward structure that penalizes when it gets too far do help the tracker's performance, and that training the target agent helps the tracker agent achieve higher scores. The improvement is however quite small and the task is ad-hoc.\\n \\nThe paper would have benefitted from a proper analysis of the trajectories taken by the adversarial target as opposed to the heuristic ones, and from comparison with non-RL state-of-the-art on tracking tasks. Further multi-agent tasks could also have been considered, such as capture the flag tasks as in \\\"Human-level performance in first-person multiplayer games with population-based deep reinforcement learning\\\".\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"novel reward function in adversarial VAT appliation\", \"review\": \"This is in a visual active tracking application. The paper proposes a novel reward function - \\\"partial zero sum\\\", which only encourages the tracker-target competition when they are close and penalizes whey they are too far.\\n\\nThis is a very interesting problem and I see why their contribution could improve the system performance.\", \"clarity\": \"the paper is well-written. I also like how the author provides both formulas and a lot of details on implementation of the end-to-end system.\", \"originality\": \"Most of the components are pretty standard, however I value the part that seems pretty novel to me - which is the \\\"partial zero-sum\\\" idea.\", \"evaluation\": \"the result obtained from the simulated environment in 2d and 3d are convincing. However, if 1) real-world test and results 2) a stronger baseline can be used, that would be a stronger acceptance.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HJlt7209Km
Theoretical and Empirical Study of Adversarial Examples
[ "Fuchen Liu", "Hongwei Shang", "Hong Zhang" ]
Many techniques have been developed to defend against adversarial examples at scale. So far, the most successful defenses generate adversarial examples during each training step and add them to the training data. Yet, this brings significant computational overhead. In this paper, we investigate defenses against adversarial attacks. First, we propose feature smoothing, a simple data augmentation method with little computational overhead. Essentially, feature smoothing trains a neural network on virtual training data formed as an interpolation of the features of a pair of samples, with the new label remaining the same as that of the dominant data point. The intuition behind feature smoothing is to generate virtual data points that lie as close as possible to adversarial examples, while avoiding the computational burden of generating data during training. Our experiments on the MNIST and CIFAR10 datasets explore different combinations of known regularization and data augmentation methods and show that feature smoothing with logit squeezing performs best for both adversarial and clean accuracy. Second, we propose a unified framework to understand the connections and differences among these efficient methods by analyzing the biases and variances of the decision boundary. We show that under some symmetry assumptions, label smoothing, logit squeezing, weight decay, mixup and feature smoothing all produce an unbiased estimate of the decision boundary with smaller estimated variance. All of those methods except weight decay are also stable when the assumptions no longer hold.
[ "Adversarial examples", "Feature smoothing", "Data augmentation", "Decision boundary" ]
https://openreview.net/pdf?id=HJlt7209Km
https://openreview.net/forum?id=HJlt7209Km
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJxEw0-zeE", "HylC0e-C2m", "BJgewee0hQ", "BkxDE0KTnX", "BJgRZ_Yd3Q", "B1eWYFtDnQ", "BklBpKu6tm", "r1lVAt8TtQ", "rJgUHOmTtX", "Byxwh9zpYQ", "Hkl3Eh-TFm" ], "note_type": [ "meta_review", "official_review", "comment", "comment", "official_review", "official_review", "comment", "comment", "comment", "comment", "comment" ], "note_created": [ 1544851035576, 1541439701819, 1541435480277, 1541410350522, 1541081094106, 1541015928638, 1538259389284, 1538251212493, 1538238526281, 1538235055320, 1538231348211 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1377/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1377/AnonReviewer3" ], [ "(anonymous)" ], [ "~Marius_Mosbach1" ], [ "ICLR.cc/2019/Conference/Paper1377/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1377/AnonReviewer1" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a feature smoothing technique as a new and \\\"cheaper\\\" technique for training adversarially robust models.\", \"pros\": [\"the paper is generally well written and the claimed results seem quite promising\", \"the theory contribution are interesting\"], \"cons\": [\"the main technique is fairly incremental\", \"there were concerns regarding the comprehensiveness of evaluations and baselines used\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting proposal but requires more comprehensive evaluation and comparison\"}", "{\"title\": \"Some interesting proposals, with weak justification and experimental verification.\", \"review\": \"In this paper the authors introduce a novel method to defend against adversarial attacks that they call feature smoothing. The authors then discuss feature smoothing and related \\u201ccheap\\u201d data augmentation-based defenses against adversarial attacks in a nice general discussion. Next, the authors present empirical data comparing and contrasting the different methods they introduce as a means of constructing models that are robust to adversarial examples on MNIST and CIFAR10. The authors close by attempting to theoretically motivate their strategy in terms of reducing variance of the decision boundary.\\n\\nOverall, I found this paper pleasant to read. However, it is unclear to me exactly how novel its contributions are. As discussed by the authors, there are strong similarities between feature smoothing and mixup although I did enjoy the unifying exposition presented in the text. It also seems as though the paper suffers from some simplifying assumptions considered by the authors. For example, in sec. 2 the authors claim that \\\\tilde x will be closer to the decision boundary than x. However, this is only true if the decision boundary is convex. \\n\\nI appreciated the extensive experiments run by the authors. However, I wish they had included results from adversarial training. It seems (looking at Madry\\u2019s paper) that the defense offered by these cheap methods is still significantly worse than adversarial training. I feel that some discussion of this is warranted even if the goal is to reduce computational complexity.\\n\\nFinally, I am not sure what to make of the theory presented. While it is nice to see that the variance of the decision boundary is reduced by regularization in the case of 1-dimensional linear regression, I am not at all convinced by the authors generalization to neural networks. 
In particular, their discussion seems to only hold for one-hidden-layer networks. Although the authors don\\u2019t offer much clarity here. For example eq. 2 is literally just a statement that ReLU is a convex function. However, it is clearly the case that multiple layers of the network will violate this hypothesis. Overall, I did not find this discussion particularly compelling.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"comment\": \"Hello,\\nPlease refrain from making sweeping generalizations. Is there a proof that \\\"Cheap methods are unlikely to be robust under stronger PGD attack\\\"? The analysis in your paper that you cite is limited to logit squeezing.. why does this imply that \\\"any\\\" cheap method is likely to not be robust?\", \"title\": \"Sweeping generalizations\"}", "{\"comment\": \"Regarding your results on MNIST, I would like to point you to Table 1 in our paper https://arxiv.org/abs/1810.12042 where we show that logit squeezing (combined with gaussian noise), as proposed by Harini et al., does not provide actual robustness. We could successfully break it not only on MNIST but also CIFAR10 and Tiny ImageNet. Further, we find that the robustness of logit squeezing mainly comes from the fact that it makes gradient based optimization in the input space significantly more difficult by introducing many local maxima near the clean inputs. This can be seen as gradient masking. Crucial for our evaluation was the fact that we performed many random restarts when performing PGD (up to 10000) and additionally performed a proper grid search over the step size used during optimization.\\n\\nTherefore, it would be interesting to see the robustness of your models against a PGD attack with large number of iterations, large step size, and many random restarts. Based on our experiments, we would expect that this should reduce the adversarial accuracy of \\\"cheap methods\\\" (logit squeezing + noise, label smoothing + noise, feature smoothing + noise) down to (almost) 0%.\", \"title\": \"Cheap methods are unlikely to be robust under stronger PGD attack\"}", "{\"title\": \"An interesting paper whose novelty seems incremental to the reviewer\", \"review\": \"The authors proposed a feature smoothing method without adding any computational burden for defensing against adversarial examples. The idea is that both feature smoothing and Gaussian noise can help extend the range of data. Moreover, the authors combined these methods together to gain a better test and adversarial accuracy. They further proved 3 theorems to try to analyze the biases and variances of decision boundary based on the fisher information and delta method.\\n\\nIn my opinion, the main contribution of this paper is to prove that the boundary variance will decrease due to adding one additional regularization term to the loss function.\", \"main_comments\": \"1.\\tThe proposed feature smoothing method seems less novel to me. In contrast to the mixup method, the proposed method appears to remove the label smoothing part, so it is better to explain or justify why this could be better theoretically. Moreover, in the PGD and PGD-cw results, the performance is not as good as the Gaussian random noise method. Can the authors offer any discussion or comments on the possible reasons?\\n2.\\tSome details of the proof of Theorem 4.1 seemed to be omitted. 
I am a bit confused about this. \\na.\\t\\u201cWithout loss of generality, we further assume b = 0 and w > 0.\\u201d With smaller magnitude, b=0 is reasonable, but why to assume w>0?\\nb.\\tCould you present the derivation details or the backing theory of the approximation of var(b), when one more regularization term are added? \\n3.\\tIn addition, a method of modifying the network is proposed to adapt to the feature smoothing method. However, no experimental results are reported to support its effectiveness. I would believe some empirical evaluations may further strengthen the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review\", \"review\": \"The paper proposes a feature smoothing technique, which generates virtual data points by interpolating the input space of two randomly sampled examples. The aim is to generate virtual training data points that are close to adversarial examples. Experimental results on both MNIST and Cifar10 datasets show that the proposed method augmented with other regularization techniques are robust to adversarial attacks and obtain higher accuracy when comparing with some testing baselines. Also, the paper presents some theoretical analyses showing that label smoothing, logit squeezing, weight decay, Mixup and feature smoothing all produce small estimated variance of the decision boundary when regularizing the networks.\\n\\nThe paper is generally well written, and the experiments show promising results. Nevertheless, the proposed method is not very novel, and the method is not comprehensively evaluated with experiments.\", \"major_remarks\": \"1.\\tThe experiments show that feature smoothing has to combine with other regularizers in order to outperform other testing methods. In this sense the contribution of the feature smoothing along is not clear. For example, without integrating other regularizers, Mixup and feature smoothing obtain very close results for BlackBox-PGD, BlackBoxcw and Clean, as shown in Table 1. In addition, in the paper, the feature smoothing along is only validated on the MNIST (not even tested on Cifar10 in Table2). Consequently, it is difficult to evaluate the contribution of the proposed smoothing technique. \\n2.\\tExperiments are conducted on datasets MNIST and Cifar10 with small number of target classes. Empirically, it would be useful to see how it performs on more complex data set such as Cifar100 or ImageNet.\\n3.\\tThe argument for why the proposed feature smoothing method works is presented in Theorem4.3 in Section 4.2, but the theorem seems to rely on the assumption that one can add data around the true decision boundary. However, how we can generate samples near the true decision boundary and how we should chose the mixing ratio to attain this goal is not clear to me in the paper. In addition, how we can sure that the adding synthetic data from one class does not collide with manifolds of other classes as suggested in AdaMixup (Guo et al., MixUp as Locally Linear Out-Of-Manifold Regularization)? This is particular relevant if the proposed feature smoothing strategy prefers to create virtual samples close to the true decision boundary.\\n4.\\tAt the end of page4, the authors claim that both feature smoothing and Mixup generate new data points that are closer to the true boundary. I wonder if the authors could further justify or show that either theoretically or experimentally. 
\\n5.\\tThe proposed method is similar to SMOTE (Chawla et al., SMOTE: Synthetic Minority Over-sampling Technique). In this sense, comparison with SMOTE would be very beneficial.\", \"minor_remarks\": \"1.\\tIn the paper Mixup, value 1 was carefully chosen as the mixing policy Alpha for Cifar10 (otherwise, underfitting can easily occur as shown in AdaMixUp), and it seems in the paper the authors used a very large value of 8 for Mixup\\u2019s Beta distribution, and I did not see the justification for that number in the paper.\\n2.\\tTypo in the second paragraph of page2: SHNV should be SVHN\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"Their LS-PGA seems specific to their thermometer method. When I do experiments on other defenses, I always refer to PGD on CW_\\\\infty as Logit space PGD(PGA). My personal habit, this is my bad, I should explain it clearly.\\nI also agree with your other comments.\", \"title\": \"My bad (I did not explain all the things clearly)\"}", "{\"comment\": \"This previous ICLR paper (https://openreview.net/pdf?id=S18Su--CW) proposes an attack called LS-PGA, which is what I thought you were referring to. If you just mean PGD on the logits, then yes, agreed.\", \"title\": \"LS-PGA terminology\"}", "{\"comment\": \"PGD-CW use the CW_\\\\infty loss, which is actually similar to LS-PGA (PGD using logits, PGD in logit space => PGD using a CW_\\\\infty loss => PGD-CW (l_\\\\infty-bounded))\", \"title\": \"Exactly, what I mean is the solution in the appendix.\"}", "{\"comment\": [\"I don't disagree with your general sentiment, but two minor notes:\", \"Ensemble Adversarial Training itself does not claim to solve MNIST in the white-box setting. It is in Appendix C.1 where the authors note that binarization for L_infinity MNIST is effective.\", \"LS-PGA doesn't have anything to do with their attack method (and I don't think they claim it does). PGD-CW is (if I understand correctly, becuase this paper doesn't explain it) PGD from Madry et al. (2018) with the loss function from Carlini & Wagner (2017).\"], \"one_more_observation_about_the_paper\": \"BlackBox-CW on CIFAR-10 accuracy (17%) should not be a stronger attack and have have lower accuracy than than White Box PGD (32%).\", \"title\": \"EAT does not solve MNIST\"}", "{\"comment\": \"An interesting method. But have you noticed this work (https://arxiv.org/abs/1705.07204), which proposes a very simple binarization solution on MNIST? Therefore, achieving robustness against l_\\\\infty attacks on MNIST is very simple. And the results on CIFAR10 are not good enough. It seems the accuracy against PGD-cw (LS-PGA) is only 9.03%?\\nLS-PGA means Logit Space Projected Gradient Ascent (PGD in logit space => PGD using a CW_\\\\infty loss => PGD-CW (l_\\\\infty-bounded))(Just an explanation for the next comment)\\nFor comparison, MadryLab's model achieves 44.71% under DAA and 45.21 under 10 random start PGD. Besides, as you mentioned, this method is more efficient. So have you ever tested on large datasets like ImageNet?\", \"title\": \"Interesting but the results seem not good enough\"}" ] }
H1ltQ3R9KQ
Causal Reasoning from Meta-reinforcement Learning
[ "Ishita Dasgupta", "Jane Wang", "Silvia Chiappa", "Jovana Mitrovic", "Pedro Ortega", "David Raposo", "Edward Hughes", "Peter Battaglia", "Matthew Botvinick", "Zeb Kurth-Nelson" ]
Discovering and exploiting the causal structure in the environment is a crucial challenge for intelligent agents. Here we explore whether modern deep reinforcement learning can be used to train agents to perform causal reasoning. We adopt a meta-learning approach, where the agent learns a policy for conducting experiments via causal interventions, in order to support a subsequent task which rewards making accurate causal inferences. We also find that the agent can make sophisticated counterfactual predictions, as well as learn to draw causal inferences from purely observational data. Though powerful formalisms for causal reasoning have been developed, applying them in real-world domains can be difficult because fitting to large amounts of high dimensional data often requires making idealized assumptions. Our results suggest that causal reasoning in complex settings may benefit from powerful learning-based approaches. More generally, this work may offer new strategies for structured exploration in reinforcement learning, by providing agents with the ability to perform—and interpret—experiments.
[ "meta-learning", "causal reasoning", "deep reinforcement learning", "artificial intelligence" ]
https://openreview.net/pdf?id=H1ltQ3R9KQ
https://openreview.net/forum?id=H1ltQ3R9KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJeS2P7teN", "BkxZIo17J4", "rJxLL_L40m", "rkxv6DIVRm", "SyxDjPH10m", "rylNDB5n6m", "r1gd5N52pQ", "S1eTXE52TX", "Syl5shBjpQ", "S1ldPsHiT7", "HyeYcqHs6X", "rkeS2DrjpX", "BkgPvmG96X", "SJlspzn9nm", "H1gHF1d4nQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545316268781, 1543859016666, 1542903886435, 1542903742762, 1542571934646, 1542395227613, 1542395023979, 1542394916522, 1542311073969, 1542310752078, 1542310545398, 1542309804731, 1542230878545, 1541223107229, 1540812668792 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1376/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/Authors" ], [ "ICLR.cc/2019/Conference/Paper1376/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1376/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1376/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers raised a number of concerns including insufficiently demonstrated benefits of the proposed methodology, lack of explanations, and the lack of thorough and convincing experimental evaluation. The authors\\u2019 rebuttal failed to alleviate these concerns fully. 
I agree with the main concerns raised and, although I also believe that the work can result eventually in a very interesting paper, I cannot suggest it at this stage for presentation at ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}", "{\"title\": \"Summary of changes in the revised manuscript\", \"comment\": \"We have endeavored to address all the reviewers' concerns, and hope they will find our manuscript much improved after the additions and clarifications detailed below:\\n\\n>> Changed the title to \\u201cCausality from Meta Reinforcement Learning\\u201d incorporating feedback from reviewer 1.\\n>> Changed phrasing in the abstract and discussion as suggested by reviewers 3 and 4.\\n>> Improved the explanation of our sampling procedure in response to reviewers 2 and 3.\\n>> Improved the explanation of the Passive agents\\u2019 performance (and why it is higher than the Active agents\\u2019) in response to reviewers 2 and 3.\\n>> Included 2 new baselines (MAP for observational data, and Optimal counterfactual) in response to reviewer 3.\\n>> Included experiment 5 in the appendix for larger graphs (N = 6), with separation of equivalence classes, as well as drew attention to Experiment 4 in the appendix, to demonstrate generalizability, in response reviewers 2, 3, and 4.\\n>> Experiment 5 also compares the performance of the active agent with an agent with a random intervention policy as suggested by reviewer 4.\\n>> Added a section \\u201cSummary of results\\u201d.\\n>> Explained our specific choice of episode length and intervention values.\\n>> Clarified Footnote 1 in response to reviewer 3.\\n>> Included reward distribution plot in Appendix A in response to reviewer 2.\"}", "{\"title\": \"Reply from authors, thank you for the review (2/2)\", \"comment\": \">>(*) Previous reviewers have already made this point - I think it\\u2019s crucial - and it\\u2019s also related to the previous concern: It is not clear how difficult the tasks facing these agents actually are, nor is it clear that solving them genuinely requires causal understanding. What seems to be shown is that, by supplying information that\\u2019s critical for the task at hand, a sufficiently powerful learning agent is able to harness that information successfully. But how difficult is this task, and why does it require causal understanding? I do think that some of the work the authors did is quite helpful, e.g., dividing the test set between the easy and hard cases (orphan / parented, unconfounded / confounded). But I do not feel I have an adequate understanding of the task as seen, so to say, from the perspective of the agent. Specifically:\\n\\n>>(*) I completely second the worry one of the reviewers raised about equivalence classes and symmetries. The test set should be chosen more deliberately - not randomly - to rule out deflationary explanations of the agents\\u2019 purported success. I\\u2019m happy to hear that the authors will be looking more into this and I would be interested to know how the results look.\\n\\n\\nWe are currently running simulations on larger graphs and testing on held out equivalence classes, and look forward to the reviewers comments on these new results.\\n\\n--------------------------------------------------------------------\\n\\n>>(*) The \\u201cbaselines\\u201d in this paper are often not baselines at all, but rather various optimal approaches to alternative formulations of the task. 
I feel we need more actual baselines in order to see how well the agents of interest are doing. I don\\u2019t know how to interpret phrases like \\u201cclose to perfect\\u201d without a better understanding of how things look below perfection. \\n\\n>>As a concrete case of this, just like the other reviewers, I was initially quite confused about the passive agents and why they did better than the active agents. These are passive agents who actually get to make multiple observations, rather than baseline passive agents who choose interventions in a suboptimal way. I think it would be helpful to compare against an agent who makes the same number of observations but chooses them in a suboptimal (e.g., random) way. \\n\\n\\nWe agree that this would be a great addition, and thank the reviewer for suggesting it. We will improve our explanation of the active vs passive agents and will also include a new baseline in our updated draft where an agent performs random interventions in the information phase.\\n\\n--------------------------------------------------------------------\\n\\n>>(*) In relation to the existing literature on causal induction, it\\u2019s telling that implementing a perfect MAP agent in this setting is even possible. This makes me worry further about how easy these tasks are (again, provided one has all of the relevant information about the task). But it also shows that comparison with existing causal inference methods is simply inappropriate here, since those methods are designed for realistic settings where MAP inference is far from possible. I think that\\u2019s fine, but I also think it should be clarified in the paper. The point is not (at least yet) that these methods are competitors to causal inference methods that do \\u201crequire explicit knowledge of formal principles of causal inference,\\u201d but rather that we have a proof-of-concept that some elementary causal understanding may emerge from typical RL tasks when agents are faced with the right kinds of tasks and given access to the right kinds of data. That\\u2019s an interesting claim on its own. The penultimate paragraph in the paper (among other passages) seems to me quite misleading on this point.\\n\\nIn the current work we haven't explored scalability. The indicated paragraph of our discussion section was intended to motivate the potential future practical value of RL algorithms that learn to make causal inferences end-to-end on large, high dimensional data. We will edit the paragraph to be more clear about how the present work is a proof-of-concept.\\n\\n--------------------------------------------------------------------\\n\\n>>(*) One very minor question I have is why actions were softmax selected even in the quiz phase. What were the softmax parameters? And would some of the agents not perform a bit better if they maximized?\\n\\nWe refrained from maximizing when checking test performance in order to retain maximum similarity with the training phase (during which a softmax is necessary to propagate gradients through the network). The temperature parameter was not manipulated or optimized and was kept fixed to the default of 1. 
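In other words, the quiz-phase choice is drawn as a ~ Categorical(softmax(logits / T)) with temperature T = 1, rather than taken as argmax(logits). 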
However, empirically, by the end of training, the network's policy outputs (i.e., the inputs to the softmax) typically had a sufficiently large scale to make choices almost deterministic.\\n\\n--------------------------------------------------------------------\"}", "{\"title\": \"Reply from authors, thank you for the review (1/2)\", \"comment\": \"We thank the reviewer for their comments. We appreciate their interest in our work and believe that we can address their remaining concerns with the new simulations detailed in previous responses, and the clarifications and minor changes detailed below.\\n\\n>>Note: This review is coming in a bit late, already after one round of responses. So I write this with the benefit of having read the helpful previous exchange. \\n\\n>>I am generally positive about the paper and the broader project. The idea of showing that causal reasoning naturally emerges from certain decision-making tasks and that modern (meta-learning) RL agents can become attuned to causal structure of the world without being explicitly trained to answer causal questions is an attractive one. I also find much about the specific paper elegant and creative. Considering three grades of causal sophistication (from conditional probability to cause-effect reasoning to counterfactual prediction) seems like the right thing to do in this setting.\\n\\n>>Despite these positive qualities, I was confused by many of the same issues as other reviewers, and I think the paper does need some more serious revisions. Some of these are simple matters of clarification as the authors acknowledge; others, however, require further substantive work. It sounds like the authors are committed to doing some of this work, and I would like to add one more vote of encouragement. While the paper may be slightly too preliminary for acceptance at this time, I am optimistic that a future version of this paper will be a wonderful contribution.\\n\\n>>(*) The authors say at several points that the approach \\u201cdid not require explicit knowledge of formal principles of causal inference.\\u201d But there seem to be a whole of lot of causal assumptions that are critically implicit in the setup. It would be good to understand this better. In particular, the different agents are hardwired to have access to different kinds of information. The interventional agent is provided with data that the conditional agent simply doesn\\u2019t get to see. Likewise, the counterfactual agent is provided with information about noise. Any sufficiently powerful learning system will realize that (and even how) the given information is relevant to the decision-making task at hand. A lot of the work (all of the work?) seems to be done by supplying the information that we know would be relevant.\\n\\nThis is an interesting point. To reason effectively about causality in the real world, a major challenge is isolating the information relevant to causal reasoning. But for our agents, we sidestep this challenge by providing the information needed in a digestible format. \\n\\nWe agree that in the long run, it will be important to develop agents that extract the right information from complex streams of raw data. In the current work, though, we just wanted to test the simplest version of a hypothesis -- whether it is possible at all for model-free RL to give rise to a causally-aware learning algorithm, when given access to the information it needs. 
\\n\\nWe think that by limiting the kinds of data available to the agent in different ways, we were able to demonstrate the 3 tiers of causal reasoning in the most controlled way. If all of the agents were supplied with all of the information, it would be harder to directly assess the different aspects of causal and counterfactual reasoning. We generally agree that sufficiently powerful learning systems will realize that information is relevant to a task -- however, before now it was unknown whether standard model-free RL algorithms can induce a causal reasoning algorithm, even if provided with the relevant data. \\n\\n--------------------------------------------------------------------\\nContinued\\u2026\"}", "{\"title\": \"Promising paper on an appealing topic, but needs a bit more work\", \"review\": \"Note: This review is coming in a bit late, already after one round of responses. So I write this with the benefit of having read the helpful previous exchange.\\n\\nI am generally positive about the paper and the broader project. The idea of showing that causal reasoning naturally emerges from certain decision-making tasks and that modern (meta-learning) RL agents can become attuned to causal structure of the world without being explicitly trained to answer causal questions is an attractive one. I also find much about the specific paper elegant and creative. Considering three grades of causal sophistication (from conditional probability to cause-effect reasoning to counterfactual prediction) seems like the right thing to do in this setting.\\n\\nDespite these positive qualities, I was confused by many of the same issues as other reviewers, and I think the paper does need some more serious revisions. Some of these are simple matters of clarification as the authors acknowledge; others, however, require further substantive work. It sounds like the authors are committed to doing some of this work, and I would like to add one more vote of encouragement. While the paper may be slightly too preliminary for acceptance at this time, I am optimistic that a future version of this paper will be a wonderful contribution.\\n\\n(*) The authors say at several points that the approach \\u201cdid not require explicit knowledge of formal principles of causal inference.\\u201d But there seem to be a whole of lot of causal assumptions that are critically implicit in the setup. It would be good to understand this better. In particular, the different agents are hardwired to have access to different kinds of information. The interventional agent is provided with data that the conditional agent simply doesn\\u2019t get to see. Likewise, the counterfactual agent is provided with information about noise. Any sufficiently powerful learning system will realize that (and even how) the given information is relevant to the decision-making task at hand. A lot of the work (all of the work?) seems to be done by supplying the information that we know would be relevant.\\n\\n(*) Previous reviewers have already made this point - I think it\\u2019s crucial - and it\\u2019s also related to the previous concern: It is not clear how difficult the tasks facing these agents actually are, nor is it clear that solving them genuinely requires causal understanding. What seems to be shown is that, by supplying information that\\u2019s critical for the task at hand, a sufficiently powerful learning agent is able to harness that information successfully. But how difficult is this task, and why does it require causal understanding? 
I do think that some of the work the authors did is quite helpful, e.g., dividing the test set between the easy and hard cases (orphan / parented, unconfounded / confounded). But I do not feel I have an adequate understanding of the task as seen, so to say, from the perspective of the agent. Specifically:\\n\\n(*) I completely second the worry one of the reviewers raised about equivalence classes and symmetries. The test set should be chosen more deliberately - not randomly - to rule out deflationary explanations of the agents\\u2019 purported success. I\\u2019m happy to hear that the authors will be looking more into this and I would be interested to know how the results look.\\n\\n(*) The \\u201cbaselines\\u201d in this paper are often not baselines at all, but rather various optimal approaches to alternative formulations of the task. I feel we need more actual baselines in order to see how well the agents of interest are doing. I don\\u2019t know how to interpret phrases like \\u201cclose to perfect\\u201d without a better understanding of how things look below perfection. \\n\\nAs a concrete case of this, just like the other reviewers, I was initially quite confused about the passive agents and why they did better than the active agents. These are passive agents who actually get to make multiple observations, rather than baseline passive agents who choose interventions in a suboptimal way. I think it would be helpful to compare against an agent who makes the same number of observations but chooses them in a suboptimal (e.g., random) way. \\n\\n(*) In relation to the existing literature on causal induction, it\\u2019s telling that implementing a perfect MAP agent in this setting is even possible. This makes me worry further about how easy these tasks are (again, provided one has all of the relevant information about the task). But it also shows that comparison with existing causal inference methods is simply inappropriate here, since those methods are designed for realistic settings where MAP inference is far from possible. I think that\\u2019s fine, but I also think it should be clarified in the paper. The point is not (at least yet) that these methods are competitors to causal inference methods that do \\u201crequire explicit knowledge of formal principles of causal inference,\\u201d but rather that we have a proof-of-concept that some elementary causal understanding may emerge from typical RL tasks when agents are faced with the right kinds of tasks and given access to the right kinds of data. That\\u2019s an interesting claim on its own. The penultimate paragraph in the paper (among other passages) seems to me quite misleading on this point.\\n\\n(*) One very minor question I have is why actions were softmax selected even in the quiz phase. What were the softmax parameters? And would some of the agents not perform a bit better if they maximized?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Reply from authors, thank you for the review (3/3)\", \"comment\": \">>-\\u201cThe values of all but one node (the root node, which is always hidden)\\u201d - so is it 4 or 5 nodes? Or it is that all possible DAGs on N=6 nodes one of which is hidden? I\\u2019m asking because in the following it seems you can intervene on any of the 5 nodes\\u2026 \\n\\nThe DAGs consist of a total of 5 nodes, one of which is hidden. 
The agent can only intervene on 4 of the nodes. We will clarify this in the updated manuscript.\\n\\n--------------------------------------------------\\n\\n>>-The intervention action is to set a given node to +5 (not really clear why), while in the quiz phase (in which the agent tries to predict the node with the highest variable) there is an intervention on a known node that is set to -5 (again not clear why, but different from the interventions seen in the T-1 steps). \\n\\nThanks for pointing out that we never explained our choices for these values. We will improve our exposition in the updated manuscript. \\n\\nThe intervention action sets a node to a value (+5) outside the likely range of passive observations. This facilitates learning the causal graph. One of the important control conditions is an agent that receives samples from the graph conditioned on one of the nodes being +5, in order to directly assess the benefits of intervention. \\n\\nThe intervention in the quiz phase sets a node to a value never before seen. This disallows the agent from memorizing the results of its interventions in the information phase. \\n\\n--------------------------------------------------\\n\\n>>-Active-Conditional is only marginally below Passive-Conditional, \\u201cindicating that when the agent is allowed to choose its actions, it makes reasonable choices\\u201d - not really, it should perform better, not \\u201cmarginally below\\u201d... Same for all the other settings\\n\\nWe hope to have clarified this in the explanation above.\\n\\n--------------------------------------------------\\n\\n>>-Why not use the MAP baseline for the observational case?\\n\\nThis is a good point. We will add a MAP baseline that records the optimal causal induction / discovery possible with purely observational data. However, the key observation from this experiment was that the conditional agent outperformed the optimal associative baseline. This indicates that that the conditional agent drew causal inferences from observational data (i.e., learned to perform do-calculus). This is highlighted by the fact that this improvement manifested only in test cases where do-calculus makes a prediction distinguishable from the predictions based on correlations. These are cases where the externally intervened node has a parent, so that graph surgery results in a different graph. \\n\\n--------------------------------------------------\\n\\n>>-What data does the Passive Conditional algorithms in Experiment 2? Only observations (so a subset of the data)?\\n\\nThe Passive Conditional agent receives conditional samples from the distribution defined by the DAG. These samples are conditioned on one of the nodes having a value of +5. This is described in the subsection about conditional agents on Page 5 of the original manuscript. However, we appreciate that this description was somewhat buried. In the revision we draw more attention to this mechanism. \\n\\n--------------------------------------------------\\n\\n\\n>>-What are the unobserved confounders you mention in the results of Experiment 2? I thought there is only one unobserved confounder (the root node)? Where do the others come from?\\n\\nThe reviewer is right in noting that there is only one unobserved confounder. The plural was not intended to refer to multiple confounders within a single graph, however we realize that this was confusing, and we will remove the usage of the plural in the revision. 
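\n\n(Relating this back to the earlier question about the Passive Conditional agent's data: that agent sees samples from the conditional distribution p(X | X_j = +5), whereas agents with interventional data see samples from p(X | do(X_j = +5)). The two distributions can differ whenever X_j has parents, and in particular when the hidden root is a shared parent of X_j and other observed nodes, which is exactly what the confounded test cases in Figure 4(b) isolate.)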
\\n\\n--------------------------------------------------\\n\\n>>-The counterfactual setting possibly lacks an optimal algorithm? \\n\\nThere is an optimal algorithm in the counterfactual setting. Thanks for drawing attention to this. In the revision we will include a baseline that records the specific exogenous noise and draws the correct counterfactual prediction from it. Notwithstanding this, the key observation in this experiment was that the counterfactual agent earned more reward than the MAP baseline. This is sufficient to infer that the agent used information about the specific exogenous noise (i.e. counterfactual inference), and not just information about the causal structure. This observation is also consistent with the fact that the performance improvement manifested only in the presence of degenerate maximum valued nodes. \\n\\nWe\\u2019d like to thank the reviewer again for their detailed and insightful comments, and look forward to their reply. Our manuscript has hugely benefited from this feedback.\"}", "{\"title\": \"Reply from authors, thank you for the review (2/3)\", \"comment\": \">>-Seem to be missing a lot of literature on causality and bandits, or reinforcement learning (for example: https://arxiv.org/abs/1606.03203, https://arxiv.org/abs/1701.02789, http://proceedings.mlr.press/v70/forney17a.html)\\n\\nWe thank the reviewer for these helpful references and will incorporate a better literature review in our updated manuscript.\\n\\n--------------------------------------------------\\n>>-Many details were unclear to me and in general the clarity of the description could be improved\\n\\nWe hope that by addressing the reviewers concerns as detailed below, the clarity of the updated manuscript will be much improved.\\n\\n--------------------------------------------------\\n\\n>>In general, I think the paper could be opening up an interesting research direction, but unfortunately I\\u2019m not sure it is ready yet. \\n\\n>>Details:\\n>>-Abstract: \\u201cThough powerful formalisms for causal reasoning have been developed, applying them in real-world domains is often difficult because the frameworks make idealized assumptions\\u201d. Although possibly true, this sounds a bit strong, given the paper\\u2019s results. What assumptions do your agents make? At the moment the agents you presented work on an incredibly small subset of causal graphs (not even all linear gaussian models with a hidden variable\\u2026), and it\\u2019s even not compared properly against the standard causal reasoning/causal discovery algorithms\\u2026\\n\\nWe agree that our paper in its current form does not speak to improving causal reasoning in real-world domains. The goal of our paper is to demonstrate the proof of principle that causal reasoning can arise out of model-free reinforcement learning. The above statement was made to motivate the potential practical value of RL algorithms that can make fast, amortized causal inferences at run time, learned end-to-end from large and high dimensional data that might be intractable for traditional causal inference algorithms. However, we agree that this statement is strong and will edit it to make sure our contributions are more accurately represented.\\n\\n--------------------------------------------------\\n>>-Footnote 1: \\u201cthis formalism for causal reasoning assumes that the structure of the causal graph is known\\u201d - (Spirtes et al. 
2001) present several causal discovery (here \\u201ccausal induction\\u201d) methods that recover the graph from data.\\n\\nIt was not our intention to dismiss the literature, but to make clear to readers that the tasks our agent performs in the three experiments do not fully equate to the three levels of causal reasoning we formalize in the section on causality, as these assume that the graph structure is known. We will change this phrasing.\\n\\n--------------------------------------------------\\n\\n\\n>>-Section 2.1 \\u201cX_i is a potential cause of X_j\\u201d - it\\u2019s a cause, not potential, maybe potentially not direct.\\n\\nWe are using the definition in the book Causal Inference in Statistics - A Primer, Judea Pearl, Madelyn Glymour, Nicholas P. Jewell, 2016, page 27: \\u201cIf X is a descendant of Y, then Y is a potential cause of X (there are rare intransitive cases in which Y will not be a cause of X, which we will discuss in Part Two).\\u201d, in the interest of being as general as possible in our introductory section on causality. \\n\\n--------------------------------------------------\\n\\n\\n>>-Section 3: 3^(N-1)/2 is not the number of possible DAGs, that\\u2019s described by this sequence: https://oeis.org/A003024. Rather that is the number of (possibly cyclic) graphs with either -1, 1 or 0 on the edges. \\n\\nOur sampling procedure was not adequately explained in our paper, we thank the reviewer for drawing our attention to our lack of clarity on this point. We have tried to delineate the process more clearly below, and will include an explanation in the updated manuscript.\\n\\nWe first consider the n*(n-1)/2 edges in the strictly upper-triangular part of the adjacency matrix. The number 3^(n*(n-1)/2) for the total number of graphs is derived from each of these edges independently having weights -1, 0, or 1. These are all guaranteed to be acyclic. We will revise to be more clear about what distribution of graphs we are sampling from.\\n\\n--------------------------------------------------\\n\\ncont\\u2019d...\"}", "{\"title\": \"Reply from authors, thank you for the review (1/3)\", \"comment\": \"TL;DR: We will significantly improve clarity of our paper and will update our results with larger graphs, addressing the reviewers\\u2019 concerns about generalizability.\\n\\nWe thank the reviewer for their interest, time, and detailed review! We feel that most of the criticisms are not fundamental concerns about the content but rather pertain to lack of clarity in the text. We are grateful for the opportunity to improve our exposition. We have made extensive clarifications below, and hope the reviewer will find our paper much improved as a result.\\n\\nWe have grouped some of the suggestions below so as to make responses more coherent.\\n\\n--------------------------------------------------\\n\\n\\n>>-Task does not necessarily require causal knowledge (predict the node with the highest value in this restricted linear setting)\\n\\n\\n>>-Very limited experimental setting (causal graphs with 5 nodes, one of which hidden, linear Gaussian with +/- 1 coefficients, with interventions in training set always +5, and in test set always -5) and lukewarm results, that don\\u2019t seem enough for the strong claims. This is one of the easiest ways to improve the paper.\\n\\nWe agree that generalization to more settings is important but we think that our setting is sufficient to answer the question posed in this work. 
We respectfully disagree that the simplicity of the domain means that causal knowledge is not required. That the agent successfully makes predictions about the effect of an intervention, makes correct causal inference despite the presence of a hidden confounder, and makes counterfactual predictions, are all hallmarks of causal knowledge. The agent outperforms the best possible non-causal algorithm. We also point to standard textbooks [Causal Inference in Statistics - A Primer, Judea Pearl, Madelyn Glymour, Nicholas P. Jewell, 2016] describing and analysing causal inference in exactly such linear Gaussian settings. A non-linear setting would possibly make this more challenging for the agent, but our main goal was to demonstrate that our meta-learning approach using model-free RL, can learn to exploit causal structure changing at each episode -- an entirely new area of research. We felt that this simple setting afforded the most unencumbered test for causal reasoning.\\n\\nNevertheless, we agree that some demonstration of the generalizability of our approach will strengthen our paper. In Appendix D, we present results on non-linear causal graphs, and in the revision we will include results with larger graphs. We used a test intervention (-5) far outside the learning distribution (always +5) because this is a strong test for having encoded the underlying causal graph. We agree that there are many ways to generalize our approach, and have made an effort to demonstrate it with some non-linear graphs and larger numbers of nodes; but convincingly and fairly testing the limits of this generalizability would require using more sophisticated agents and training regimes, and is orthogonal to the purview of our current work.\\n\\n--------------------------------------------------\\n\\n>>-In the rare cases in which there are some causal baselines (e.g. MAP baseline), they seem to outperform the proposed algorithms (e.g. Experiment 2)\\n\\nOur analyses are focused on looking for evidence that the RL agent takes advantage of causal information. The MAP baseline is an upper bound on performance. The key result for Experiment 2 is that the agent learns an important aspect of causal reasoning i.e. to resolve unobserved confounders with interventions. In Figure 4(a) we see that the agent with access to interventional data performs better than an agent with access to only observational data, reaching close to optimal MAP performance. Figure 4(b) shows that the performance increase is greater in cases where the intervened node shared an unobserved parent (a confounder) with other variables in the graph.\\n\\n--------------------------------------------------\\n\\n\\n>>-Somehow the \\u201cactive\\u201d setting in which the agent can decide the intervention targets seems to always perform worse than the \\u201cpassive\\u201d setting, in which the targets are already chosen. This is very puzzling for me, I thought that choosing the targets should improve the results\\u2026\\n\\nWe apologize for not explaining this adequately in the manuscript. The intervention policy hard-coded into the passive agent (a single intervention on each of the 4 observable nodes) is near optimal. In the zero noise limit, it is optimal. The active agent, on the other hand, must learn a good exploration policy from scratch. This is an extra challenge that results in slightly worse performance. 
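\n\nTo make the comparison concrete, the passive agent's hard-coded information phase amounts to roughly the following sketch (illustrative only: the helper names, the use of node index 0 for the hidden root, and the 0.1 noise variance quoted from the discussion are assumptions for exposition rather than our exact implementation; the {-1, 0, 1} edge weights, the weighted-parent-mean node values, and the +5 interventions are as described in our replies on this page):\n\nimport numpy as np\n\ndef sample_graph(n=5):\n    # Edge weights on the strictly upper-triangular part of the adjacency\n    # matrix, each drawn independently from {-1, 0, 1}; such graphs are\n    # acyclic by construction.\n    return np.triu(np.random.choice([-1.0, 0.0, 1.0], size=(n, n)), k=1)\n\ndef sample_values(w, do_node=None, do_value=5.0, noise_var=0.1):\n    # Ancestral sampling: with an upper-triangular weight matrix, every\n    # parent of node i has a smaller index than i. A child's mean is the\n    # weighted mean of its parents' values; an intervened node is clamped.\n    n = w.shape[0]\n    x = np.zeros(n)\n    for i in range(n):\n        if i == do_node:\n            x[i] = do_value\n            continue\n        parents = np.flatnonzero(w[:, i])\n        mean = x[parents] @ w[parents, i] / len(parents) if len(parents) else 0.0\n        x[i] = mean + np.sqrt(noise_var) * np.random.randn()\n    return x\n\ndef passive_information_phase(w):\n    # The hard-coded policy: one +5 intervention on each of the four\n    # observable nodes (node 0 plays the role of the hidden root here).\n    return [sample_values(w, do_node=i) for i in range(1, 5)]\n\nThe active agent has to discover an equivalently informative sequence of interventions on its own, which is the extra burden referred to above.\n\n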
In future work, it might be interesting to examine domains where it is difficult to hand-craft an effective exploration policy, and determine whether the meta-RL approach can find such a policy. We will improve our explanation of this issue in the revised text.\\n\\n--------------------------------------------------\\n\\ncont\\u2019d..\"}", "{\"title\": \"Reply from authors, thank you for the review (3/3)\", \"comment\": \">>5. Although the random choice would result in a score of -5/4, I think\\n it's quite easy and trivial to beat that by just ignoring the node\\n that's externally intervened on and assigned -5, given it's a small\\n value. This probably doesn't require the agent to be able to do\\n \\\"causal reasoning\\\" ... That immediately gives you a lower bound of\\n 0. That might be more appropriate.\\n\\nWe agree that this is a better baseline. In the original submission we described it in Appendix A. In the revision we will move it to the main text.\\n\\n--------------------------------------------------\\n\\n>>If you could give a distribution of the max(mean_i(Xi)) over all\\n graphs (including weights in your distribution), it could give an\\n idea of how unlikely it is for the agent to get a high score without\\n actually learning the causal structure.\\n\\nThanks for this suggestion. We will add a plot showing the distributions of the values of mean_i(Xi) and max(mean_i(Xi)) over all graphs.\\n--------------------------------------------------\\n\\n>>Suggestions for improving the work:\\n\\nWe have grouped some of the suggestions below so as to make responses more concise.\\n--------------------------------------------------\\n\\n>>- Focus on more intuitive notions that clearly require causal\\n knowledge, or motivate your objective very clearly to show its sufficiency.\\n\\n>>Of course, it would be great if you can probe the agent to see what it\\nreally learnt. But I understand that could be a long-shot.\\n\\nWe have tried to clarify above our analysis of what the agent learned.\\n\\n--------------------------------------------------\\n\\n>>- Perhaps discuss simpler examples (e.g., 3 node), where it's easy to\\n enumerate all causal structures and group them into appropriate\\n equivalence classes.\\n\\nIn the revision, we will explicitly partition equivalence classes between training and testing, as described above. We will then also show examples of 3-node graphs at test time whose equivalence class was excluded from training.\\n\\n--------------------------------------------------\\n\\n>>- Provide results on wider range of experiments (eg more even\\n train-test split, choice of T), or at minimum justify choices\\n made. And address the issues above.\\n\\n\\n>>- You could show a few experiments with larger N by sampling from the space of all possible\\n DAGs, instead of enumerating everything.\\n\\n>>Another open problem is whether this approach can scale to larger number of\\nvariables, in particular the learning might be very data hungry.\\n\\n\\nWe hope to have addressed the reviewers' concerns about the setup in the replies above. The goal of our paper was to show the first evidence that causal reasoning can arise from model-free RL. We also demonstrate that the same agent discovers a good experimentation policy from which it learns the environment\\u2019s causal structure. This lays the groundwork for sophisticated reinforcement learning agents that can actively interact with and learn about their environments. 
Our synthetic environment of 5 node random DAGs is simple enough to concretely demonstrate causal reasoning, as well as to benchmark the learned intervention policy; but is not tailored so specifically to causal inference as to lose connection to typical RL tasks. While scalability is an important question, we think that it is outside the scope of this paper.\\n\\n However, in the revision, we will discuss issues related to scaling: a) compositional agent architectures might allow the agent to utilize symmetries such as equivalence classes, thus lowering the training data requirements [2], and b) more advanced training regimes might alleviate the credit assignment problem, allowing successful training with longer episode lengths [3].\\n\\n--------------------------------------------------\\n\\n>>- Please proof-read and make sure you've defined all terms (there are\\n a few, such as Xp/Xf in Expt 3, where p/f are not really defined).\\n\\t\\nThese points are duly noted and we will address them in our updated manuscript. \\n\\nWe\\u2019d like to thank the reviewer again for their detailed and insightful comments, and look forward to their reply. Our manuscript has hugely benefited from this feedback.\\n\\n\\n[1] Hochreiter, S., Bengio, Y., Frasconi, P., & Schmidhuber, J. (2001). Gradient flow in recurrent nets: the difficulty of learning long-term dependencies.\\n[2] Battaglia, P. W., Hamrick, J. B., Bapst, V., Sanchez-Gonzalez, A., Zambaldi, V., Malinowski, M., ... & Gulcehre, C. (2018). Relational inductive biases, deep learning, and graph networks.\\n[3] Bengio, Y., & Frasconi, P. (1994). Credit assignment through time: Alternatives to backpropagation.\"}", "{\"title\": \"Reply from authors, thank you for the review (2/3)\", \"comment\": \">>3. Why such a low number of learning steps T (T=5 in paper) in each episode? no\\nexperimentation over choice of T or discussion of this choice is\\ngiven. And it is mentioned in the findings, in several cases, that\\nthe active agent is only merely comparable to the passive agent, while\\none would think active would be better. If T were reasonably higher\\n(not too low, not too high), one would expect to see a difference.\\n\\nWe thank the reviewer for this useful feedback. We choose an episode length of T = 5 (= number of nodes) with 4 learning steps and 1 test step since, in the noise-free limit, exactly 4 interventions (one on each of the 4 observable nodes) are sufficient to fully distill the causal structure required to get maximum performance on the test phase (since the hidden node is never intervened upon in the test phase). In a longer episode, training would be more difficult since the reward (only given once at the end of each episode) becomes sparser and credit assignment more challenging [1]. Meanwhile, in a shorter episode it would be impossible to infer the causal structure in general.\", \"to_clarify_the_relative_performances_of_the_passive_and_active_agents\": \"The intervention policy (a single intervention at each of the 4 observable nodes) of the passive agent is a good policy for this domain. In the limit of zero noise, as mentioned above, it is the optimal intervention policy. The active agent performs worse because in addition to learning to reason from the results of interventions, it must also learn an exploration policy. 
Given a passive agent with a suboptimal fixed policy, in a task where smart exploration is crucial, indeed we expect the passive agent to perform worse than an agent that can actively learn an exploration policy. However, in this work, the passive policy is close to optimal and therefore acts as a benchmark for the active agent.\\n\\nWe will include clarification on both of these points in our updated manuscript. \\n\\n-------------------\\n\\n>>4. Although I have concerns listed above, something about Figure 2(a)\\n seems to suggest that the agent is learning something. I think if\\n you had tried to probe into what the agent is actually learning, it\\n would have clarified many doubts.\\n\\n>>However, in Figure 2(c), if the black node is -5, why is the node\\nbelow left at -2.5? The weight on the edge is 1 and other parent is\\n0, so -2.5 seems extremely unlikely, given that the variance is 0.1\\n(stdev ~ 0.3, so ~8 standard deviations away!). (Similar issue in\\nFigure 3c)\\n\\nWe appreciate the attention to detail! We did find an error in Figure 4(c) and have addressed it in the new version of the manuscript. Figure 2(c) however seems correct. The mean value of a child node is given by the weighted mean of its parents\\u2019 values. So -2.5 = (-5.0 x 1 + 0.0 x 1)/2.\\n\\nRegarding \\u201cwhat the agent is actually learning\\u201d:\\n\\nIn Section 4.1 and Figure 2, we show that the agent learns to perform some do-calculus. In Figure 2(a) we see that, compared to the highest possible reward achievable without causal knowledge, the trained agent received more reward. This observation is corroborated by Figure 2(b) which shows that performance increased selectively in cases where do-calculus made a prediction distinguishable from the predictions based on correlations. These are situations where the externally intervened node had a parent -- meaning that the intervention resulted in a different graph.\\n\\nIn Section 4.2 and Figure 4, we show that the agent learns to resolve unobserved confounders using interventions (a feat impossible with only observational data). In Figure 4(a) we see that the agent with access to interventional data performs better than an agent with access to only observational data. Figure 4(b) shows that the performance increase is greater in cases where the intervened node shared an unobserved parent (a confounder) with other variables in the graph. In this section we also compare the agent\\u2019s performance to a MAP estimate of the causal structure and find that the agent\\u2019s performance matches it, indicating that the agent is indeed doing close to optimal causal inference.\\n\\nIn Section 4.3 and Figure 6, we show that the agent learns to use counterfactuals. In Figure 6(a) we see that the agent with additional access to the specific randomness in the test phase performs better than an agent with access to only interventional data. In Figure 6(b), we find that the increased performance is observed only in cases where the maximum mean value in the graph is degenerate, and optimal choice is affected by the exogenous noise -- i.e. 
where multiple nodes have the same value on average and the specific randomness can be used to distinguish their actual values in that specific case.\\n\\nIn all of these three cases we also show an example graph (in Figures 2(c), 4(c), and 6(c)) where the agent\\u2019s behavior demonstrates what it has learned in each experiment.\\n\\nWe will make these three points more directly in the revised text.\\n\\ncontd.\"}", "{\"title\": \"Reply from authors, thank you for the review (1/3)\", \"comment\": \"TL;DR: We will significantly improve clarity of our paper and will update our results with larger graphs, addressing the reviewers\\u2019 concerns about train/test splits and scalability.\\n\\nWe thank the reviewer for their detailed review! We think all of the points are readily addressable without fundamental changes to the paper. We hope that the reviewer will find our paper much improved based on the changes and clarifications detailed below.\\n--------------------------------------------------\\n\\n>>1. Why is the task to select the node with the highest \\\"value\\\"\\n(value=expected value? the sample? what is it?) under some random\\nexternal intervention? It feels very indirect.\\n\\n>>Why not explicitly model certain useful actions that directly query\\nthe structure, such as:\\n\\n>>- selecting nodes that are parents/children of a node\\n>>- evaluating p(x | y) or p(x | do(y))?\\n\\n>>The agent's reward was the value of the chosen node in the sample at that time step.\\n\\nWe did consider training the agent to perform explicit causal inference, but instead choose this more indirect objective to demonstrate that RL algorithms can learn to infer and utilize underlying causal structure when it is relevant to the rewarding task even when the task does not explicitly involve resolving that causal structure. This allows us to make a more general statement that also applies to the kinds of tasks prevalent in RL.\\n\\n--------------------------------------------------\\n\\n>>2. The way you generate test data might introduce biases:\\n\\n>>- If you enumerate 3^(n(n-1)/2) DAGs, some of them will have loops. Do you weed them out?\\n Does it matter?\\n\\n>>- How do you sample weights from {-1, 0, 1}? uniform? What happens if\\n wij = 0? This introduces bias in your training data. This means\\n your distribution is over DAGs + weights, not just DAGs.\\n\\n>>- Your training/test split doesn't take into account certain\\n equivalence/symmetries that might be present in your training data,\\n making it hard to rule out whether your agents are in effect\\n memorizing training data, specially that the number of test graphs\\n is so tiny (300, while test could have been in the thousands too):\\n\\n>>Example, if you graph just has one causal connection with weight = 1:\\n X1 -> X2; X3; X4; X5, This is clearly equivalent to X2 -> X1; X3; X4; X5.\\n Or the structure X1 -> X2 might be present in a larger graph, example with these two components:\\n X1 -> X2; X3 -> X4 -> X5;\\n\\nOur sampling procedure was not adequately explained in our paper; we thank the reviewer for drawing our attention to our lack of clarity on this point. We have tried to delineate the process more clearly below, and will include an improved explanation in the updated manuscript.\\n\\nWe first consider the n*(n-1)/2 edges represented by the upper-diagonal of the adjacency matrix. Any graph that only contains only some subset of these edges is guaranteed to be a Directed Acyclic Graph, and contains no loops. 
The number 3^(n*(n-1)/2) for the total number of graphs is derived from each of these edges independently having weights -1, 0, or 1. As the reviewer pointed out, we are indeed sampling from a distribution of DAGs + weights. We do not uniformly sample over the space of DAGs, but rather we sample uniformly over the space of graphs formed by randomly assigning the edges in the upper triangular of the adjacency matrix uniformly from {0, -1, 1}. These are all guaranteed to be DAGs. Our sampling procedure means that nodes are more likely to be connected than if we had sampled the presence or absence of each edge uniformly, but this does not affect our results.\\n\\nThe observation about equivalence classes is an excellent point. We would like to point out however, that while such equivalences exist, they are not obvious to a neural network, and the examples outlined above by the reviewer cannot be solved with memorization. Nevertheless, we agree that generalization outside the equivalence class is a stronger claim and we will update our results with simulations where we exclude entire equivalence classes from the training set and test on these held-out classes. \\n\\n--------------------------------------------------\\n\\ncontinued...\"}", "{\"title\": \"Reply from authors, thank you for the review.\", \"comment\": \"We thank the reviewer for their interest and time. Please find below our responses to their suggestions and comments.\\n\\n>>The experiments are so far synthetic, but it would be really interesting to see how the lessons learned extend to more realistic environments. It would also be very nice to have a sequence of increasingly complex synthetic environments where causal inference is the task of interest, such that we can compare the performance of different RL algorithms in this task (the authors only used one).\\n--------------------------------------------------\\nWe strongly agree that scaling our method to complex and realistic environments is worthwhile. However, we think it is outside the scope of this paper. Our goal here is to determine whether it is possible to learn a causally-aware algorithm through model-free reinforcement learning. For this purpose, a simple environment -- and correspondingly simple agent architecture -- facilitates interpretation. We therefore think it is important to start here. The current results already stretch the page limit, so we plan to follow up with scaling results in a separate paper. \\n\\n>>I would change the title to \\\"Causal Reasoning from Reinforcement Learning\\\", since \\\"meta-learning\\\" is an over-loaded term and I do not clearly see its prevalence on this submission.\\n--------------------------------------------------\\nWe thank the reviewer for this great suggestion, and will accordingly update our title.\\n\\nWe\\u2019d like to thank the reviewer again for their time and encouragement, and look forward to hearing back!\"}", "{\"title\": \"Potentially interesting, but possibly not ready yet\", \"review\": \"This paper aims at training agents to perform causal reasoning with RL in three settings: observational (the agent can only obtain one observational sample at a time), interventional (the agent can obtain an interventional sample at a time for a given perfect intervention on a given variable) and a counterfactual setting (the agent can obtains interventional samples, but the prediction is about the case in which the same noise variables were sampled, but a different intervention was performed) . 
In each of these settings, after T-1 steps of information gathering, the algorithm is supposed to select the node with the highest value in the last step. Different types of agents are evaluated on a limited simulated dataset, with weak and not completely interpretable results.\", \"pros\": \"-Using RL to learn causal reasoning is a very interesting and worthwhile task.\\n-The paper tries to systematize the comparison of different settings with different available data.\", \"cons\": \"-Task does not necessarily require causal knowledge (predict the node with the highest value in this restricted linear setting)\\n-Very limited experimental setting (causal graphs with 5 nodes, one of which hidden, linear Gaussian with +/- 1 coefficients, with interventions in training set always +5, and in test set always -5) and lukewarm results, that don\\u2019t seem enough for the strong claims. This is one of the easiest ways to improve the paper.\\n-In the rare cases in which there are some causal baselines (e.g. MAP baseline), they seem to outperform the proposed algorithms (e.g. Experiment 2)\\n-Somehow the \\u201cactive\\u201d setting in which the agent can decide the intervention targets seems to always perform worse than the \\u201cpassive\\u201d setting, in which the targets are already chosen. This is very puzzling for me, I thought that choosing the targets should improve the results...\\n-Seem to be missing a lot of literature on causality and bandits, or reinforcement learning (for example: https://arxiv.org/abs/1606.03203, https://arxiv.org/abs/1701.02789, http://proceedings.mlr.press/v70/forney17a.html)\\n-Many details were unclear to me and in general the clarity of the description could be improved\\n\\nIn general, I think the paper could be opening up an interesting research direction, but unfortunately I\\u2019m not sure it is ready yet.\", \"details\": \"-Abstract: \\u201cThough powerful formalisms for causal reasoning have been developed, applying them in real-world domains is often difficult because the frameworks make idealized assumptions\\u201d. Although possibly true, this sounds a bit strong, given the paper\\u2019s results. What assumptions do your agents make? At the moment the agents you presented work on an incredibly small subset of causal graphs (not even all linear gaussian models with a hidden variable\\u2026), and it\\u2019s even not compared properly against the standard causal reasoning/causal discovery algorithms...\\n-Footnote 1: \\u201cthis formalism for causal reasoning assumes that the structure of the causal graph is known\\u201d - (Spirtes et al. 2001) present several causal discovery (here \\u201ccausal induction\\u201d) methods that recover the graph from data.\\n-Section 2.1 \\u201cX_i is a potential cause of X_j\\u201d - it\\u2019s a cause, not potential, maybe potentially not direct.\\n-Section 3: 3^(N-1)/2 is not the number of possible DAGs, that\\u2019s described by this sequence: https://oeis.org/A003024. Rather that is the number of (possibly cyclic) graphs with either -1, 1 or 0 on the edges. \\n-\\u201cThe values of all but one node (the root node, which is always hidden)\\u201d - so is it 4 or 5 nodes? Or it is that all possible DAGs on N=6 nodes one of which is hidden? 
I\\u2019m asking because in the following it seems you can intervene on any of the 5 nodes\\u2026 \\n-The intervention action is to set a given node to +5 (not really clear why), while in the quiz phase (in which the agent tries to predict the node with the highest variable) there is an intervention on a known node that is set to -5 (again not clear why, but different from the interventions seen in the T-1 steps). \\n-Active-Conditional is only marginally below Passive-Conditional, \\u201cindicating that when the agent is allowed to choose its actions, it makes reasonable choices\\u201d - not really, it should perform better, not \\u201cmarginally below\\u201d... Same for all the other settings\\n-Why not use the MAP baseline for the observational case?\\n-What data does the Passive Conditional algorithms in Experiment 2? Only observations (so a subset of the data)?\\n-What are the unobserved confounders you mention in the results of Experiment 2? I thought there is only one unobserved confounder (the root node)? Where do the others come from?\\n-The counterfactual setting possibly lacks an optimal algorithm?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Promising but several shortcomings\", \"review\": \"I think Causality is an important area, and seeing how RL can help in\\nany aspect is something really worth looking into.\\n\\nHowever, I have a few qualms about the setting and the way the tasks\\nare modeled.\\n\\n1. Why is the task to select the node with the highest \\\"value\\\"\\n(value=expected value? the sample? what is it?) under some random\\nexternal intervention? It feels very indirect.\\n\\nWhy not explicitly model certain useful actions that directly query\\nthe structure, such as:\\n\\n- selecting nodes that are parents/children of a node\\n- evaluating p(x | y) or p(x | do(y))?\\n\\n2. The way you generate test data might introduce biases:\\n\\n- If you enumerate 3^(n(n-1)/2) DAGs, some of them will have loops. Do you weed them out?\\n Does it matter?\\n\\n- How do you sample weights from {-1, 0, 1}? uniform? What happens if\\n wij = 0? This introduces bias in your training data. This means\\n your distribution is over DAGs + weights, not just DAGs.\\n\\n- Your training/test split doesn't take into account certain\\n equivalence/symmetries that might be present in your training data,\\n making it hard to rule out whether your agents are in effect\\n memorizing training data, specially that the number of test graphs\\n is so tiny (300, while test could have been in the thousands too):\\n\\nExample, if you graph just has one causal connection with weight = 1:\\n X1 -> X2; X3; X4; X5, This is clearly equivalent to X2 -> X1; X3; X4; X5.\\n Or the structure X1 -> X2 might be present in a larger graph, example with these two components:\\n X1 -> X2; X3 -> X4 -> X5;\\n\\n3. Why such a low number of learning steps T (T=5 in paper) in each episode? no\\nexperimentation over choice of T or discussion of this choice is\\ngiven. And it is mentioned in the findings, in several cases, that\\nthe active agent is only merely comparable to the passive agent, while\\none would think active would be better. If T were reasonably higher\\n(not too low, not too high), one would expect to see a difference.\\n\\n4. Although I have concerns listed above, something about Figure 2(a)\\n seems to suggest that the agent is learning something. 
I think if\\n you had tried to probe into what the agent is actually learning, it\\n would have clarified many doubts.\\n\\nHowever, in Figure 2(c), if the black node is -5, why is the node\\nbelow left at -2.5? The weight on the edge is 1 and other parent is\\n0, so -2.5 seems extremely unlikely, given that the variance is 0.1\\n(stdev ~ 0.3, so ~8 standard deviations away!). (Similar issue in\\nFigure 3c)\\n\\n5. Although the random choice would result in a score of -5/4, I think\\n it's quite easy and trivial to beat that by just ignoring the node\\n that's externally intervened on and assigned -5, given it's a small\\n value. This probably doesn't require the agent to be able to do\\n \\\"causal reasoning\\\" ... That immediately gives you a lower bound of\\n 0. That might be more appropriate.\\n\\n If you could give a distribution of the max(mean_i(Xi)) over all\\n graphs (including weights in your distribution), it could give an\\n idea of how unlikely it is for the agent to get a high score without\\n actually learning the causal structure.\", \"suggestions_for_improving_the_work\": \"- Provide results on wider range of experiments (eg more even\\n train-test split, choice of T), or at minimum justify choices\\n made. And address the issues above.\\n\\n- Focus on more intuitive notions that clearly require causal\\n knowledge, or motivate your objective very clearly to show its\\n sufficiency.\\n\\n- Perhaps discuss simpler examples (e.g., 3 node), where it's easy to\\n enumerate all causal structures and group them into appropriate\\n equivalence classes.\\n\\n- Please proof-read and make sure you've defined all terms (there are\\n a few, such as Xp/Xf in Expt 3, where p/f are not really defined).\\n\\n- You could show a few experiments with larger N by sampling from the space of all possible\\n DAGs, instead of enumerating everything.\\n\\nOf course, it would be great if you can probe the agent to see what it\\nreally learnt. But I understand that could be a long-shot.\\n\\nAnother open problem is whether this approach can scale to larger number of\\nvariables, in particular the learning might be very data hungry.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good paper on important topic\", \"review\": \"This submission is an great ablation study on the capabilities of modern reinforcement learning to discover the causal structure of a synthetic environment. The study separates cases where the agents can only observe or they can also act, showing the expected gains of active intervention.\\n\\nThe experiments are so far synthetic, but it would be really interesting to see how the lessons learned extend to more realistic environments. It would also be very nice to have a sequence of increasingly complex synthetic environments where causal inference is the task of interest, such that we can compare the performance of different RL algorithms in this task (the authors only used one).\\n\\nI would change the title to \\\"Causal Reasoning from Reinforcement Learning\\\", since \\\"meta-learning\\\" is an over-loaded term and I do not clearly see its prevalence on this submission.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
rkVOXhAqY7
The Conditional Entropy Bottleneck
[ "Ian Fischer" ]
We present a new family of objective functions, which we term the Conditional Entropy Bottleneck (CEB). These objectives are motivated by the Minimum Necessary Information (MNI) criterion. We demonstrate the application of CEB to classification tasks. We show that CEB gives: well-calibrated predictions; strong detection of challenging out-of-distribution examples and powerful whitebox adversarial examples; and substantial robustness to those adversaries. Finally, we report that CEB fails to learn from information-free datasets, providing a possible resolution to the problem of generalization observed in Zhang et al. (2016).
[ "representation learning", "information theory", "uncertainty", "out-of-distribution detection", "adversarial example robustness", "generalization", "objective function" ]
https://openreview.net/pdf?id=rkVOXhAqY7
https://openreview.net/forum?id=rkVOXhAqY7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1lc77rkx4", "Skx2S8ZEyE", "HJeLz4ey1E", "BJeYXIm907", "Hkx3Y3f5R7", "S1g2EolPCQ", "rJx2ypTTT7", "Bkg6uTmaTQ", "S1eAXuma6m", "ryx7fSXaaQ", "BkgjjVm6p7", "S1e2IVQppX", "BJewrEQTTX", "Hyx7DifTpX", "H1lTT5fp6m", "rJgNX5MT6m", "ryx8V2-6Tm", "Sye2s1-paX", "rJxskIxpaQ", "BklfFeO56Q", "ByxNXe-Chm", "H1ehV4-62X", "Syxd_Ta2n7", "HkeB8Ry92X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_review" ], "note_created": [ 1544667938210, 1543931459899, 1543599117962, 1543284257383, 1543281795559, 1543076659869, 1542474980480, 1542434165398, 1542432805681, 1542432011421, 1542431907122, 1542431828194, 1542431807346, 1542429530882, 1542429380730, 1542429211561, 1542425646240, 1542422435808, 1542419938723, 1542254713741, 1541439515808, 1541375028452, 1541361008320, 1541172812869 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1375/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1375/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/Authors" ], [ "ICLR.cc/2019/Conference/Paper1375/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1375/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1375/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1375/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1375/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a criterion for representation learning, minimum necessary information, which states that for a task defined by some joint probability distribution P(X,Y) and the goal of (for example) predicting Y from X, a learned representation of X, denoted Z, should satisfy the equality I(X;Y) = I(X;Z) = I(Y;Z). The authors then propose an objective function, the conditional entropy bottleneck (CEB), to ensure that a learned representation satisfies the minimum necessary information criterion, and a variational approximation to the conditional entropy bottleneck that can be parameterized using deep networks and optimized with standard methods such as stochastic gradient descent. The authors also relate the conditional entropy bottleneck to the information bottleneck Lagrangian proposed by Tishby, showing that the CEB corresponds to the information bottleneck with \\u03b2 = 0.5. An important contribution of this work is that it gives a theoretical justification for selecting a specific value of \\u03b2 rather than testing multiple values. 
Experiments on Fashion-MNIST show that, in comparison to a deterministic classifier and to variational information bottleneck models with \\u03b2 in {0.01, 0.1, 0.5}, the CEB model achieves good accuracy and calibration, is competitive at detecting out-of-distribution inputs, and is more resistant to white-box adversarial attacks. Another experiment demonstrates that a model trained with the CEB criterion is *unable* to memorize a randomly labeled version of Fashion-MNIST. There was a strong difference of opinion between the reviewers on this paper. One reviewer (R1) dismissed the work as trivial. The authors rebutted this claim in their response and revision, and R1 failed to participate in the discussion, so the AC strongly discounted this review. The other two reviewers had some concerns about the paper, most of which were addressed by the revision. But, crucially, some concerns still remain. R4 would like more theoretical rigor in the paper, while R2 would like a direct comparison against MINE and CPC. In the end, the AC thinks that this paper needs just a bit more work to address these concerns. The authors are encouraged to revise this work and submit it to another machine learning venue.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Somewhat controversial, but interesting new criterion for representation learning\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for reading our revisions, and for adjusting your score. You are correct that we focus on the body of the paper on supervised representation learning. This is representation learning as presented in Tishby et al. (2000).\\n\\nHowever, we explicitly state in the main body of the paper that we are placing no restrictions on the nature of X and Y. It is entirely possible for X and Y to be the same random variable. In this case, you may choose to have e(z_X|x) be the same distribution as b(z_X|x), resulting in the objective simplifying to min -<log d(x|z_X)>, where d(x|z_X) is a decoder distribution. This is just a stochastic autoencoder, of course. Another way to put it is that the general CEB objective simplifies to max I(X;Z_X), which could be optimized using a stochastic autoencoder, MINE, or CPC. You may also choose to use two different encoder distributions (i.e., two different architectures and/or sets of parameters parameterizing the forward and backward encoders), in which case the objective is the same as presented in the main body.\\n\\nThe immediate consequence of either of these choices is that the MNI point coincides with H(X). This is exactly the amount of information that MINE and the other unsupervised representation learning papers are targeting when they maximize I(X;Z_X). The only things that will result in I(X;Z_X) < H(X) are modeling and architecture choices that prohibit learning so much information. A sufficiently powerful model would learn I(X;Z_X) = H(X) with all such objectives, in other words.\\n\\nHaving said all of that, we also describe in the appendix a CEB objective for unsupervised learning that gets at what we think are some of the core issues with unsupervised representation learning. We have done fairly extensive experimentation with that approach and have found it to be a substantial improvement over, for example, beta VAE as explored in Alemi et al. (2018) when measured by training a separate classifier on the unsupervised representation. 
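To make the amortized variational CEB objective discussed throughout this thread concrete, the sketch below shows one way it could be written down for classification: a forward encoder e(z|x), a class-conditional backward encoder b(z|y) in place of a learned marginal, and a classifier c(y|z). It is illustrative only; the module names, latent size, and diagonal-Gaussian choices are assumptions made for exposition, not the implementation used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence


class VariationalCEB(nn.Module):
    """Illustrative amortized variational CEB objective for classification.

    The loss is an upper bound on I(X;Z|Y) minus a lower bound on I(Y;Z)
    (up to the constant H(Y)): E[ KL(e(z|x) || b(z|y)) - log c(y|z) ].
    """

    def __init__(self, x_dim, n_classes, z_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * z_dim))    # parameters of e(z|x)
        self.backward_means = nn.Embedding(n_classes, z_dim)       # means of b(z|y), unit scale
        self.classifier = nn.Linear(z_dim, n_classes)              # logits of c(y|z)

    def loss(self, x, y):
        mean, log_scale = self.encoder(x).chunk(2, dim=-1)
        e_zx = Normal(mean, log_scale.exp())
        b_zy = Normal(self.backward_means(y), torch.ones_like(mean))
        z = e_zx.rsample()                                          # reparameterized sample of Z
        residual = kl_divergence(e_zx, b_zy).sum(-1)                # bounds I(X;Z|Y) from above
        log_c = -F.cross_entropy(self.classifier(z), y, reduction='none')  # log c(y|z)
        return (residual - log_c).mean()
```

At inference time only e(z|x) and c(y|z) are needed, which matches the encoder-plus-classifier comparison described in these responses.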
We will present that approach in more detail in later work, but mention it here in order to reassure you that we are describing a general approach to representation learning. Indeed, your proposal at the end of your comment has a lot in common with the approach we describe in the appendix. Your masking suggestion corresponds to a particular choice of a noise function which serves to limit the amount of information the CEB representation will learn. We consider the particular choice of noise function to be a modeling problem, since only the practitioner will know what downstream tasks are important, and what noise is likely to destroy the information that is irrelevant for those tasks on a particular dataset (although naive choices are already quite effective in our experiments, at least when the downstream task is classification).\\n\\nIf we understand correctly, your point about the MINE experiments in comparison with VIB is that the reported classification accuracy is better on the task using the MINE model compared to what is reported in the original VIB paper. We agree that that is the case, but we don\\u2019t think it is a compelling reason to compare to MINE in our experiments for the following reasons:\\n - The VIB models were weaker than ours due to the lack of a learned marginal, and consequently are likely to have had a very loose upper bound on I(X;Z) which in our experience with VIB results in a substantial decrease in performance.\\n - The difference in performance between MINE and VIB on the task in question is small.\\n - The MINE models used to compare with VIB indicate confusion about the purpose of the I(X;Z) term in the VIB objective, since (as we noted) MINE was used to maximize it rather than minimize it. We think your comment about IB and MINE is attempting to refute this point, but we don\\u2019t see how the GAN-like nature of MINE changes the fact that MINE is a lower bound on I(X;Z), while IB requires that I(X;Z) be minimized. It is not possible to minimize a term by optimizing a lower bound on that term.\\n - As we previously stated (and as we make clear in our revisions), CEB is an objective function that can be optimized with any valid bounds on the mutual information terms. Our experiments are explicitly designed to focus on understanding differences than can be attributed to the objective functions themselves (MLE, amortized IB, amortized CEB), rather than an exploration of all the different ways that amortized CEB can be optimized.\\n\\nIn the interim, we have implemented CPC in order to compare its performance in CEB with mean field and pixelcnn decoders when training the bidirectional objective presented in the appendix. We agree that the strength of MINE and CPC is that they avoid the use of costly decoders like pixelcnn when maximizing mutual information with a high dimensional input like an image, and our future work will present extensive experimental comparisons with such approaches.\"}", "{\"title\": \"On revisions\", \"comment\": \"The revision motivations are much clearer and to-the-point, and the inclusions of Figures 2 and 4 are very helpful for understanding where this method lies w.r.t. VIB.\\n\\nMy primary concerns in my original review were the framing of this work as being \\\"finding good representations\\\", but it seems like this is really about finding good representations relevant to some other known variable (such as the labels). Since you appear to be making some broader claims w.r.t. 
representations, this opens you up to comparisons to methods that are arguably more general in that they operate primarily as unsupervised methods, including the self-supervision and data-augmentation-driven methods I mentioned. You are correct that they do not address MNI as directly as in your work, as each of these could encode information irrelevant for predicting some other known variable. However, as \\\"representation learning\\\" tools, they are far more powerful, as demonstrated in their ability to work on high-dimensional datasets.\\n\\nAs a study of how to learn MI models between two known variables, X and Y, this work has a lot of value, but I would make the setting a bit more clearer in the beginning.\\n\\nI do wish that there were more datasets here, as a study with CelebA attributes or CUB captions / attributes would be very convincing.\\n\\nYour concerns about MINE are duly noted, but MINE comes with the strong advantage of not needing to specify the posterior (for instance, a noise-injected nn will work). The additional network needed is just another encoder similar to the one used in your e(z_x | x). In the IB setting, this works precisely as with GANs, except in this case the encoder tries to (adversarially) make the joint distribution resemble the product of marginals. Besides, MINE is demonstrated to work better than your baseline, so shouldn't that be of note?\\n\\nAnyways, as the revision is a bit better, I'll increase my score to a 6, but I need to read more thoroughly the other reviewers' concerns before I move any further.\\n\\nOne thought on making this fully unsupervised, and possibly a stronger tool for learning representations: what about sampling X and Y from a random mask (i.e., a crop) and the corresponding negative mask on the image? Enforcing MNI would result in a latent representation which contains the information that is shared between the positive and negative masked areas, which is closer to these self-supervision methods I mentioned.\"}", "{\"title\": \"References\", \"comment\": \"Thank you for reminding us to cite InfoDropout. It is included in our latest revision. We agree that the Emergence of Invariance paper is related, although proper treatment will require some care, so we did not manage to fit it into the current set of revisions.\"}", "{\"title\": \"Revision Submitted\", \"comment\": \"We have finished revisions based on reviewer feedback. In addition to addressing reviewer concerns, we have added simple geometric analyses of both CEB and IB which we think substantially clarify our contribution. We have also formalized the Minimum Necessary Information criterion. We look forward to further feedback and discussion from reviewers.\"}", "{\"comment\": \"Before Alemi et al. (2017), there was Achille et al. https://arxiv.org/pdf/1611.01353 on the topic. Also relevant to this discussion is https://arxiv.org/pdf/1706.01350.\", \"title\": \"related work\"}", "{\"title\": \"Review was deleted due to a conflict of interest\", \"comment\": \"The deleted review was deleted because the reviewer discovered a conflict of interest. It's unfortunate that this only came to light after the beginning of the discussion period.\"}", "{\"title\": \"Response to question on the third point\", \"comment\": \"Thank you for your rapid reply, and for continuing to engage with us as we improve our presentation. 
We look forward to your feedback once we post our revisions.\\n\\nThe simplest way of parameterizing a distribution with a DNN involves using the reparameterization trick from Kingma and Welling (2013), which is most easily understood as including a sample from a standardized version of the desired distribution as an additional input to the network that gets incorporated at the last layer of the encoder to give a valid sample from the desired distribution. More recently, Figurnov et al. (2018) showed how to allow many more distributions to be reparameterized. Finally, it is also possible to use score function estimators to take unbiased but high-variance gradients through non-reparameterizable or discrete distributions.\\n\\nGiven these options, there are very few limitations on how to set up your desired Z_X distribution, although some options will be easier to train than others. In the VAE and VIB literature, using a multivariate normal distribution is quite common for the encoder, as the expressivity of the underlying DNN typically can adjust the input space sufficiently that the resulting encoder distributions can give useful latent samples for the desired task. Our experiments with these types of models typically start with fully covariant multivariate normal distributions, but we have also used multivariate beta distributions and other more esoteric choices for the encoder.\\n\\n(Please let us know if we have missed the thrust of your questions.)\\n\\n[1] Kingma, Diederik P., and Max Welling. \\\"Auto-encoding variational bayes.\\\" arXiv preprint arXiv:1312.6114 (2013).\\n[2] Figurnov, Michael, Shakir Mohamed, and Andriy Mnih. \\\"Implicit Reparameterization Gradients.\\\" arXiv preprint arXiv:1805.08498 (2018).\"}", "{\"title\": \"Response to Deleted Review\", \"comment\": \"We are disappointed that one of our reviews was deleted, as it also gave useful feedback on our work (in addition to giving our work a very positive rating, which we of course appreciated). We will not copy the original review here, but we would like to give our response regardless.\\n\\nThank you for your careful summary of our work. We have continued experimenting since submission, including training CEB models on CIFAR10, where we have achieved almost 95% test accuracy with wide resnets of varying depths and basic data augmentation. We would be happy to add these results in revision if reviewer consensus is that these are critical to the acceptance of the paper.\\n\\nWe would also like to point out (and we will clarify in revision) that any distributional family may be used for the encoder -- reparameterizable distributions are convenient, but it is also possible to use the score function trick to get a high-variance estimate of the gradient for distributions that have no explicit or implicit reparameterization. In general, a good choice for b(z|y) is a mixture of whatever the encoder distribution is. The core point is that these are modeling choices that need to be made by the practitioner, and they depend very much on the dataset (as you suggest). In this work, we chose normal distributions because they are easy to work with and will be the common choice for many problems, particularly when parameterized with neural networks, but that choice is incidental rather than fundamental.\\n\\nYou are correct that we did no explicit regularization on the deterministic model. 
We think it is likely that a regularized model would perform somewhat better at adversarial robustness, depending on the choice of regularizer. However, we don\\u2019t think it\\u2019s a bold claim to say that none of the standard regularization techniques provide meaningful robustness to the CW attack (which typically achieves 100% attack success rate on models that are not specifically designed to thwart it). It isn\\u2019t clear to us how much standard regularization techniques impact the other tasks on this particular dataset. Perhaps classification performance would have improved slightly? Our results are very much in-line with the Fashion MNIST leaderboard (https://github.com/zalandoresearch/fashion-mnist) for small networks -- in fact, 93% test accuracy or greater is only achieved by much larger or more sophisticated networks (VGG16, Capsule Networks, etc), apart from one simple convnet result -- so we think our results are likely to be representative. Finally, we would like to note that, even though there is no explicit regularization, all of the networks use the same size latent space, which means that the deterministic network also has a 4D bottleneck layer. We imagine that was helpful in minimizing overfitting for the deterministic network.\"}", "{\"title\": \"Concerns addressed above\", \"comment\": \"We hope that our discussion of MINE in our main response above sufficiently addresses your concerns here. Please let us know if that is not the case.\"}", "{\"title\": \"Different optimization choices\", \"comment\": \"This comment seems correct, but we agree with the reviewer that MINE and CPC could be used to learn a CEB model. All that is required is that those lower bounds on mutual information be used to optimize the I(Y;Z) term in the objective. CEB is an objective function, but any appropriate technique can be chosen to optimize any of its terms, such as variational approximations for the upper bound (I(X;Z|Y)) and CPC for the lower bound (I(Y;Z)). We don\\u2019t explore those here for the reasons we mention in our main response above.\"}", "{\"title\": \"Response, Part 2\", \"comment\": \"MINE\\n\\nThe MINE estimator introduced an interesting set of ideas. However, it has a number of problems. First, due to the expectation inside of the log in the proposed MINE objective function, the direction of the bound is lost -- it becomes a stochastic _estimate_ of the mutual information, possibly with very high variance. During training, the authors perform an adjustment to the objective in the name of reducing variance of the gradients, but this adjustment additionally corrects the objective to again be a lower bound. In other words, the proposed MINE objective only works as a lower bound to maximize at all thanks to the gradient correction term. Second, the objective relies on taking O(K^2) passes over each minibatch of K examples, which means that the batch size is severely restricted in practice. However, the tightness of the bound relies on K being large. This is problematic. 
Finally, MINE appears to be much more challenging to implement and to train than variational approaches, due to the minimax nature of the estimation and the strong sensitivity to the batch size.\\n\\nAs for comparisons between MINE and CEB, there are two important points to make.\\n\\nFirst, doing so would require using a very different architecture than the one we were able to use for all of the other models, where all three approaches we compare can be formulated in terms of an encoder and a classifier at inference time. MINE does not factor neatly into either of those pieces. This would break the apples-to-apples nature of our current set of experiments. In our mind, doing so would substantially reduce the clarity of the experimental results, while not clearly providing benefit to the core story. We hope you agree with that perspective, and that revising the paper to make it more explicit that non-variational estimators of the mutual information can be used to train CEB models will satisfy your concerns about related work.\\n\\nThe second point about experimental comparisons with MINE is that the comparison with VIB presented in the MINE paper is flawed. The authors chose to use MINE to optimize I(X;Z) in the VIB objective, but since MINE (as implemented) is a lower bound and VIB is minimizing I(X;Z) rather than maximizing it, that choice is incorrect. The end result can only have been a model that tried to do as well as possible at the classification problem while also maintaining as _much_ information as possible about X, which is the opposite treatment of I(X;Z) than what the Information Bottleneck principle proposes. Given this experimental error, it does not seem appropriate to cite MINE for its empirical comparison to VIB.\\n\\nWe do not mean for our comments here to be an attack on MINE, which we consider an important contribution, particularly for showing that there are more ways to estimate mutual information than had been previously considered. We will incorporate citations of both MINE and CPC in our revisions.\\n\\n\\nLength and Content\\n\\nWe agree that the paper can be improved by including explicit pseudocode for the training algorithm, and the writing can definitely be tightened. Our revisions will include such changes.\\n\\nIt is true that we wrote the derivation of CEB in an intentionally pedagogical manner -- many authors would skip many of the steps in the derivation. As an expert, you may have found that exposition tedious. We felt that showing this level of detail in the derivation and explaining why each choice needed to be made would help reduce the likelihood of other objectives being proposed that, for example, resulted in an incorrect bound, variational or otherwise.\"}", "{\"title\": \"Response, Part 1\", \"comment\": \"Thank you for your review, and for your thorough search of the literature on uses of the mutual information in objective functions. We were aware of the work you mentioned. The high order bit is that you are correct -- any estimator of the mutual information could be used in conjunction with the CEB objective, assuming the inequalities of the estimator correspond correctly to the direction the mutual information estimate needs to be optimized. We chose to explore variational bounds in this paper because they are simple, tractable, and well-understood, but we mention in Section 3 that other approaches are possible. 
We will be more specific about that in our revisions.\\n\\nThere is quite a bit of care that needs to be taken to understand these mutual information estimators, but both MINE and CPC may be used to optimize the lower bound on I(Z;Y). For reasons we describe in more detail below, we don\\u2019t think that doing so would substantially change the results in these particular experiments, but when applying CEB to tasks other than classification, both options are worth considering, in addition to the variational approach we present. In other words, the CEB objective involves minimization and maximization of mutual information terms, and any correctly bounded method to estimate mutual information could be used to optimize the components of the general CEB objective. Our focus on the variational approach in the paper is merely convenient, rather than essential.\\n\\nBelow we discuss MINE in more detail, but first we would like to clarify that all five of the papers you mention are proposing objectives that are not consistent with the MNI criterion if used by themselves, rather than as part of the optimization approach for an implementation of CEB. They are all focusing on maximizing estimations or lower bounds of either I(X;Z) or I(Y;Z). As we point out in Section 3, maximizing I(Y;Z) is necessary but not sufficient for achieving the Minimum Necessary Information, and maximizing I(X;Z) is fundamentally inconsistent with MNI. Additionally, it is worth pointing out that standard maximum likelihood estimation training also maximizes I(Y;Z) in deep networks, where Z can be taken as any intermediate layer of the network -- minimizing the cross entropy is the same as minimizing the H(Y|Z) term in the paper, which maximizes a lower bound on I(Y;Z) - H(Y), and H(Y) is constant with respect to the parameters. Thus, we expect that all five of these approaches (when used by themselves) still suffer from the excess of information in the representation that we hypothesize MLE to suffer from (whether the representation is explicit, as in VIB, Gomes et al., 2010, and Hjelm et al., 2018, or implicit, as in the other 3 papers you mention). In contrast, CEB is maximizing I(Y;Z) while also minimizing I(X;Z|Y), which forces optimization to get the trained model as close as it can to the MNI goal state of I(X;Z) = I(Y;Z) = I(X;Y).\\n\\nTo summarize the point we are making here, we do not consider unconstrained maximization of the mutual information between an observed variable and a representation variable to be a desirable property by itself, and we show that doing so is inconsistent with the MNI criterion. Such techniques can only be made compatible with MNI when used to optimize a complete MNI-compatible objective function, such as CEB.\"}", "{\"title\": \"Response, Part 3\", \"comment\": \"[Apologies for breaking up the response like this. OpenReview gives an unhelpful error message when we try to post the full response, so we are effectively having to perform binary search to determine what part of our text is breaking the site.]\\n\\nEase of optimization\\n\\nYour question about optimization in the discrete case is interesting. In IB, the proposed optimization algorithm is the Blahut-Arimoto (BA) algorithm, which converges for a given finite dataset, but adding new data requires retraining. In contrast, VIB is an amortized algorithm, which means that handling new data is trivial, but the results are approximate. 
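For reference, the Blahut-Arimoto-style iteration mentioned above can be sketched for a small, fully discrete joint distribution. This is a generic illustration of the classical IB self-consistent equations (Tishby et al., 2000) under the convention min I(X;T) - beta * I(T;Y) (note that this beta convention is the reciprocal of the one used in this thread); it is not code from the paper, and the amortized CEB and VIB models discussed here do not use it.

```python
import numpy as np

def ib_blahut_arimoto(p_xy, n_t, beta, iters=500, seed=0):
    """Classical discrete-IB updates for a strictly positive joint table p(x, y).

    Returns the soft assignment p(t|x) that locally minimizes I(X;T) - beta * I(T;Y).
    """
    rng = np.random.default_rng(seed)
    p_x = p_xy.sum(axis=1)
    p_y_given_x = p_xy / p_x[:, None]
    p_t_given_x = rng.random((p_xy.shape[0], n_t))
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
    for _ in range(iters):
        p_t = p_x @ p_t_given_x                               # p(t)
        p_ty = (p_t_given_x * p_x[:, None]).T @ p_y_given_x   # p(t, y)
        p_y_given_t = p_ty / p_t[:, None]
        log_ratio = np.log(p_y_given_x[:, None, :]) - np.log(p_y_given_t[None, :, :])
        kl = np.einsum('xy,xty->xt', p_y_given_x, log_ratio)  # KL[p(y|x) || p(y|t)] per (x, t)
        p_t_given_x = p_t[None, :] * np.exp(-beta * kl)
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)
    return p_t_given_x
```

The iteration only ever sees the fixed table p(x, y), which is the retraining-on-new-data limitation contrasted with the amortized approach in this thread.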
For CEB, we chose to focus on the amortized variational approach, but of course we could additionally follow the original IB work and derive similar self-consistent equations for the BA algorithm (for example, trivially by setting beta = 1/2 and using the derivation from IB). In that setting, you are correct that discrete X, Y, and Z would result in a larger search space with CEB than with IB. However, that search would only need to be performed once for CEB, whereas sweeping beta and doing cross validation can reasonably be done until the practitioner runs out of patience, and at the end of that process, the practitioner would still not know if they had learned a representation that covered the information in I(X;Y), unless they already knew that beta=1/2 would give them that result. They wouldn\\u2019t know that without this paper or a rediscovery of its results.\\n\\nWe don\\u2019t state this in this paper, but it is clear to us that most of the time, continuous representations are preferable to discrete representations, as they are much easier to work with (and much easier to train via gradient descent). Of course, there are a number of papers in the literature that propose learning discrete representations, or mixed continuous and discrete representations, including InfoGAN (Chen et al., 2016) and Hu et al., 2017. Often, the discrete representation is used to set a hard upper-bound on the amount of information being learned. Since the CEB objective is consistent with the MNI criterion, the learned representation does not need to have such structural constraints placed on it. The practitioner may choose a discrete representation during model specification, but they are not required to do so by the objective. If continuous representations are more convenient, they may use them.\\n\\nFinally, our amortized variational training is no slower than VIB -- we train all of the models in the paper for essentially the same number of steps using the same dynamic learning rate schedule to get to convergence. Thus, in the case of continuous representations and amortized inference, a single training of CEB is just as efficient as a single training of VIB, and additionally CEB has no objective function hyperparameters to tune.\\n\\n\\nVariational Tightness\\n\\nWe are still working out theory to determine how relatively tight the variational approximations are between VIB and variational CEB. We will add any such results in revision if we have them in time. However, informally, when training continuous distributions, it is often easier to train conditional distributions than marginal distributions. Empirically this appears to be the case -- the marginals we train for VIB are mixtures of 500 gaussians, whereas the CEB model\\u2019s backward encoder is simply 10 multivariate normal distributions, one for each class, yet the CEB model matches or outperforms the VIB models on all of the tasks. Subsequent experiments with CEB using 10 mixtures of 10 multivariate normals each have proven to be even more effective while still being a less expressive distribution than the 500 component mixture.\\n\\n\\nExperimental Clarity\\n\\nWe will revise our description of the experiments and add further detail to the appendices that we left out, including more complete descriptions of the modeling and architectural choices, as well as pseudocode for the training algorithm.\\n\\n\\n\\n[1] Chen, Xi, et al. 
\\\"Infogan: Interpretable representation learning by information maximizing generative adversarial nets.\\\" Advances in neural information processing systems. 2016.\\n[2] Strouse, D. J., and David J. Schwab. \\\"The deterministic information bottleneck.\\\" Neural computation 29.6 (2017): 1611-1630.\\n[3] Tishby, Naftali, and Noga Zaslavsky. \\\"Deep learning and the information bottleneck principle.\\\" Information Theory Workshop (ITW), 2015 IEEE. IEEE, 2015.\\n[4] Tishby, Naftali, Fernando C. Pereira, and William Bialek. \\\"The information bottleneck method.\\\" arXiv preprint physics/0004057 (2000).\"}", "{\"title\": \"Reponse, Part 2\", \"comment\": \"Triviality\\n\\nIt is true that beta I(X;Z) - I(Y;Z) is two terms of opposite sign, but it does not immediately follow that you will find the cancelation of H(Z) that we show for CEB unless you start from the observation that you can design a representation learning objective that directly optimizes for covering the information in I(X;Y). To put it another way, of course we can break up IB as follows:\\n\\nbeta I(X;Z) - I(Y;Z) = beta / 2 * (H(Z) - H(Z|X) + H(X) - H(X|Z)) - 1 / 2 * (H(Z) - H(Z|Y) + H(Y) - H(Y|Z))\\n\\nThere are a number of other obvious ways to split the IB objective as well. But without the key insight about the conditional information term in CEB, and without the guidance of a criterion like the MNI, you have no principle with which to decide to keep or remove any of those entropies or conditional entropies, and since there is a beta associated with one of the H(Z) terms and not the other, you might conclude that you cannot cancel them. As written (in the form that uses both expansions of the mutual information for both terms), the only assignment of beta that allows the H(Z) terms to cancel is beta=1, so you would need to guess that for the I(X;Z) term, you should not use both expansions of I(X;Z), you should only use the H(Z) - H(Z|X) expansion, and then you would have to guess that beta = 1/2 is not just an arbitrary choice of beta, and thus that the cancelation of H(Z) that value permits is worth pursuing. This shows that the CEB objective (in our opinion) is not an obvious consequence of IB, even though once you know both objectives, the relationship between the two is obvious. We hope that the reviewers will keep this perspective in mind while evaluating this work.\\n\\nA similar complaint might be made by observing that b(z|y) is just as valid a variational approximation to p(z) as m(z) is, and so it\\u2019s reasonable to learn that as an alternative to VIB. However, you would need to realize that using a \\u201cmarginal\\u201d that depends on y does not give a variational upper bound on I(X;Z). The fact that doing so is part of a bound on I(X;Z|Y) is obvious in retrospect, but we are unaware of anyone pointing this out previously. So again, it is important to not view CEB as merely a particular parameterization of IB, or as VIB with a different variational approximation. These trivializations of the work ignore the important reasoning that allowed us to arrive at such a simple solution to a 19 year old problem.\"}", "{\"title\": \"Response, Part 1\", \"comment\": \"We appreciate that you read our paper closely, and address your concerns in detail below.\\n\\n\\nSurprise\\n\\nThe surprise of the result relating CEB to IB so simply comes from two things. 
First, the fact that there is a single value of beta for IB and VIB that achieves the MNI-optimal information, so long as the model, optimizer, etc, is capable of capturing that amount of information. This is surprising because the analysis in Alemi et al. (2018) naively applied to the VIB case would assume that sweeping beta would be necessary to find the optimal information even if you knew a priori what that amount of information was. We discuss this point in the appendix to some extent, but will clarify that discussion in our revisions. Note that a similar and more directly-applicable Pareto-optimal frontier is described in Strauss and Schwab (2017), and that work also does not point out that beta = 1/2 would result in a learned representation where I(X;Z) = I(Y;Z) = I(X;Y). \\n\\nSecond is a point that we decided not to make in the version we submitted, but that we can add in revision. Tishby et al. say two things in many of the IB papers quite clearly.\\n\\nIn Tishby et al. (2015), the authors state:\\n\\u201cThe information bottleneck (IB) method was introduced as an information theoretic principle for extracting relevant information that an input random variable X \\u2208 X contains about an output random variable Y \\u2208 Y. Given their joint distribution p(X, Y), the relevant information is defined as the mutual information I(X ; Y), where we assume statistical dependence between X and Y. In this case, Y implicitly determines the relevant and irrelevant features in X. An optimal representation of X would capture the relevant features, and compress X by dismissing the irrelevant parts which do not contribute to the prediction of Y.\\u201d\\n\\nBut in Tishby et al. (2000), the authors also state:\\n\\u201c...there is a tradeoff between compressing the representation and preserving meaningful information, and there is no single right solution for the tradeoff.\\u201d\\n\\nIn other words, Tishby et al. recognize that the information measured by I(X;Y) corresponds to the optimal representation, but they do not quite know how to find it. Instead, the IB approach relies on sweeping beta and cross validation.\\n\\nThus, the surprise we are describing isn\\u2019t the trivial arithmetic relating CEB to IB once you have seen both objectives. Instead, the surprise is that you _can_ learn a representation with the optimal amount of information defined by the observed data, I(X;Y) without having to know I(X;Y) ahead of time. This shows that sweeping beta is unnecessary if you believe (as Tishby appears to believe, and as we certainly believe) that I(X;Y) is the correct amount of information to retain in your representation. The information bottleneck was presented 19 years ago, and so far as we know, no-one has previously proposed a way to learn a representation that doesn\\u2019t require you to guess what I(X;Y) might be.\\n\\n\\nWhy MNI?\\n\\nIn this work we do not attempt to give a formal proof that CEB representations learn the optimal information about the observed data (and certainly the variational form of the objective will prevent that from happening in general cases). However, the MNI is motivated by the following simple observations: If I(X;Z) < I(X;Y), then we have thrown out relevant information in X for predicting Y. If I(X;Z) > I(X;Y), then we are including information in X that is not useful for predicting Y. 
Thus targeting I(X;Z) = I(X;Y) is the \\\"correct\\\" amount of information, which is one of the equalities required in order to satisfy the MNI criterion.\\n\\n\\nRelation between general forms of CEB and IB\\n\\nYes, CEB is a special case of IB with beta = 1/2. However, we are the first to show that the IB Lagrangian with beta = 1/2 targets the MNI. This is a significant result that was not previously highlighted in any of the literature on IB.\"}", "{\"title\": \"Accept the answer to the first two comments (though still lack rigor in the general approach) and a question regarding the third\", \"comment\": \"The authors responses to the first two comments are satisfactory. These responses clarified much that was hard to understand from the actual text. This suggest a revision on the explanations in the manuscript is due. The idea as elaborated in the response is appealing, but the text was too broad and fancy to pin down this concrete concept (that preserving I(X;Y) information in Z_X about X and Y is a reasonable thing to do and why).\\n\\nOn the flip side, it would be *very* nice to see some proofs justifying that performance (in an appropriate sense) degrades when I(X; Z) < I(X; Y) or (X; Z) > I(X; Y) happens. This would *significantly* strengthen the work.\\n\\nBy the way, there is no need to call Eqn. (2) a 'well-known equality' and to cite Thomas & Cover. It is just I(X;Z|Y)=I(X,Y;Z)-I(Y;Z)=I(X;Z)-I(Y;Z), where the first step in mutual information chain rule, and the second step is the Markob chain (essentially that conditional distribution satisfies P_{Z|X,Y=y}=P_{Z|X} for all y, and some algebra). Just saying that it follows by the above would suffice. Also, the notation Z<-X<->Y is redundant and confusing. It is always the case that if Z<-X<-Y forms a Markov chain in that order then Z->X->Y is another valid ordered chain. Consequently, we can just write Z<->X<->Y, or even better: Z-X-Y. \\n\\nAll in all, I look forward to the revision, hoping it will reflect the clarity of the response. \\n\\nRegarding the last point, if Z_X is implemented by a DNN, when the parameters of the system are fixed, what makes it (or the encoder e(z_X|x) for that matter) stochastic? Can the authors specify their assumptions on Z_X (or the family of models) that preclude it from being deterministic, for fixed parameters? Where does this construction endow Z_X with a mechanism for shedding information about X -- is there quantization? noise?\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for taking the time to write a detailed review. We will address your concerns in turn.\\n\\nNegativity of I(X;Y;Z)\\n\\nYou are correct that I(X;Y;Z) can be negative in general. 
However, it cannot be negative in the representation learning setting that we are describing here -- the Markov chain Z <- X -> Y does not permit it.\\n\\nThe triplet information, I(X;Y;Z) may be defined as follows:\\n\\nI(X;Y;Z) = I(X;Z) - I(X;Z|Y)\\n\\nBut we already know that I(X;Z|Y) = I(X;Z) - I(Y;Z) (Equation 2) due to our Markov chain Z <- X <-> Y, so in our case we have:\\n\\nI(X;Y;Z) = I(X;Z) - I(X;Z) + I(Y;Z) = I(Y;Z)\\n\\nwhich we also know is non-negative, completing our proof.\\n\\n\\nMinimum Necessary Information\\n\\nOur upcoming revisions clarify the relationship between MNI and minimal sufficient statistics, and update the discussion of the MNI with the following:\\n\\nWhy MNI?\\n\\nIn this work we do not attempt to give a formal proof that CEB representations learn the optimal information about the observed data (and certainly the variational form of the objective will prevent that from happening in general cases). However, the MNI is motivated by the following simple observations: If I(X; Z) < I(X; Y), then we have thrown out relevant information in X for predicting Y. If I(X; Z) > I(X; Y), then we are including information in X that is not useful for predicting Y. Thus targeting I(X; Z) = I(X; Y) is the \\\"correct\\\" amount of information, which is one of the equalities required in order to satisfy the MNI criterion.\\n\\n\\nRepresentations and Finiteness of Mutual Information\\n\\nZ_X is not a deterministic function of X in either IB or CEB, it is a stochastic representation of X given by an encoder e(z_X|x). Using a stochastic encoder means that the stated concerns of infinite entropy are not applicable in either objective -- the conditions for infinite mutual information given in Amjad and Geiger (2018) do not apply. In practice (using continuous representations), we did not encounter mutual information terms that diverged to infinity, although certainly it is possible to make modeling and data choices that make it more or less likely that there will be numerical instabilities. This is not a flaw specific to CEB or VIB, however, and we found numerical instability to be almost non-existent across a wide variety of modeling and architectural choices for both variational objectives.\\n\\n\\n[1] R. A. Amjad and B. C. Geiger 'Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle', 2018\"}", "{\"title\": \"General Response\", \"comment\": \"We would like to thank the reviewers for their efforts and consideration in reviewing our work. Most of the concerns about the work lie with how easy it is to comprehend the motivation of the minimum necessary information criterion and the derivation of the CEB objective, which we will clarify in our revisions. Individual concerns are addressed in the corresponding threads. Here, we would like to make two high-level points, one about the core purpose of the paper, and the other about the empirical results.\\n\\nAll three of the remaining reviews (since one was deleted) seem to have not understood that we are providing a way to learn a representation Z of observed joint data, X and Y, that maintains only the amount of information I(X;Y) about both X and Y, and does so without any hyperparameter tuning. (Only the deleted review that scored the work an 8 gave a summarization that indicated that the reviewer had understood this point.)
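As a purely numerical aside on the 'Negativity of I(X;Y;Z)' derivation in the response above: the identity I(X;Z|Y) = I(X;Z) - I(Y;Z) under the Markov chain Z <- X <-> Y, and hence I(X;Y;Z) = I(Y;Z) >= 0, can be checked directly on a small discrete example. The table sizes and distributions below are arbitrary and purely illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, nz = 4, 3, 5
p_xy = rng.random((nx, ny))
p_xy /= p_xy.sum()                                        # arbitrary joint p(x, y)
p_z_given_x = rng.random((nx, nz))
p_z_given_x /= p_z_given_x.sum(axis=1, keepdims=True)     # stochastic encoder p(z|x)
p_xyz = p_xy[:, :, None] * p_z_given_x[:, None, :]        # Markov chain Z <- X <-> Y

def pairwise_mi(p):
    """I(A;B) for a 2-D joint table p(a, b) with positive entries."""
    pa, pb = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    return float(np.sum(p * np.log(p / (pa * pb))))

i_xz = pairwise_mi(p_xyz.sum(axis=1))                     # I(X;Z)
i_yz = pairwise_mi(p_xyz.sum(axis=0))                     # I(Y;Z)
p_y = p_xyz.sum(axis=(0, 2))
i_xz_given_y = sum(p_y[j] * pairwise_mi(p_xyz[:, j, :] / p_y[j]) for j in range(ny))
assert np.isclose(i_xz_given_y, i_xz - i_yz)              # Equation (2)
assert i_xz - i_xz_given_y >= -1e-12                      # I(X;Y;Z) = I(Y;Z) is non-negative here
```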
The premise of MNI, and what makes it more than just \\u201ca convoluted and long explanation of mutual information\\u201d is that it is defining the amount of information the optimal representation, Z, should have about each of the two observed variables individually and collectively. We are somewhat surprised that this was unclear, as that goal is given as the first formula of Section 3: I(X;Y;Z) = I(X;Z) = I(Y;Z) = I(X;Y), and explained diagrammatically in Figure 1 and the text that references Figure 1. This differs from the statement of the Information Bottleneck principle (Tishby et al., 2000), although Tishby et al. do acknowledge in subsequent work that I(X;Y) is the optimal amount of information a learned representation should have (Tishby et al., 2015). Even with their recent admission that I(X;Y) is the optimal amount of information, they do not know how to set beta to find that amount of information. Thus, our surprising result is that this amount of information, I(X;Y), can be directly targeted for a given representation Z without knowing in advance the value of I(X;Y), and without any hyperparameter tuning.\\n\\nLittle review attention has been given to the empirical results of the work (one review finds the experiments difficult to understand, two reviews request additional experiments but don\\u2019t mention that the experiments we did are problematic or unconvincing, the fourth review does not comment on the experiments at all). Our empirical results show a strong advantage to using the CEB objective in an apples-to-apples comparison on four major outstanding issues in the field of machine learning, all of which can be characterized as problems of generalization.\\n\\nIn particular, we consider our results with adversarial robustness and detection to be compelling -- the whitebox attacks we experimented with amount to an essentially unlimited whitebox adversary, and we are unaware of any work showing real robustness to even the basic version of the adversary, much less the adversary that additionally directly targets the detection mechanism. Recent work (Carlini et al., 2018) urges researchers in the space to try to attack their own mechanism -- our experiments do exactly that, and the remarkable result is that the model is even more robust to those attacks (while less able to detect them). Certainly we have not encountered any work that confers adversarial robustness merely by changing the objective function used to train an otherwise identical inference network on an unmodified dataset. That we are able to achieve those results and the others described in the paper while maintaining performance parity on the core classification task underscores why we expect this paper to have broad interest. A simple change to the objective function used to train the model (no harder to implement than a VAE or VIB model), with no performance lost relative to standard maximum likelihood techniques, while gaining much in terms of generalization performance, all without additional hyperparameters to tune seems to us like something that many researchers and practitioners will want to explore. Given that our objective is motivated from a representation learning perspective, it is difficult to imagine a more appropriate venue for the work than ICLR.\\n\\nWe hope that the reviewers will take the time to consider these strong empirical results while reviewing our revisions.\\n\\n[1] Athalye, Anish, Nicholas Carlini, and David Wagner. 
\\\"Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples.\\\" ICML (2018).\\n[2] Tishby, Naftali, Fernando C. Pereira, and William Bialek. \\\"The information bottleneck method.\\\" arXiv preprint physics/0004057 (2000).\\n[3] Tishby, Naftali, and Noga Zaslavsky. \\\"Deep learning and the information bottleneck principle.\\\" Information Theory Workshop (ITW), 2015 IEEE. IEEE, 2015.\"}", "{\"title\": \"A new information bottleneck method is proposed, but major reservations arise\", \"review\": \"[UPDATE]\\n\\nI find the revised version of the paper much clearer and streamlined than the originally submitted one, and am mostly content with the authors reply to my comments. However, I still think the the work would highly benefit from a non-heuristic justification of its approach and some theoretic guarantees on the performance of the proposed framework (especially, in which regimes it is beneficial and when it is not). Also, I still find the presentation of experimental results too convoluted to give a clear and comprehensive picture of how this methods compares to the competition, when is it better, when is it worse, do the observations/claim generalize to other task, and which are the right competing methods to be considering. I think the paper can still be improved on this aspect as well. \\n\\nAs I find the idea (once it was clarified) generally interesting, I will raise my score to 6.\\n\\n------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\n\\nThe paper proposes an objective function for learning representations, termed the conditional entropy bottleneck (CEB). Variational bounds on the objective function are derived and used to train classifiers according to the CEB and compare the results to those attained by competing methods. Robustness and adversarial examples detection of CEB are emphasized.\", \"my_major_comments_are_as_follows\": \"1) The authors base their 'information-theoretic' reasoning on the set-theoretic structure of Shannon\\u2019s information measures. It is noteworthy that when dealing with more than 2 random variables, e.g., when going from the twofold I(X;Y) to the threefold I(X;Y;Z), this theory has major issues. In particular, there are simple (and natural) examples for which I(X;Y;Z) is negative. The paper presents an information-theoretic heuristic/intuitive explanation for their CEB construction based on this framework. No proofs backing up any of the claims of performance/robustness in the paper are given. Unfortunately, with such counter-intuitive issues of the underlying theory, a heurisitc explanation that motivates the proposed construction is not convincing. Simulations are presented to justify the construction but whether the claimed properties hold for a wide variety of setups remain unclear.\\n\\n2) Appendix A is referred to early on for explaining the minimal necessary information (MNI), but it is very unclear. What is the claim of this Appendix? Is there a claim? It's just seems like a convoluted and long explanation of mutual information. Even more so, this explanation is inaccurate. For instance, the authors refer to the mutual information as a 'minimal sufficient statistic' but it is not. For a pair of random variables (X,Y), a sufficient statistic, say, for X given Y is a function f of Y such X-f(Y)-Y forms a Markov chain. Specifically, f(Y) is another random variable. 
The mutual information I(X;Y) is just a number. I have multiple guesses on what the authors' meaning could be here, but was unable to figure it out from the text. One option, which is a pretty standard way to define a sufficient statistic through mutual information, is as a function f such that I(X;Y|f(Y))=0. Such an f is a sufficient statistic since the zero mutual information term is equivalent to the Markov chain X-f(Y)-Y from before. Is that what the authors mean..?\\n\\n3) The Z_X variable introduced in Section 3 is inspired by the IB framework (footnote 2). If I understand correctly, this means that in many applications, Z_X is specified by a classifier of X wrt the label Y. My question is whether for a fixed set of system parameters, Z_X is a deterministic function of X? If this Z_X plays the role of the sufficient statistics I've referred to in my previous comment, then it should be just a function of X. \\n\\nHowever, if Z_X=f(X) for a deterministic function f, then the CEB from Equation (3) is vacuous for many interesting cases of (X,Y). For instance, if X is a continuous random variable and Z_X=f(X) is continuous as well, then \\nI(X;Z_X|Y)=h(Z_X|Y)-h(Z_X|X,Y)\\nwhere h is the differential entropy and the subtracted term equals -\\\\infty by definition (see Section 8.3 of Cover & Thomas, 2006). Consequently, the mutual information and the CEB objective are infinite. If Z_X=f(X) is a mixed random variable (e.g., can be obtained from a ReLU neural network), then the same happens. Other cases of interest, such as discrete X and f being an injective mapping of the set of X values, are also problematic. For details of such problems associated with IB type terms see:\\n\\n[1] R. A. Amjad and B. C. Geiger 'Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle', 2018 (https://arxiv.org/abs/1802.09766).\\n\\nCan the authors account for that?\\n\\n4) The other two reviews addressed the missing accounts for past literature. I agree on this point and will keep track of the authors' responses. I will not comment on that again. \\n\\nBeyond these specific issues, the text is very wordy and confusing at times. If some mathematical justification/modeling was employed, the proposed framework might have been easier to accept. The long heuristic explanations employed at the moment do not suffice for this reviewer. Unless the authors are able to provide clarification of all the above points and properly place their work in relation to past literature, I cannot recommend acceptance.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"IB with a specific choice of a parameter, but different from VIB\", \"review\": \"This paper wants to discuss a new objective function, which the authors dub \\\"Conditional Entropy Bottleneck\\\" (CEB), motivated by learning better latent representations. However, as far as I can tell, the objective function already exists in the one-parameter family of Information Bottleneck (IB) of Tishby, Pereira, and Bialek. The author seems to realize this in Appendix B, but calls it \\\"a somewhat surprising theoretical result\\\". However, if we express IB as max I(Z;Y) - beta I(Z;X), see (19), and then flip signs and take the max to the min, we get min beta I(Z;X) - I(Z;Y). Taking beta = 1/2, multiplying through by 2, and writing I(X;Z) - I(Y;Z) = I(X;Z|Y), we find CEB.
Unfortunately, I fail to see how this is surprising or different.\n\nA difference only arises when using a variational approximation to IB. The authors compare to the Variational Information Bottleneck (VIB) of Alemi, Fischer, Dillon, and Murphy (arXiv:1612.00410), which requires a classifier, an encoder, and a marginal posterior over the latents. Here, instead of the marginal posterior, they learn a backwards encoder from labels to latents. This difference arises because the IB objective has two terms of opposite sign, and we can group them into positive definite terms in different ways, creating different bounds.\n\nPerhaps this grouping leads to a better variational bound? If so, that's only a point about the variational method employed by Alemi et al., and not a separate objective. As this seems to be the main contribution of the paper, this point needs to be explained more carefully and in more detail. For instance, it seems worth pointing out, in the discrete case, that the marginal posterior has |Z| values to estimate, and the backwards encoder has |Z| x |Y| -- suggesting this is possibly a much harder learning problem. If so, there should be a compelling benefit for using this approximation and not the other one.\n\nIn summary, the authors are not really clear about what they are doing and how it relates to IB. Furthermore, the need for this specific choice in IB parameter space is not made clear, nor do the experimental results demonstrate a compelling need. (The experimental results are also not at all clearly presented or explained.) Therefore, I don't think this paper satisfies the quality, clarity, originality, or significance criteria for ICLR.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"MINE and mutual information minimization.\", \"comment\": \"MINE in the min max framework (information bottleneck) works exactly like GANs in that a separate discriminator (statistics network in the paper) maximizes the lower bound to the expected log ratio of the joint over the product of marginals. The encoder is optimized to minimize this estimate, which is the same as the GAN generator. I can see the advantage to VIB-type (min min) methods, as adversarial objectives have many optimization difficulties, but MINE has the advantage of not needing explicit densities.\n\nI'm not sure what you mean by saying that MINE fails: MINE is demonstrated to outperform VIB by a good margin in an experiment very similar to one presented in this submission. (See section 5.3 of MINE). \n\nMinibatch MINE has biased gradients, but the authors introduce a learned baseline to address this. Please see section 3.2 in the MINE paper on bias correction. \n\nAs many recent works have found success in MINE-like techniques and MINE has been shown to outperform VIB in a very similar setting, the submission authors would need a much more compelling argument not to include some comparison to MINE.\"}", "{\"title\": \"MINE cannot be used to minimize mutual information\", \"comment\": \"MINE is a lower bound to the mutual information (and that fails too in the minibatch setting), and thus cannot be used to minimize mutual information. 
The approach taken by this paper yields a proper upper bound even in the minibatch setting.\"}", "{\"title\": \"Interesting approach with decent results, but far lacking in related works on mutual information\", \"review\": \"Update: see comments \\\"On revisions\\\" below.\\n\\nThis paper essentially introduces a label-dependent regularization to the VIB framework, matching the encoder distribution of one computed from labels. The authors show good performance in generalization, such that their approach is relatively robust in a number of tasks, such as adversarial defense.\\n\\nThe idea I think is generally good, but there are several problems with this work.\\n\\nFirst, there has been recent advances in mutual information estimation, first found in [1]. This is an important departure from the usual variational approximations used in VIB. You need to compare to this baseline, as it was shown that it outperforms VIB in a similar classification task as presented in your work.\\n\\nSecond, far too much space is used to lay out some fairly basic formalism with respect to mutual information, conditional entropy, etc. It would be nice, for example, to have an algorithm to make the learning objective more clear. Overall, I don't feel the content justifies the length.\\n\\nThird, I have some concerns about the significance of this work. They introduce essentially a label-dependent \\u201cbackwards encoder\\u201d to provide samples for the KL term normally found in VIB. The justification is that we need the bottleneck term to improve generalization and the backwards encoder term is supposed to keep the representation relevant to labels. One could have used an approach like MINE, doing min information for the bottleneck and max info for the labels. In addition, much work has been done on learning representations that generalize using mutual information (maximizing instead of minimizing) [2, 3, 4, 5] along with some sort of term to improve \\\"relevance\\\", and this work seems to ignore / not be aware of this work.\\n\\nOverall I could see some potential in this paper being published, as I think the approach is sensible, but it's not presented in the proper context of past work.\\n\\n[1] Belghazi, I., Baratin, A., Rajeswar, S., Courville, A., Bengio, Y., & Hjelm, R. D. (2018). MINE: mutual information neural estimation. International Conference for Machine Learning, 2018.\\n[2] Gomes, R., Krause, A., and Perona, P. Discriminative clustering by regularized information maximization. In NIPS, 2010.\\n[3] Hu, W., Miyato, T., Tokui, S., Matsumoto, E., and Sugiyama, M. Learning discrete representations via information maximizing self-augmented training. In ICML, 2017.\\n[4] Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Trischler, A., & Bengio, Y. (2018). Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670.\\n[5] Oord, Aaron van den, Yazhe Li, and Oriol Vinyals. \\\"Representation learning with contrastive predictive coding.\\\" arXiv preprint arXiv:1807.03748 (2018).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
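For readers following the review above, the claimed identity between CEB and a particular IB point can be written out step by step. The sketch below is only a verification of the reviewer's algebra under the usual bottleneck assumption that the representation Z is computed from X alone (so I(Y;Z|X) = 0); it is not material from the submission itself.

```latex
% IB in the form used in the review, rewritten as a minimisation:
\max_{Z}\; I(Z;Y) - \beta\, I(Z;X)
  \;\Longleftrightarrow\;
\min_{Z}\; \beta\, I(Z;X) - I(Z;Y)

% Set \beta = 1/2 and multiply by 2 (which leaves the minimiser unchanged):
\min_{Z}\; I(Z;X) - 2\, I(Z;Y)
  \;=\;
\min_{Z}\; \bigl[\, I(X;Z) - I(Y;Z) \,\bigr] - I(Y;Z)

% Chain rule: I(X,Y;Z) = I(Y;Z) + I(X;Z \mid Y) = I(X;Z) + I(Y;Z \mid X).
% Under the bottleneck Markov chain (Z computed from X alone), I(Y;Z \mid X) = 0, so
I(X;Z) - I(Y;Z) \;=\; I(X;Z \mid Y)
  \quad\Longrightarrow\quad
\min_{Z}\; I(X;Z \mid Y) - I(Y;Z),

% which is the conditional-entropy-bottleneck objective the review refers to.
```

Whether the two variational relaxations (marginal posterior versus backwards encoder) then behave differently in practice is the separate question the review raises.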
BJfOXnActQ
Learning to Learn with Conditional Class Dependencies
[ "Xiang Jiang", "Mohammad Havaei", "Farshid Varno", "Gabriel Chartrand", "Nicolas Chapados", "Stan Matwin" ]
Neural networks can learn to extract statistical properties from data, but they seldom make use of structured information from the label space to help representation learning. Although some label structure can implicitly be obtained when training on huge amounts of data, in a few-shot learning context where little data is available, making explicit use of the label structure can inform the model to reshape the representation space to reflect a global sense of class dependencies. We propose a meta-learning framework, Conditional class-Aware Meta-Learning (CAML), that conditionally transforms feature representations based on a metric space that is trained to capture inter-class dependencies. This enables a conditional modulation of the feature representations of the base-learner to impose regularities informed by the label space. Experiments show that the conditional transformation in CAML leads to more disentangled representations and achieves competitive results on the miniImageNet benchmark.
[ "meta-learning", "learning to learn", "few-shot learning" ]
https://openreview.net/pdf?id=BJfOXnActQ
https://openreview.net/forum?id=BJfOXnActQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Sye7irOggE", "r1xVbbV9RQ", "H1x2TeE90m", "B1gPog4c0Q", "Hyg7wl45AQ", "HJxo10uFh7", "ByxxsLKVn7", "H1lyuO0foX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544746394839, 1543287035620, 1543286980160, 1543286943095, 1543286874652, 1541144035481, 1540818584341, 1539659879244 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1374/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1374/Authors" ], [ "ICLR.cc/2019/Conference/Paper1374/Authors" ], [ "ICLR.cc/2019/Conference/Paper1374/Authors" ], [ "ICLR.cc/2019/Conference/Paper1374/Authors" ], [ "ICLR.cc/2019/Conference/Paper1374/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1374/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1374/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers think that incorporating class conditional dependencies into the metric space of a few-shot learner is a sufficiently good idea to merit acceptance. The performance isn\\u2019t necessarily better than the state-of-the-art approaches like LEO, but it is nonetheless competitive. One reviewer suggests incorporating a pre-training strategy to strengthen your results. In terms of experimental details, one reviewer pointed out that the embedding network architecture is quite a bit more powerful than the base learner and would like some additional justification for this. They would also like more detail on the computing the MAML gradients in the context of this method. Beyond this, please ensure that you have incorporated all of the clarifications that were required during the discussion phase.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good proposal for incorporating class dependencies in few-shot learning.\"}", "{\"title\": \"Response to reviewer 2\", \"comment\": \"Thank you for the very detailed and constructive comments.\\n\\n1. The motivation\\n1.1 How the metric space is trained?\\nThe metric space is trained in a pre-training step and it is not updated while training the base-learner. The embeddings obtained from the metric space is different from other popular pre-training techniques, e.g. in LEO the embeddings are pre-trained as a supervised classification task. The pre-trained metric space provides a representation for class dependency as it is trained to provide good separation/clustering from randomly sampled classes. This is in contrast with supervised pre-training which aims to provide discriminative feature representations.\\n\\n1.2 \\u201cWill it introduce more information w.r.t. only using embedding space to do the classification?\\u201d\", \"the_proposed_caml_makes_use_of_two_views_of_the_data\": \"a global view through the metric space and a local view via the base classifier. The global view of the data, i.e., the embeddings, may not capture all the necessary information for some classification tasks, such as classifying different breeds of dogs which may have similar embeddings. In such cases, the local view of data from the pixel space could help compensate for the lack of information in the global view.\\n\\n3.5 \\u201cHow about build MAML directly on the embedding space?\\u201d\\nWe are aware that meta-learning on the embedding space is a powerful idea, as shown in LEO. 
We have added an experiment that directly trains MAML on the learned 512 dimensional metric space using three fully-connected layers. We were only able to obtain 47.43% on 1-shot tasks and 57.33% on 5-shot tasks. This suggests that applying conditional transformations on the metric space is more effective than directly using the metric space as input.\n\n2. Novelty.\nThe proposed CAML does have a close relation to TADAM. However, they have three main differences.\n(i) Different goals: TADAM uses conditional transformation for metric scaling, while CAML uses it for developing better gradient-based representations.\n(ii) Task-level vs. example-level representation. TADAM uses task-level representation to modulate the inference from a task perspective, while CAML uses example-level representation to modulate the representation at the content level.\n(iii) The conditional transformation in TADAM is homogeneous in the sense that the conditional information is retrieved from the metric space and also applied to the metric space. However, the proposed CAML uses conditional transformation under the heterogeneous setup where the conditional information is retrieved from the embedding space but applied to a different base learner.\n\n3. Method details\n3.1 “Since CBN is example induced, will it prone to overfitting?”\nThe metric space (ResNet-12) is pre-trained and not updated while training the base learner. The gradients of the meta learner only affect the base learner and conditional transformation. We choose 30 convolutional channels out of computational considerations, and the skip connection has a bigger impact than the number of conv channels. Using 64 conv channels without the skip connection, we obtain 54.63% on 1-shot and 70.38% on 5-shot.\n\n3.2 “Is this skip connection very important for this particular model?”\nYes, the skip connection is very important. The use of skip connections is to improve the gradient flow. MAML unfolds the inner loop into one large graph which may cause gradient issues. Without skip connections, our model obtains 56.07% on 1-shot tasks and 71.26% on 5-shot tasks.\n\n3.3 “Will the MAML objective influence the embedding network?”\nWe would like to clarify that the metric is pre-trained and not updated in MAML updates. We empirically observe that training the metric space and meta-learner end-to-end is overly complex and tends to over-fit.\n\n3.4 “how many epochs does MAML need?”\nIt takes 50,000 episodes to train CAML, and another 30,000 episodes to pre-train the metric space.\"}", "{\"title\": \"Response to reviewer 1\", \"comment\": \"Thank you for your valuable review.\n\n1. Clarification on the metric learning step\nThank you for the suggestion. The metric is indeed learned in a K-means-flavored way and we have updated our manuscript to reflect that $\\phi$ is learned.\n\n2. How confidence intervals are constructed?\nWe sample 600 evaluation tasks from the meta-test classes and report the confidence intervals across all the evaluation tasks. We have updated our manuscript to reflect this.\"}", "{\"title\": \"Response to reviewer 3\", \"comment\": \"Thank you for your constructive review.\n\n1. Is the use of class dependency general or specific to MAML-based methods?\n(1) The benefits of class dependency are not restricted to MAML-based methods. 
The goal of class dependency is to provide complementary information to the meta-learner; this is especially important in few-shot learning due to insufficient data.\\n(2) Relating to other SOA: (a) TADAM makes use of conditional transformations based on tasks representations for metric scaling; class dependencies can also be incorporated into TADAM with the additional benefit of capturing example-level class relationships. (b) LEO can also make use of the class dependency for improving the conditional generation of model parameters.\\n\\n2. The relationship between the metric space and the base-learner.\", \"the_proposed_framework_captures_the_dual_views_of_a_classification_task\": \"a global view that is aware of the relationships among all classes, and a local views of the current N-way K-shot classification task. The metric space, or the global view, is pre-trained in a way that is independent of the current N-way K-shot task; while the base-learner, or the local view, attempts to develop representations for the current classification task alone.\\n\\n3. \\u201cWhat would happen if similar process keeps on? E.g., by building the third stage that modulates the features from the previous two?\\u201d\\nThis is a very interesting question. One can build different stages of conditional transformations associated with different granularities of class-dependency. With metric spaces trained to capture different levels of class-dependency, one could modulate the base-learner in a hierarchical manner.\\n\\n4. How to make use of hierarchical class structure?\\nOne can employ a curriculum learning strategy to learn the metric space at different levels of the hierarchy to better train the metric space. As mentioned in 3, the hierarchical class structure can also be used to train different metric spaces and conditionally modulate representations in a hierarchical manner.\"}", "{\"title\": \"Updated version of the paper\", \"comment\": \"We thank the reviewers for their valuable feedback. The main changes we have made in the manuscript include:\\n\\n(1) Clarifications on metric learning notations and the fact that the metric space is pre-trained.\\n(2) Additional discussions about the relationships between the metric space and the base classifier.\\n(3) Highlight the differences between CAML and TADAM.\\n(4) Hyperparameters and other small edits.\"}", "{\"title\": \"An interesting paper with some areas yet to exploit.\", \"review\": \"[Summary]\\nThe paper presents an enhancement to the Model-Agnostic Meta-Learning (MAML) framework to integrate class dependency into the gradient-based meta-learning procedure. Specifically, the class dependency is encoded by embedding the training examples via a clustering network into a metric space where semantic similarity is preserved via affinity under Euclidean distance. Embedding of an example in this space is further employed to modulate (scale and shift) features of the example extracted by the base-learner via a transformation network, and the final prediction is made on top of the modulated features. Experiments on min-ImageNet shows that the proposed approach improves the baseline of MAML. \\n\\nPros\\n- An interesting idea of leveraging class dependency in meta-learning.\\n- Solid implementation with reasonable technical solutions.\\n\\nCons\\n- Some relevant interesting areas/cases were not exploited/tested.\\n- Improvement over state-of-the-arts (SOA) is marginal or none. 
\\n\\n[Originality]\\nThe paper is motivated by an interesting observation that class dependency in the label space can also provide insights for meta-learning. This seems to be first introduced in the context of meta-learning.\\n\\n[Quality]\\nOverall the paper is well executed in some aspects, including motivation and technical implementation. There are, however, a few areas/cases I would like to see more from it so as to make a stronger case. \\n\\nIn terms of generalization, the proposed enhancement to MAML is claimed to be orthogonal to other SOAs that are also within the framework based on gradient-descent, e.g. LEO. It is not quite clear to me that if the use of class dependency can lead to general benefits to alike methods like LEO, or if it is just a specific case for the MAML baseline. Actually, it would be interesting to see how the proposed class-conditional modulation can help other SOA in table 1. Also, more empirical results from other use cases (e.g., other datasets or problems) also help provide more insights here. These augmentation can better justify the value or significance of this work. \\n\\nIn the specific formulation of the approach in Fig 2, it looks to me that the whole system is a compounded framework that combines two classifiers with one (base-learner) producing base representation, and the second injects side-information (e.g., from class-dependency in this case) to modulates the base representation before the final prediction. I just wonder what would happen if similar process keeps on? E.g., by building the third stage that modulates the features from the previous two? Or what if we swap the roles of base-learner and the embedding from the metric space (i.e., using the base-learner to modulate the embedding)? It looks to me that the feature/embedding from both components (in Fig 5 and 6) are optimized to improve separability. The roles they play in this process are also very interesting to get more elucidation. \\n \\nAnother point worth discussion is that the class dependency currently imposed does not see to include hierarchical structure among classes, i.e., the label space is still flat. It would be great if this can be briefly discussed with respect to the current formulation to better inspire the future work.\\n\\n[Clarity]\\nThe paper is generally well written and I did not have much difficulty to follow. \\n\\n[Significance]\\nWhile the paper is built on an interesting idea, there are still a few areas for further improvement to justify its significance (the the comments above).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good paper\", \"review\": \"TL;DR. Significant contribution to meta-learning by incorporating latent metrics on labels.\\n\\n* Summary\\n\\nThe manuscript builds on the observation that using structured information from the labels space improves learning accuracy. The proposed method --CAML-- is an instance of MAML (Finn et al., 2017), where an additional embedding is used to characterize the dissimilarity among labels.\\n\\nWhile quite natural, the proposed method is supported by a clever metric learning step. The classes are first represented by centroids and an optimal mapping $\\\\phi$ is then learnt by maximizing a clustering entropy (similarly to what is performed in a K-means-flavored algorithm, though this connection is not made in the manuscript). 
A conditional batch normalization (Dumoulin et al., 2017) is then used to model how closeness (in the embedding space $f_\\\\phi$) among labels is taken into account at the meta-learning level.\\n\\nExisting literature is well acknowledged and I find the numerical experiments to be convincing. In my opinion, a clear accept.\\n\\n* Minor issues\\n\\n- I would suggest adding a footnote explaining why Table 1 reports confidence intervals and not just standard deviations. How are constructed those intervals?\\n- Section 3.2 bears ambiguity as the manuscript reads \\\"We first define centroids [...]\\\" depending on $f_\\\\phi$ which is then defined as the argument of the minim of the entropy term. What appears as a circular definition is merely the effect of loose writing yet I am afraid it would confuse readers. I would suggest to rewrite this part, maybe using a pseudo-code to better make the point that $f_\\\\phi$ is learnt.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A paper with clear idea for few-shot learning, but there are still some questions about the paper.\", \"review\": \"This paper proposes a new few-shot learning method with class dependencies. To consider the structure in the label space, the authors propose to use conditional batch normalization to help change the embedding based on class-wise statistics. Based on which the final classifier can be learned by the gradient-based meta-learning method, i.e., MAML. Experiments on MiniImageNet show the proposed method can achieve high-performance, and the proposed part can be proved to be effective based on the ablation study.\\n\\nThere are three main concerns about this paper, and the final rating depends on the authors' response.\\n1. The motivation\\nThe authors claim the label structure is helpful in the few-shot learning. If the reviewer understands correctly, it is the change of embedding network based on class statistics that consider such a label structure. From the objective perspective, there are no terms related to this purpose, and the embedding space learning is also based on the same few-shot objective. Will it introduces more information w.r.t. only using embedding space to do the classification?\\n\\n2. The novelty.\\nThis paper looks like a MAML version of TADAM. Both of the methods use the conditional batch normalization in the embedding network, while CAML uses MAML to learn another classifier based on the embedding. Although CAML uses the CBN at the example level and considers the class information in a transductive setting, it is not very novel. From the results, the proposed method uses a stronger network but does not improve a lot w.r.t. TADAM.\\n\\n3. Method details\\n3.1 Since CBN is example induced, will it prone to overfitting?\\n3.2 About the model architecture. \\nCAML uses a 4*4 skip connection from input to output. It is OK to use this improve the final performance, but the authors also need to show the results without the skip connection to fairly compare with other methods. Is this skip connection very important for this particular model? Most methods use 64 channel in the convNet while 30 channels are used in this paper. Is this computational consideration or to avoid overfitting? 
It is a bit strange that the main network is just four layers but the conditional network is a larger and stronger ResNet.\n3.3 About the MAML gradients\nHow to compute the gradient in the MAML flow? Will the embedding network be updated simultaneously? In other words, will the MAML objective influence the embedding network?\n3.4 The training details are not clear. \nThe concrete training setting is not clear. For example, does the method need model pre-training? What is the learning rate, and how to adapt it? For the MAML, we also need the inner-update learning rate. How many epochs does CAML need?\n3.5 How about building MAML directly on the embedding space?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
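Much of the discussion in this record concerns conditionally modulating the base-learner's features with a scale and shift predicted from a metric-space embedding (conditional batch normalisation / FiLM-style). A minimal sketch of that kind of feature-wise modulation follows; the module name, layer sizes (a 512-d embedding and 30 channels, echoing numbers quoted in the responses), and overall structure are illustrative assumptions, not the actual CAML architecture.

```python
import torch
import torch.nn as nn

class ConditionalModulation(nn.Module):
    """Scale-and-shift feature modulation conditioned on an external embedding.

    Generic FiLM/conditional-batch-norm style block, written only to illustrate
    the kind of modulation discussed above; sizes and names are hypothetical.
    """

    def __init__(self, embed_dim: int, num_channels: int):
        super().__init__()
        # Predict a per-channel scale (gamma) and shift (beta) from the embedding.
        self.to_gamma = nn.Linear(embed_dim, num_channels)
        self.to_beta = nn.Linear(embed_dim, num_channels)

    def forward(self, features: torch.Tensor, embedding: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, height, width); embedding: (batch, embed_dim)
        gamma = self.to_gamma(embedding).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(embedding).unsqueeze(-1).unsqueeze(-1)
        # (1 + gamma) keeps the transform near identity when gamma and beta start small.
        return features * (1.0 + gamma) + beta

# Example: modulate a 30-channel feature map with a 512-d metric-space embedding.
block = ConditionalModulation(embed_dim=512, num_channels=30)
feats = torch.randn(4, 30, 8, 8)
emb = torch.randn(4, 512)
out = block(feats, emb)  # same shape as feats
```

In a CAML-like setup the embedding would come from the pre-trained metric space and the modulated features from the base learner, but any encoder producing a per-example embedding could drive the same block.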
SJf_XhCqKm
Open Loop Hyperparameter Optimization and Determinantal Point Processes
[ "Jesse Dodge", "Kevin Jamieson", "Noah Smith" ]
Driven by the need for parallelizable hyperparameter optimization methods, this paper studies open loop search methods: sequences that are predetermined and can be generated before a single configuration is evaluated. Examples include grid search, uniform random search, low discrepancy sequences, and other sampling distributions. In particular, we propose the use of k-determinantal point processes in hyperparameter optimization via random search. Compared to conventional uniform random search where hyperparameter settings are sampled independently, a k-DPP promotes diversity. We describe an approach that transforms hyperparameter search spaces for efficient use with a k-DPP. In addition, we introduce a novel Metropolis-Hastings algorithm which can sample from k-DPPs defined over any space from which uniform samples can be drawn, including spaces with a mixture of discrete and continuous dimensions or tree structure. Our experiments show significant benefits in realistic scenarios with a limited budget for training supervised learners, whether in serial or parallel.
[ "hyperparameter optimization", "black box optimization" ]
https://openreview.net/pdf?id=SJf_XhCqKm
https://openreview.net/forum?id=SJf_XhCqKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJg8RdU4gV", "BkeUrUReAX", "rkl41X0xC7", "BklWLzRl07", "SylYFraThX", "Bke7SBzq2m", "rklkvgsL27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545001165732, 1542673982348, 1542673115914, 1542672969055, 1541424512780, 1541182778840, 1540956247164 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1373/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1373/Authors" ], [ "ICLR.cc/2019/Conference/Paper1373/Authors" ], [ "ICLR.cc/2019/Conference/Paper1373/Authors" ], [ "ICLR.cc/2019/Conference/Paper1373/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1373/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1373/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This is a very clearly written, well composed paper that does a good job of placing the proposed contribution in the scope of hyperparameter optimization techniques. This paper certainly appears to have been improved over the version submitted to the previous ICLR. In particular, the writing is much clearer and easy to follow and the methodology and experiments have been improved. The ideas are well motivated and it's exciting to see that sampling from a k-DPP can give better low discrepancy sequences than e.g. Sobol. However, the reviewers still seem to have two major concerns, namely novelty of the approach (DPPs have been used for Bayesian optimization before) and the empirical evaluation.\", \"empirical_evaluation\": \"As Reviewer1 notes, there are much more recent approaches for Bayesian optimization that have improved significantly over the TPE method, also for conditional parameters. There are also more recent approaches proposing variants of random search such as hyperband.\", \"novelty\": \"There is some work on using determinantal point processes for Bayesian optimization and related work in optimal experimental design. Optimal design has a significant amount of literature dedicated to designing a set of experiments according to the determinant of their covariance matrix - i.e. D-Optimal Design. This work may add some interesting contributions to that literature, including fast sampling from k-DPPs, etc. It would be useful, however, to add some discussion of that literature in the paper. Jegelka and Sra's tutorial at NeurIPS on negative dependence had a nice overview of some of this literature.\\n\\nUnfortunately, two of the three reviewers thought the paper was just below the borderline and none of the reviewers were willing to champion it. There are very promising and interesting ideas in the paper, however, that have a lot of potential. In the opinion of the AC, one of the most powerful aspects of DPPs over e.g. low discrepancy sequences, random search, etc. is the ability to learn a distance over a space under which samples will be diverse. This can make a search *much* more efficient since (as the authors note when discussing random search vs. grid search) the DPP can sample more densely in areas and dimensions that have higher sensitivity. It would be exciting to learn kernels specifically for hyperparameter optimization problems (e.g. a kernel specifically for learning rates that can capture e.g. logarithmic scaling). 
Taking the objective into account through the quality score, as proposed for future work, also seems very sensible and could significantly improve results as well.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A well written, interesting paper on designing experiments for hyperparameter optimization using DPPs, but with lingering concerns over novelty and experiments.\"}", "{\"title\": \"Thank you for your review.\", \"comment\": \"We thank AnonReviewer3 for their review, and address their points in order.\\n\\nWe experimented with a number of different kernels, including cosine similarity (the most common approach used with K-DPPs), a kernel built using Levenshtein distance, and the RBF kernel. We found our results to be quite robust to the choice of the kernel -- we saw the K-DPP outperform the other approaches on our hyperparameter optimization experiments for all three. Similarly, we experimented with a number of different bandwidths for the RBF kernel, and found that as long as the bandwidth was large enough that the points interacted each other, it performed well. Our experiments focused on the presented feature function, as it was the most natural, but we expect any feature function which allows the points to be repulsed from one another (through the kernel) would behave similarly. In this work, we set the quality function to be 1, and have left learning the quality function to future work.\\n\\nIf the search space is continuous, the mixing rate of Alg. 2 is not known. In practice, the MCMC algorithm is quite fast (a small fraction of the expense of training the models) so we ran the algorithm for 10x as long as the expected mixing rate if the space had been discrete, though our synthetic and real experimental results indicated that it was mixed significantly earlier.\\nThe k log(N) term appears when analyzing the difference between Algorithm 1 and Algorithm 2: computation and storage of the NxN kernel matrix L. In the discrete case, Algorithm 1 requires computing all of L, which has time and space complexity of O(N^2). Algorithm 2, instead of constructing L directly, only uses a submatrix of L computed on the fly. It runs for O(N log(N)) steps, and at each step computes and stores at most O(k) additional distances, leading to a total of O(Nk log(N)) time and space complexity (with a max of O(N^2) once it computes all of L). Therefore, Algorithm 2 has better complexity when O(Nk log(N)) < O(N^2), or when k log(N) < N. Otherwise, Algorithm 1 and 2 have the same time and space complexity. We will include a clarification in the paper, please let us know if this is still unclear.\\n\\nWhile we agree Algorithm 2 is a straightforward extension, it is an important one for the community. Other work that has used K-DPPs for hyperparameter optimization (Kathuria et al., 2016, Wang et al., 2017) has been restricted to non-tree structured domains, and has discretized continuous spaces to be discrete so they could use existing sampling algorithms, which we experimentally found to hurt performance. We introduce the ability to sample from more realistic hyperparameter spaces.\\n\\nThank you for pointing out the small changes, they will be updated. 
\\n\\nThe title of your review mentions worries about novelty, but as we mentioned in a reply to another review, we believe this approach (drawing samples from tree-structured, mixed discrete and continuous spaces in the open loop regime) and analysis (including dispersion calculation) are novel. We welcome further discussion, especially of more specific novelty concerns that may arise, and look forward to further suggestions on how to improve our paper or clarifications we can provide.\"}", "{\"title\": \"How else can we improve?\", \"comment\": \"We thank AnonReviewer2 for their thoughtful review.\\n\\nWe have been convinced by the synthetic and experimental results (on two hyperparameter search spaces) of the efficacy and generality of our approach, and ask what experiments on an image dataset would contribute beyond the current experimental results?\\n\\nWe look forward to further discussion, especially if there are any other points we can clarify, or additional ways you suggest we could improve our work.\"}", "{\"title\": \"Thank you for both rounds of review you have provided for our paper.\", \"comment\": \"We greatly appreciate both rounds of review that you've provided for our paper. We took the first review quite seriously, and it seems that you missed some of the changes we made in response to your review and others. Specifically, addressing items that have changed that were mentioned in your first review:\\nThe first experiment actually shows dispersion, not discrepancy. While we include star discrepancy in the appendix (in Figure 5, not Figure 1), we argue in section 3 that dispersion is a better measure of the quality of a point set for optimization (though discrepancy has been the tool of choice for previous work). We include a theorem which bounds the optimization error with dispersion, a connection which discrepancy lacks, and show our approach outperforms the Sobol sequence and uniform sampling. We believe this connection will encourage future work to move away from evaluating open loop methods with discrepancy, and to use dispersion instead.\\nThe third experiment does not address discretization error. Instead, it is another hyperparameter optimization experiment on a different search space, again showing that samples from a K-DPP outperform uniform samples, the Sobol sequence, and BO-TPE.\\n\\nWe believe our method for defining a K-DPP over tree-structured, mixed discrete-continuous spaces is novel, as is the sampling algorithm we introduce in Algorithm 2. We know of no other approach that can draw K-DPP samples from tree-structured, mixed discrete-continuous domains. Previous work using K-DPPs for hyperparameter optimization (Kathuria et al., 2016, Wang et al., 2017) discretize continuous domains, then use a known algorithm to sample from a discrete (non-tree structured) base set. Your first review mentions a plot which showed the error from discretizing the search space, empirically motivating this contribution.\", \"to_address_the_listed_reservations\": \"we do find it surprising that samples from a K-DPP match or outperform the Sobol sequence in our synthetic measures, as the Sobol sequence was designed specifically to perform well. Additionally, it has become perhaps the most frequently used approach (e.g. 
without function evaluation results, Spearmint returns the Sobol sequence, while our results indicate that it should return K-DPP samples instead).\\n\\nWe agree that scaling into large K and D is important, but that isn't the focus of this work, and there is a large body of work on improving space and time complexity of GPs which is directly applicable to our approach.\\n\\nWhen considering which pieces of recent work to compare against, we emphasize that our work is not trying to answer the question, \\\"Is active learning helpful?\\\" by comparing active learning approaches against our open loop approach; instead, we focus on comparing against other non-active learning approaches. For example, Hyperband starts by uniformly sampling K hyperparameter assignments, then (partially) training and evaluating models with those assignments. This work does not advocate replacing Hyperband with a single draw from a K-DPP (i.e., replacing an active learning strategy with a non-active learning strategy), but it does argue that the uniform sampling step in Hyperband be replaced with a draw from a K-DPP (replacing an open loop strategy with another, better one). All of your suggested comparisons are against active learning approaches.\\n\\nWe do include experiments comparing against Spearmint (an active learning approach), though this is meant to illustrate the large cost in optimization time that active learning entails. We appreciate the suggestion to compare against Spearmint with more parallelization, but note that in our experiments we compared against the most parallel possible Spearmint configuration (as well as a number of others). Any active learning strategy (excluding Hyperband-style evaluations of partially trained models) will take at least twice as long in expectation as a fully-parallel non-active learning strategy like a K-DPP to train and evaluate models with a set of hyperparameter assignments, so we expect the results in Figure 4 to hold for any number of hyperparameters.\\n\\nThank you again for your review, we look forward to further discussion.\"}", "{\"title\": \"small number of hyperparameters, comparison with spearmint not strong enough\", \"review\": \"I reviewed the same paper last year. I am appending a few lines based on the changes made by authors.\\n\\nThe authors propose k-DPP as an open loop (oblivious to the evaluation of configurations) method for hyperparameter optimization and provide its empirical study and comparison with other methods such as grid search, uniform random search, low-discrepancy Sobol sequences, BO-TPE (Bayesian optimization using tree-structured Parzen estimator) by Bergstra et al. (2011). The k-DPP sampling algorithm and the concept of k-DPP-RBF over hyperparameters are not new, so the main contribution here is the empirical study. \\n\\nThe first experiment by the authors shows that k-DPP-RBF gives better star discrepancy than uniform random search while being comparable to low-discrepancy Sobol sequences in other metrics such as distance from the center or an arbitrary corner (Fig. 1).\\n\\nThe second experiment shows surprisingly that for the hard learning rate range, k-DPP-RBF performs better than uniform random search, and moreover, both of these outperform BO-TPE (Fig. 2, column 1).\\n\\nThe third experiment shows that on good or stable ranges, k-DPP-RBF and its discrete analog slightly outperform uniform random search and its discrete analog, respectively.\\n\\nI have a few reservations. 
First, I do not find these outcomes very surprising or informative, except for the second experiment (Fig. 2, column 1). Second, their study only applies to a small number like 3-6 hyperparameters with a small k=20. The real challenge lies in scaling up to many hyperparameters or even k-DPP sampling for larger k. Third, the authors do not compare against some relevant, recent work, e.g., Springenberg et al. (http://aad.informatik.uni-freiburg.de/papers/16-NIPS-BOHamiANN.pdf) and Snoek et al. (https://arxiv.org/pdf/1502.05700.pdf) that is essential for this kind of empirical study.\\n\\nCOMMENTS ON THE CHANGES SINCE THE LAST YEAR\\n\\nI am not convinced by the comparison with Spearmint added by the authors since the previous version. It is unclear to me if the comparison of wall clock time and accuracy holds for larger number of hyperparameters or against Spearmint with more parallelization.\\n\\nIn addition the authors do not compare against more recent work, e.g., \\n\\n@INPROCEEDINGS{falkner-bayesopt17,\\n author = {S. Falkner and A. Klein and F. Hutter},\\n title = {Combining Hyperband and Bayesian Optimization},\\n booktitle = {NIPS 2017 Bayesian Optimization Workshop},\\n year = {2017},\\n month = dec,\\n}\\n\\n@InProceedings{falkner-icml-18,\\n title = {{BOHB}: Robust and Efficient Hyperparameter Optimization at Scale},\\n author = {Falkner, Stefan and Klein, Aaron and Hutter, Frank},\\n booktitle = {Proceedings of the 35th International Conference on Machine Learning (ICML 2018)},\\n pages = {1436--1445},\\n year = {2018},\\n month = jul,\\n}\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"interesting and clear idea on using DPPs for hyperparameter search\", \"review\": [\"This paper proposes an approach to get samples with high dispersion for hyperparameter optimisation.\", \"It theoretically motivates the use of Determinantal Point Processes in yielding such samples.\", \"Further, an iterative mixing algorithm is proposed to handle continuous and discrete sample space.\", \"Experiments on finding hyperparameter for sentence classification are presented. In terms of accuracy, it performs better than other open-loop methods. In comparison to closed-loop methods, it yields parameter settings with comparable performance but with gains in wall clock time.\", \"The distinction from close-loop approaches makes it easy to parallelise.\", \"This paper is novel in its modelling of hyperparameter optimisation with DPP and the theoretical justification and experiments have been clearly presented. It would be interesting to explore the practicability of the method on more large-scale experiments on image related tasks.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"recommending rejection because of lack of analyses and questionable novelty\", \"review\": [\"The authors propose to use k-DPP to select a set of diverse parameters and use them to search for a good a hyperparameter setting.\", \"This paper covers the related work nicely, with details on both closed loop and open loop methods. The rest of the paper are also clearly written. 
However, I have some concerns about the proposed method.\", \"It is not clear how to define the kernel, the feature function and the quality function for the proposed method. The choices of those seem to have a huge impact on the performance. How was those functions decided and how sensitive is the result to hyperparameters of those functions?\", \"If the search space is continuous, what is the mixing rate of Alg. 2? In practice, how is \\\"mixed\\\" decided? What exactly is the space and time complexity? I'm not sure where k log(N) comes from in page 7.\", \"Alg. 2 is a straight forward extension of Alg. 1, just with L not explicitly computed. I think it would have more novelty if some theoretical analyses can be shown on the mixing rate and how good this optimization algorithm is.\"], \"other_small_things\": \"- citation format problems in, for example, Sec. 4.1. It should be \\\\citep instead of \\\\cite. \\n- it would be good to mention Figure 2 in the text first before showing it. \\n\\n[Post rebuttal]\\nI would like to thank the authors for their clarifications. However, I am still concerned with the novelty. The absence of provable mixing rate is also a potential weakness. I think a clearer emphasis on the novelty, e.g. current algorithm with mixing rate analyses or more thorough empirical comparisons will make the paper stronger for resubmission.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
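The record above repeatedly discusses Metropolis-Hastings sampling of a k-DPP with an RBF kernel over a continuous search space. Below is a generic sketch of one MH swap move whose stationary distribution is proportional to det(L_Y); the unit-cube search space, fixed bandwidth, fixed chain length, and k = 20 (a value quoted in the reviews) are assumptions for illustration, and this is not the paper's Algorithm 2.

```python
import numpy as np

def rbf_kernel(points, bandwidth):
    # points: (k, d) array; returns the k x k RBF similarity (Gram) matrix.
    sq_dists = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def kdpp_mh_swap(points, rng, bandwidth=0.5):
    """One Metropolis-Hastings swap move targeting a k-DPP over the unit cube.

    Propose replacing one uniformly chosen point with a fresh uniform draw and
    accept with probability min(1, det(L_new) / det(L_old)), so the chain's
    stationary distribution is proportional to det(L_Y). Illustration only.
    """
    k, d = points.shape
    i = rng.integers(k)
    proposal = points.copy()
    proposal[i] = rng.random(d)  # uniform proposal inside [0, 1]^d
    _, logdet_old = np.linalg.slogdet(rbf_kernel(points, bandwidth))
    _, logdet_new = np.linalg.slogdet(rbf_kernel(proposal, bandwidth))
    accept_prob = np.exp(min(0.0, logdet_new - logdet_old))  # log space for stability
    return proposal if rng.random() < accept_prob else points

rng = np.random.default_rng(0)
config_set = rng.random((20, 3))  # k = 20 configurations in a 3-d search space
for _ in range(2000):             # fixed budget; the mixing time is problem-dependent
    config_set = kdpp_mh_swap(config_set, rng)
```

Because the whole chain never needs the full N x N kernel matrix over a discretised grid, this style of sampler is what makes the continuous, and in the paper's case tree-structured, setting tractable.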
S1gd7nCcF7
Self-Supervised Generalisation with Meta Auxiliary Learning
[ "Shikun Liu", "Edward Johns", "Andrew Davison" ]
Auxiliary learning has been shown to improve the generalisation performance of a principal task. But typically, this requires manually-defined auxiliary tasks based on domain knowledge. In this paper, we consider that it may be possible to automatically learn these auxiliary tasks to best suit the principal task, towards optimum auxiliary tasks without any human knowledge. We propose a novel method, Meta Auxiliary Learning (MAXL), which we design for the task of image classification, where the auxiliary task is hierarchical sub-class image classification. The role of the meta learner is to determine sub-class target labels to train a multi-task evaluator, such that these labels improve the generalisation performance on the principal task. Experiments on three different CIFAR datasets show that MAXL outperforms baseline auxiliary learning methods, and is competitive even with a method which uses human-defined sub-class hierarchies. MAXL is self-supervised and general, and therefore offers a promising new direction towards automated generalisation.
[ "meta learning", "auxiliary learning", "multi-task learning", "self-supervised learning" ]
https://openreview.net/pdf?id=S1gd7nCcF7
https://openreview.net/forum?id=S1gd7nCcF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HkewfbFWe4", "B1esP4YtAm", "BygTJNKtRQ", "HkesnzFKCQ", "rJl-0ufypm", "Bkxpr4aq3m", "rylvbv_93Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544814863487, 1543242851021, 1543242724899, 1543242419309, 1541511369067, 1541227589021, 1541207807471 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1372/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1372/Authors" ], [ "ICLR.cc/2019/Conference/Paper1372/Authors" ], [ "ICLR.cc/2019/Conference/Paper1372/Authors" ], [ "ICLR.cc/2019/Conference/Paper1372/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1372/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1372/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes a framework for generating auxiliary tasks as a means to regularize learning. The idea is interesting, and the method is simple. Two of the three reviewers found the paper to be well-written. The experiment include a promising result on the CIFAR dataset. The reviewer's brought up several concerns regarding the description of the method, the generality of the method (e.g. the requirement for class hierarchy), the validity and description of the comparisons, and the lack of experiments on domains with much more complex hierarchies. None of these concerns were not addressed in revisions to the paper. Hence, the paper in it's current state does not meet the bar for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta review\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank for the reviewer for their positive comments on our work, and we share our responses below.\\n\\nThe purpose of our work is not to achieve state-of-the-art performance simply by incorporating the latest network architectures and optimisers. Instead, we provide a novel general framework for automating generalisation, and show that when used with standard classification networks across all baselines, our method performs the best.\\n\\nFurthermore, as we also explained in Reviewer #3, the hyper-parameters for defining a hierarchy is not critical, and we can choose an arbitrary hierarchy whilst still achieving better performance than baselines. In the future work, we would like to explore how to find the optimal hierarchy in an automatic manner, or provide an alternative solution on building a general type of auxiliary tasks (such as regression). However, this is the first work to present a double-gradient method for auxiliary task generation, and we believe that it is important to present the success of this initial method now given how simple and general it is, and then fine-tune other aspects in future work.\"}", "{\"title\": \"Clarification and other comments\", \"comment\": \"We thank for the reviewer for their comments on our work, and we share our responses below.\\n\\n1) We agree that we did not provide a clear definition of \\\"task\\\". In the present paper there are two tasks: classification into primary labels, and classification into secondary labels. We did not mean to imply that the classification of a specific class is a task on its own. 
We agree however that a clearer introduction of the terminology would be clearly helpful and we plan to add this to the final submission.\\n\\n2) This comment is not entirely correct and we would like to apologies for any confusion in the paper. Actually, the update of the generator depends on the improvement of the classifier for the *principal* labels on the *meta-training* data, i.e. the improvement in generalisation to unseen data. Thus, the optimal auxiliary labels are not the ground-truth labels for the principal classes, since this would make both terms in the minimisation for $\\\\theta_1$ (the second equation in 3.2) identical and not allow any leveraging of the meta-training data. Also, we would argue that the KL-divergence, rather then introducing noise, allows us to avoid collapsing classes which we would claim are due to dying neurons (again, there is not loss/mechanism drawing the auxiliary labels to be the same as the primary ones). These claims are supported by showing that providing random labels does not lead to any improved performance and by our experience that using hard labels does indeed improve performance.\\n\\n3) Providing fair comparisons across a range of very different methods is not easy when other methods aim to solve a different problem. Concerning the comparison with prototypical networks, we do agree that this is not a fair comparison and we would like to change the phrasing in the paper. The original reason for associating this to the prototypical network was that we employ their zero-shot setup: i.e. we use a VGG network to obtain prototypes on the meta-data and then use these prototypes to define an auxiliary task on the training-data.\\n\\n4) We do agree that requiring the class hierarchy is a current limitation of the work. While it is still general enough for solving classification tasks (we merely have to choose a fixed number of sub-classes per task, e.g. 5 without having to provide anything else), we would want to look at more general auxiliary task in future. One option we are considering is employing an auxiliary regression task, where the generator network would provide vectors and the corresponding loss would be simple regression. However, since this is the first work to use a double gradient method for auxiliary task generation, we believe that presenting results with a comparison to human auxiliary labels, which itself also requires this hierarchy, is a good starting point.\\n\\n5) We would very much like to test our approach on more complex datasets with more varied classes, and this will be part of future work. However, we would like to repeat that our approach can work with an arbitrary hierarchy (e.g. assigning the same number of sub-classes to every class). The reason why we only used 100 classes in our experiments is for allowing the comparison with human-defined classes, but in principle we could use any number of sub-classes per primary class. In the CIFAR10 dataset in which a hierarchy is not defined, we show that using 6 different hierarchies all lead to a better generalisation.\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"We thank for the reviewer for their comments on our work, and we share our responses below.\\n\\n1. Novelty: To the best of our knowledge, this is the first paper presenting a simple solution to generating useful auxiliary tasks in a self-supervised manner. 
The idea indeed was inspired by other works in auxiliary learning, but only to the extent that we also use auxiliary tasks to improve performance of a principal task. The method is not a heuristic; it is theoretically motivated by use of the double gradient, and inspired by the success of this in meta learning (e.g. MAML [1]). If the reviewer thinks our method is an incremental contribution or similar to previous algorithms, please list the specific references. \\n\\n2. The theoretical insight in this paper comes from the recent advancements in using a double gradient, such as in MAML [1], or understanding what makes a good auxiliary data sampler [2]. The inner gradient is based on the standard auxiliary learning loss as proposed in other works, whereas the outer gradient uses this inner gradient to actually learn the auxiliary tasks. The use of an outer gradient for auxiliary learning is our key novelty, and has not been used in any works before.\\n\\n3. Feature distributions of training and meta-training data (target and auxiliary data in your language) are actually not identical. The \\\"learning to generalise\\\" success from our method is due to closing the *existing* distribution shift in these two datasets. If the distributions are identical, then we wouldn't have any improved generalisation from our method.\\n\\n4. Both CIFAR10 and CIFAR100 are the subsets from 80 million tiny images dataset [3]. As described in the website and paper, all images are collected from the internet and partially labelled by humans, and thus indeed present a real-world setup rather than a synthetic setup. Further, we show that if a harder test set with a more variety exists (CIFAR10.1v6), out method could provide even better generalisation (Figure 4). Thus, we hope the reviewer could better explain why you think our algorithm could fail in real-world scenarios. \\n\\n[1] Finn et al. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks ICML, 2017.\\n[2] Zhang et al. Fine-Grained Visual Categorization using Meta-Learning Optimization with Sample Selection of Auxiliary Data, ECCV 2018.\\n[3] http://people.csail.mit.edu/torralba/tinyimages/\"}", "{\"title\": \"The paper is an incremental contribution for an artificially sounding problem\", \"review\": \"This paper proposes an algorithm for auxiliary learning. Given a target prediction task to be learned on training data, the auxiliary learning utilizes external training data to improve learning. The authors focus on a setup where both target and external training data come from the same distribution but differ in class labels, where each class in the target data is a set of finer-grained classes in the auxiliary data. The authors propose a heuristic for learning from both data sets through minimization of a joint loss function. 
The experimental results show that the proposed methods works well on this particular setup on CIFAR data set.\", \"strengths\": [\"a new auxiliary learning algorithm\", \"positive results on CIFAR data set\"], \"weaknesses\": [\"novelty is low: the proposed algorithm is a heuristic similar to previously proposed algorithms in the transfer learning and auxiliary learning space\", \"there is no attempt to provide a theoretical insight into the performance of the algorithm\", \"the problem assumptions are too simplistic and unrealistic (feature distributions of target and auxiliary data are identical), so it is questionable if the proposed algorithm has practical importance\", \"experiments are performed using a synthetic setup on a single data set, so it remains unclear if the algorithm would be successful in a real life scenario\", \"the paper is poorly written and sentences are generally very hard to parse. For example, section 3.1 is opened by statements such as \\\"(we use) a multi-task evaluator which trains on the principal and auxiliary tasks, and evaluates the performance of the auxiliary tasks on a meta set\\\"??\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Not accurate to call it meta-learning, the auxiliary labels might bring few helpful information, lacking comparisons to several important baselines and benchmark datasets.\", \"review\": \"This paper proposes a self-auxiliary-training method that aims to improve the generalization performance of simple supervised learning. The basic idea is to train the classification network to predict fine-level auxiliary labels in addition to the ground-truth coarse label, where the auxiliary labels used in training is generated by a generator network. During training, the classification network and the generator network are alternatively updated, and the update of the latter aims to maximize the improvement of the former after using the generated auxiliary label for training. The method requires a class hierarchy in advance to define the binary mask applied to the output layer for auxiliary class prediction. A KL divergence term is attached to the optimization objective to avoid generating trivial and collapsing auxiliary classes.\", \"pros\": \"1) The main idea is simple and easy to understand.\\n2) It discusses the class collapsing problem in generating pseudo (auxiliary) labels and provides a reasonable solution, i.e., using KL divergence as regularization.\\n3) Uses several visualizations to show experimental results.\", \"cons\": \"1) The problem it aims to solve is neither multi-task learning nor meta-learning: it tries to solve a supervised classification problem defined on principle classes, with the help of simultaneously predicting/generating auxiliary class labels. Although the concept of \\\"task\\\" is not explicitly defined in this paper, the authors seem to associate each task with a specific class. This is not correct: in meta-learning, each task is a subset of classes drawn from a ground set of classes, and different tasks are independently sampled. In addition, the classification models for different tasks are independent, though their training might be related by a meta-learner. 
Hence, the claims in multiple places of this paper and the names for the two networks are misleading.\\n\\n2) At the end of Page 4, the authors show that the update of the generator only depends on the improvement of the classifier after using the auxiliary label for training. In fact, the optimal auxiliary labels minimizing the objective is the ground truth label for principle classes. This results in the class collapsing problem observed by the authors. The KL divergence regularization introduces extra randomness to the auxiliary labels and thus mitigates the problem, but it hardly provides any useful information except randomness. In other words, the auxiliary labels for a specific principle class are very possible to be multiple noisy copies of the principal label with random perturbations. So it is not convincing to me that the auxiliary labels generated by the generator can be really helpful. My conjecture is that the observed improvements are mainly due to the softness of the auxiliary labels, which has been proved by model compression/knowledge distillation and recent \\\"born-again neural networks\\\". To verify this, the authors might need to compare the results with those methods (which use the generated soft probability of ground truth classes for training), and the \\\"random-noisy copies of soft principle label\\\" mentioned above.\\n\\n3) The experiments lack comparisons to several important baselines from self-supervised learning community, and methods using soft labels for training (as mentioned in 2) above). A successful idea of self-supervised learning is to use the output feature map of the trained classification network to generate auxiliary training signals, since it provides extra information about the learned distance beyond the ground-truth labels. The authors might want to compare to \\\"Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep Clustering for Unsupervised Learning of Visual Features. ECCV 2018.\\\" and \\\"Carl Doersch and Andrew Zisserman. Multi-task self-supervised visual learning. ICCV 2017.\\\" Moreover, since the method is not a meta-learning approach for few-shot learning, it is not fair and also not appropriate to compare with Prototypical Network.\\n\\n4) Although the paper claims that the ground truth fine labels are not required, it requires a class hierarchy, which in the experiments are provided by the dataset and defined between true coarse and fine classes. In practice, such hierarchy might be much harder to achieve than the primary (coarse) labels, and might be as costly to obtain as the true fine-class labels. This weakens the feasibility of the proposed method.\\n\\n5) The experiments only test the proposed method on CIFAR100 and CIFAR10, which has at most 100 fine classes. It is necessary to test it on datasets with much more fine classes and much-complicated hierarchy, e.g., ImageNet, MS COCO or their subsets, which have ideal class hierarchy structures.\", \"minor_comments\": \"Some important equations in the paper should be numbered.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"An interesting idea for applying meta-learning to a problem of learning auxiliary tasks in a self-supervised fashion\", \"review\": \"Summary:\\nThe role of auxiliary tasks is to improve the generalization performance of the principal task of interest. 
So far, hand-crafted auxiliary tasks are generated, tailored for a problem of interest. The current work addresses a meta-learning approach to automatically generate auxiliary tasks suited to the principal task, without human knowledge. The key components of the method are: (1) meta-generator; (2) multi-task evaluator. These two models are trained using the gradient-based meta-learning technique (for instance, MAML). The problem of image classification is considered only, while authors claimed the method can be easily applied to other problems as well.\", \"strengths\": [\"To my best knowledge, the idea of applying the meta-learning to the automatic generation of auxiliary tasks is novel.\", \"The paper is well written and easy to read.\", \"The method nicely blends a few components such as self-supervised learning, meta-learning, auxiliary tasks into a single model to tackle the meta auxiliary learning.\"], \"weakness\": [\"The performance gain is not substantial in experiments. I would like to suggest to use the state-of-the-arts classifier for the principal task and to evaluate how much gain your method can get with the help of auxiliary tasks. You can refer to the state-of-the-arts performance on CIFAR.\", \"If the information on the hierarchy of sub-categories is not available, it will be an annoying hyperparameters that should be well tuned.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
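To make the double-gradient scheme debated in the responses and reviews above concrete, here is a minimal self-contained sketch in PyTorch. It is not the authors' implementation: the tiny functional model, the layer sizes, the learning rates and the uniform-KL regulariser against label collapse are illustrative assumptions; only the structure (an inner update on the joint principal-plus-auxiliary loss, and an outer update of the label generator through that inner step) follows the discussion.

```python
import torch
import torch.nn.functional as F

# Illustrative sizes: 784-d inputs, 10 coarse (principal) classes, 50 auxiliary classes.
D, H, C, A = 784, 256, 10, 50
W_sh  = (0.01 * torch.randn(D, H)).requires_grad_()   # shared trunk
W_cls = (0.01 * torch.randn(H, C)).requires_grad_()   # principal head
W_aux = (0.01 * torch.randn(H, A)).requires_grad_()   # auxiliary head
W_gen = (0.01 * torch.randn(H, A)).requires_grad_()   # auxiliary-label generator

def double_gradient_step(x, y, lr_inner=0.1, lr_outer=0.01, kl_w=0.1):
    # x: (B, D) float inputs, y: (B,) long coarse labels.
    h = torch.relu(x @ W_sh)                           # shared features
    aux = F.softmax(h.detach() @ W_gen, dim=-1)        # generated soft auxiliary labels

    # Inner gradient: joint principal + auxiliary loss; create_graph keeps the
    # dependence of the updated trunk on the generated labels.
    inner = F.cross_entropy(h @ W_cls, y) + \
            (-aux * F.log_softmax(h @ W_aux, dim=-1)).sum(-1).mean()
    (g_sh,) = torch.autograd.grad(inner, (W_sh,), create_graph=True)
    W_sh_fast = W_sh - lr_inner * g_sh                 # "fast" trunk weights

    # Outer gradient: principal loss after the inner step, plus a KL pull towards
    # uniform auxiliary usage to discourage collapsed auxiliary labels.
    outer = F.cross_entropy(torch.relu(x @ W_sh_fast) @ W_cls, y) \
          + kl_w * F.kl_div(aux.clamp_min(1e-8).log(),
                            torch.full_like(aux, 1.0 / A), reduction='batchmean')
    (g_gen,) = torch.autograd.grad(outer, (W_gen,))
    with torch.no_grad():
        W_gen -= lr_outer * g_gen                      # the double-gradient update
```

In a full training loop the trunk and heads would also be stepped by an ordinary optimiser; only the generator update is spelled out because that is where the double gradient enters.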
Hyg_X2C5FX
GAN Dissection: Visualizing and Understanding Generative Adversarial Networks
[ "David Bau", "Jun-Yan Zhu", "Hendrik Strobelt", "Bolei Zhou", "Joshua B. Tenenbaum", "William T. Freeman", "Antonio Torralba" ]
Generative Adversarial Networks (GANs) have recently achieved impressive results for many real-world applications, and many GAN variants have emerged with improvements in sample quality and training stability. However, visualization and understanding of GANs is largely missing. How does a GAN represent our visual world internally? What causes the artifacts in GAN results? How do architectural choices affect GAN learning? Answering such questions could enable us to develop new insights and better models. In this work, we present an analytic framework to visualize and understand GANs at the unit-, object-, and scene-level. We first identify a group of interpretable units that are closely related to object concepts with a segmentation-based network dissection method. Then, we quantify the causal effect of interpretable units by measuring the ability of interventions to control objects in the output. Finally, we examine the contextual relationship between these units and their surrounding by inserting the discovered object concepts into new images. We show several practical applications enabled by our framework, from comparing internal representations across different layers, models, and datasets, to improving GANs by locating and removing artifact-causing units, to interactively manipulating objects in the scene. We provide open source interpretation tools to help peer researchers and practitioners better understand their GAN models.
[ "GANs", "representation", "interpretability", "causality" ]
https://openreview.net/pdf?id=Hyg_X2C5FX
https://openreview.net/forum?id=Hyg_X2C5FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1la7rH-g4", "SJlKyI2FA7", "Syga2ShtR7", "HJlb9ShFRm", "BJgk8S3K0m", "rylRgFDnnQ", "Bklj6-einQ", "H1lQioJchm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544799524642, 1543255520729, 1543255476977, 1543255433350, 1543255366753, 1541335285662, 1541239234955, 1541172123475 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1371/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1371/Authors" ], [ "ICLR.cc/2019/Conference/Paper1371/Authors" ], [ "ICLR.cc/2019/Conference/Paper1371/Authors" ], [ "ICLR.cc/2019/Conference/Paper1371/Authors" ], [ "ICLR.cc/2019/Conference/Paper1371/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1371/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1371/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes an interesting framework for visualizing and understanding GANs, that will be of clear help for understanding existing models and might provide insights for developing new ones.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Intersting framework for the analysis of GANs\"}", "{\"title\": \"Answers to questions for AnonReviewer3\", \"comment\": \"Thank you for your comments and questions; we have incorporated your suggestions in the revision, and we also answer your questions below.\", \"q7\": \"apply the author's methods to other architecture, and to other application domains?\", \"a7\": \"We have applied our method to WGAN-GP model with a different generator architecture, as shown in Figure 16 in Section S-6.3. Our method can find interpretable units for different GANs objectives and architectures.\\n\\nThe general framework can be extended beyond generative models for vision, although that topic is beyond the scope of the current paper. Concurrent work submitted to ICLR 2019 is an example of similar ideas being applied to natural language translation. (https://openreview.net/forum?id=H1z-PsR5KX)\", \"q8\": \"how to choose the 'units' for which they seek interpretation when reporting their results?\", \"a8\": \"We do two analyses. For the dissection analysis examining correlation, u are analyzed as individual units (i.e., |U| = 1). We analyze every individual unit in a layer, and we plot all units that match a segmented concept with IoU exceeding 5%.\\n\\nFor the causal analysis, we choose the elements of U by doing the optimization described in equation (6), which finds an alpha that specifies a contribution for every unit to maximize causal effects, ranking units according to highest alpha, and choosing the number needed to achieve a desired causal effect.\", \"q9\": \"How large does u tend to be? How would one choose it? Is it one filter out of all filters in a certain layer?\", \"a9\": \"To choose U to have strong causal effects, we measure and plot the causal effect of different numbers of units for U as in Figure 4. The increase in causal effect diminishes after about 20 units. To be able to compare different causal sets on an equal basis, we set |U| = 20 for most of our experiments.\", \"q10\": \"When optimizing for sets of units together (using the alpha probabilities and the optimization in eq. 6) what is d? Is it performed for all units in a single layer? More details would be useful here.\", \"a10\": \"Yes, we perform an optimization for all units in a single layer. 
d is the number of all units in a single layer (512, for the case of layer 4 of our Progressive GAN).\\n\\nFor the dissection analysis, we analyze every individual unit in a layer, and we plot all units that match a segmented concept with IoU exceeding 5%. The causal analysis requires identifying sets of units, which is done through the optimization in equation (6).\\n\\nBeyond this objective, learning U involves several additional details including how to specify the big constant for positive intervention, how to sample class-relevant positions, and how to initialize the coefficient alpha. We have added a section S-6.4 to supplementary materials with these implementation details.\", \"q11\": \"Regarding SWD and FID\", \"a11\": \"SWD and FID are measures which estimate realism of the GAN output by measuring the distance between the generated distribution of images and the true distribution of images; Borji (arXiv 2018) surveys and compares these methods at https://arxiv.org/abs/1802.03446. We have clarified these terms and added citations in the paper.\", \"q12\": \"No reference to supp. info and minor typos:\", \"a12\": \"Thank you for your detailed comments; we have updated the text and expanded the supplementary materials. We also added a brief summary of the supplementary material in each section of the main paper.\"}", "{\"title\": \"Answers to questions for AnonReviewer2\", \"comment\": \"Thank you for your comments and questions; we have incorporated your suggestions in the revision, and we answer your questions below.\", \"q3\": \"About diagnosing and improving GANs, please give more details of the human annotation for the artifacts.\", \"a3\": \"We visualize the top 10 highest activating images for each unit, and we manually identify units with noticeable artifacts in this set. (This human annotation was done by an author.)\\nDetails have been added to section 4.2. This method for diagnosing and improving GANs is further analyzed and expanded in the supplementary materials, in section S-6.1.\", \"q4\": \"Minor - I think there is a typo in the first and second paragraphs in section 4.2, Figure 14 -> Figure 8.\", \"a4\": \"Thanks for your detailed comments. We have fixed it.\", \"q5\": \"Have you ever considered to handle these imperfect semantic segmentation models?\", \"a5\": \"We totally agree with the reviewer: the success of our method is linked to the accuracy and comprehensiveness of the segmentation model used. We have performed a human evaluation regarding the accuracy of our method on a Progressive GAN model (on LSUN living rooms), and have found that, our method provides correct labels for 96% of interpretable units. Further details of the evaluation can be found in section S-6.2.\\n\\nIn addition, a semantic segmentation model can perform poorly if the analyzed images are very different from the images on which the semantic segmentation was trained. For example in the \\u201cbedroom\\u201d scene category, if a unit is labeled as correlating with \\u2018swimming pool\\u2019 this may be due to a poorly performing GAN model. We have partly addressed this issue by measuring the average realism of each unit using the FID metric. In practice, in Figure 16, we show the effect of such a filter in which we only report \\u201crealistic\\u201d and interpretable units. 
Details of such an approach have been added to section S-6.3.\\n\\nAs more accurate and robust segmentation models are developed, we expect our method to be able to identify more semantic concepts inside a representation.\", \"q6\": \"Is there a way to apply the framework to the training process of GANs?\", \"a6\": \"By using a per-unit realism score based on the FID metric on generator units learned by the GAN, we can identify units that should be zeroed to improve the realism of the GAN output. (We assign a realism score to each unit by measuring FID for a subset of images that highly activate the unit.) Zeroing the units with the highest FID score as measured this way will improve the quality of the output nearly as well as ablating units identified manually. This modification could be incorporated into an automatic training process. S-6.1 has further details and a preliminary evaluation of this idea for introducing per-unit analysis in an automatic process. A full development of this idea is left to future work.\\n\\nDissection can also be used to monitor the progress of training by quantifying the emergence, diversity, and quality of semantic units. For example, in Figure 18 we show dissections of layer4 representations of a Progressive GAN model trained on bedrooms, captured at a sequence of checkpoints during training. As training proceeds, the number of units matching objects (and the number of object classes with matching units) increases, and the quality of object detectors as measured by average IoU over units increases. During this successful training, dissection suggests that the model is learning the structure of a bedroom, because increasingly units converge to meaningful bedroom concepts. We add this analysis to section S-6.6.\"}", "{\"title\": \"Answers to questions for AnonReviewer1\", \"comment\": \"Thank you for your comments and questions; we have incorporated your suggestions in the revision, and we also answer your questions below.\", \"q1\": \"Theoretical interpretation of the visualization, and comparisons to the Class Activation Maps (CAM)?\", \"a1\": \"Our visualization is very simple and corresponds to equation (2): we upsample a single channel of the activation featuremap and show the region exceeding a threshold: unlike CAM, no gradients are considered. The threshold used is chosen to maximize relative mutual information with the best-matching object class based on semantic segmentation, however, a fixed threshold such as a top 1% quantile level would look very similar.\", \"it_is_also_informative_to_consider_a_cam_like_visualization_of_the_causal_impact_of_interventions_in_the_model_on_later_layers\": \"we can create a heatmap where each pixel shows the magnitude of the last featuremap layer change that results when making an intervention at each pixel in an early layer. The result is shown in Figure 17 of supplementary materials S-6.4: this visualization shows that the effects of an intervention at different locations are not uniform. 
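A minimal sketch of how such an intervention-effect heatmap could be computed follows. It assumes the caller has split the generator at the probed layer into two halves, `G_early` and `G_late`, and the constant `c` and the unit set are illustrative rather than the paper's exact settings.

```python
import torch

def intervention_heatmap(G_early, G_late, z, units, c=10.0):
    """Force a set of units on at one spatial location at a time and record how
    strongly the final featuremap changes.  G_early/G_late are assumed to be the
    two halves of the generator, split by the caller at the probed layer."""
    with torch.no_grad():
        r = G_early(z)                                   # (1, C, h, w) featuremap
        base = G_late(r)
        heat = torch.zeros(r.shape[2], r.shape[3])
        for i in range(r.shape[2]):
            for j in range(r.shape[3]):
                r_int = r.clone()
                r_int[:, units, i, j] = c                # positive intervention
                diff = G_late(r_int) - base
                heat[i, j] = diff.pow(2).mean().sqrt()   # RMS change downstream
    return heat
```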
The heatmap pattern reveals the structure of the model\\u2019s sensitivity to a specific concept at various locations.\", \"q2\": \"How is the rate of finding the correct sets of units for a particular visual class?\", \"a2\": \"Our method provides a correct label for 96% of interpretable units, as measured by the following human evaluation, which we have added to supplementary materials, section S-6.2.\\n\\nFor each of 512 units of layer 4 of a \\\"living room\\\" progressive GAN, 5-9 human labels are collected (3728 labels total), where the AMT worker is asked to provide one or two words describing the highlighted patches in a set of top-activating images for a unit. Of the 512 units, 201 units were described by a consistent word (such as \\\"sofa\\\", \\\"fireplace\\\" or \\\"wicker\\\") that was supplied by 50% or more of the human labels.\\n\\nApplying our segmentation-based dissection method, 154/201 of these units are also labeled with a confident label with IoU > 0.05 by dissection. In most of the cases (104/154), the segmentation-based method gave the same label word as the human labelers, and most others are slight shifts in specificity (e.g. segmentation says \\\"ottoman\\\" or \\\"curtain\\\" or \\\"painting\\\" when a person says \\\"sofa\\\" or \\\"window\\\" or \\\"picture\\\"). A second AMT evaluation was done to rate the accuracy of both segmentation-derived and human-derived labels. Human-derived labels scored 100% (i.e., of the 201 human-labeled units, all of the labels were rated to be accurate by most raters). Of the 154 of our segmentation-generated labels, 149 (96%) were rated as accurate by most AMT raters as well.\\n\\nThe five failure cases (where the segmentation is confident but rated as inaccurate by humans) arise from situations in which human evaluators saw one pattern from seeing only 20 top-activating images, while the algorithm, in evaluating 1000 images, counted a different concept as dominant. (E.g., in one example shown in Figure 14a, there are only a few ceilings highlighted and mostly sofas, whereas in the larger 1000-image set, mostly ceilings are triggered.)\\n\\nThere were also 47/201 cases where the segmenter was not confident while humans had consensus. Some of these are due to missing concepts in the segmenter. For example, several units are devoted to letterboxing (white stripes at the top and bottom of images), and the segmentation had no confident label to assign to these (Figure 14b).\\n\\nWe expect that as semantic segmentations improve to be able to identify more concepts such as abstract shapes, more of these units can be automatically identified.\"}", "{\"title\": \"Summary of changes to the manuscript\", \"comment\": [\"We thank all the reviewers for their helpful comments. We are glad that they found the topic important, the idea new, and the visualization results convincing. We have addressed individual questions raised by the reviewers in separate posts. Below we summarize the major changes in this revision.\", \"In supplementary material S-6.1, we show an automatic evaluation of per-unit realism that can be done using FID measurements, and we show that zeroing these units improves the quality of the output. We have also corrected our FID computation by eliminating JPEG artifacts in our evaluation pipeline and recomputed FID comparisons in Table 1. (R2Q3, R2Q6)\", \"In S-6.2, we conduct a human evaluation of dissection label accuracy for interpretable units. 
(R1Q2, R2Q5)\", \"In S-6.3, we show how unit realism can be used to filter the results to protect the segmenter against unrealistic images that can be produced by some GAN models. (R2Q5, R3Q7)\", \"In S-6.4, we provide details of our method for optimizing causal units. To eliminate a hyperparameter, we have defined the large constant \\u201cc\\u201d used for positive interventions to be a mean conditioned on the target class, rather than an unconditional 99 percentile value. Figures 4, 9, 10, and 11 have been updated with results based on this adjustment. (R3Q10)\", \"In S-6.5, we have traced the effects of interventions through downstream layers and show how a CAM-like heatmap can be used to visualize these effects. (R1Q1)\", \"In S-6.6, we show how dissection can be used to monitor the emergence of unit semantics during the training epochs of a GAN. (R2Q6)\", \"We have fixed minor typos and grammar errors (R2Q4, R3Q12)\", \"We have clarified the method for manually identifying artifact units (R2Q3)\", \"We have clarified the method for identifying causal sets of units described in equations 5 and 6 (R3Q8,9,10)\", \"We have clarified the definition of SWD and FID and added citations (R3Q11)\"]}", "{\"title\": \"New methods for interpreting GANs, with nice practical contribution for improving GANs outputs.\", \"review\": \"The paper proposes a method for visualizing and understanding GANs representation. This seems an important topic as several such methods were performed for networks trained in supervised learning, which relate\\nto the predicted outcome, but there is lack of methods for interpreting GANs which are learned in an unsupervised manner and it is generally unclear what is the representation learned by GANs. \\nThe method is finding correlations between the appearance of objects and the activation of units in each layer of the learned network. \\nIn addition, the paper presents a 'causal' measure, where a causal effect of a unit is measured by removing and adding this unit from/to the network and computing the average effect on object appearance.\\nThe authors demonstrate how the methods are applied by improving the appearance of images, by modifying units which were detected as important for specific objects. \\nThe authors also provide an interactive interface where users can manually examine and modify their trained GANs in order to add/remove objects and to remove artifacts. \\n\\nThe method proposed by the authors seem to be appropriate for convolutional neural networks, where 'units' in each layer may correspond to objects and can be searched for in particular locations of image. \\nIt is not clear to me if and how one can apply the author's methods to other architecture, and to other application domains (besides images), or whether the method is limited to vision applications. \\nThe authors do not explain specifically how do they choose the 'units' for which they seek interpretation when reporting their results. It is written that each layer is divided into two sets: \\nu and u-bar, where we seek interpretation of u. But how large does u tend to be? how would one choose it? is it one filter out of all filters in a certain layer? when optimizing for sets of units together\\n(using the alpha probabilities and the optimization in eq. 6) what is d? is it performed for all units in a single layer? more details would be useful here. \\n\\nThe paper is overall clearly written, with lots of visual examples demonstrating the methods presented in it. 
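A rough sketch of the removal/insertion measurement described in this review is below; every callable name is an assumed interface for illustration, not the released tool's API, and the insertion constant is a placeholder.

```python
import torch

def average_causal_effect(generate, segment, zs, units, target_cls, c=10.0):
    """Removing vs. inserting a unit set and averaging the effect on how much of
    the output the segmenter labels as the target class.  generate(z, edit) is
    assumed to apply `edit` to the probed layer's featuremap before finishing
    the forward pass; segment(img) is assumed to return per-pixel class ids.
    `units` is a LongTensor of channel indices."""
    ablate = lambda r: r.index_fill(1, units, 0.0)       # unit removal
    insert = lambda r: r.index_fill(1, units, c)         # unit insertion
    delta = 0.0
    for z in zs:
        on  = (segment(generate(z, insert)) == target_cls).float().mean()
        off = (segment(generate(z, ablate)) == target_cls).float().mean()
        delta += float(on - off)
    return delta / len(zs)
```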
\\nThe paper presents a new methodological idea, which allows for nice practical contribution. There is no theoretical contribution or any deep analysis. \\nThere is no reference in the paper to the supp. info. figures and therefore it is not clear if and how the supp. info. adds valuable information to the reader. \\nThe authors use scores like SWD and FIT for performance, but give no explanations for what do these scores measure.\", \"minor\": \"\", \"abstract\": \"immprovements -> improvements\\n\\nPage 6, middle: 'train on four LSUN' -> 'trained on four LSUN'\\n\\nPage 7, bottom: Fig. 14a and 14b should be Fig. 8a and 8b\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting idea to visualize and explain the representation of GANs and to provide a new potential way to further improve the quality of the generated images by GANs\", \"review\": \"## Summary\\nThis work proposes a novel analytic framework exploited on a semantic segmentation model to visualize GANs at unit (feature map) level. The authors show that some GAN representations can be interpreted, correlate with the parsing result from the semantic segmentation model but as variables that have a causal effect on the synthesis of semantic objects in the output. This framework could allow to detect and remove the artifacts to improve the quality of the generated images.\\n\\nThe paper is well-written and organized. The dissection and intervention for finding relationships between representation units and objects are simple, straightforward and meaningful. The visualizations are convincing and insightful. I recommend to accept the paper.\\n\\n## Detail comments\\nAbout diagnosing and improving GANs, please give more details of the human annotation for the artifacts.\\n\\nI think there is a typo in the first and second paragraphs in section 4.3, Figure 14 -> Figure 8. \\n\\nThe whole framework is based on a semantic segmentation model. The model is highly possibly imperfect and could have very different performances on different objects. Have you ever considerate to handle these imperfect models?\\n\\nIs there a way to apply the framework to the training process of GANs?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"This paper reveals the essence of GAN through experiments.\", \"review\": \"This paper provides a visualization framework to understand the generative neural network in GAN models. To achieve this, they first find a group of interpretable units and then quantify the causal effect of interpretable units. Finally, the contextual relationship between these units and their surrounding is examined by inserting the discovered object concepts into new images. Extensive experiments are presented and a video is provided.\\n\\nOverall, I think this paper is very valuable and well-written. The experiments clearly show the questions proposed in the introduction are answered. Two concerns are as follows.\", \"cons\": \"1) The visualization seems to be very heuristic. What I want to know is the theoretical interpretation of the visualization. For example, the Class Activation Maps (CAM) can be directly calculated by the output values of softmax function. 
Can a comparable class-level interpretation be computed for the visual classes of the generative neural networks?\\n2) I am also very curious: what is the success rate of finding the correct sets of units for a particular visual class?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
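The correlation-based dissection step described in the record above can be sketched as follows, assuming one already has per-unit featuremaps and boolean concept masks from a semantic segmenter. The tensor names are ours, and the per-unit threshold here uses a fixed top-quantile level rather than the information-theoretic choice the paper actually makes.

```python
import torch
import torch.nn.functional as F

def dissect_layer(acts, seg, iou_thresh=0.05, q=0.99):
    """Correlation-based dissection sketch for one generator layer.
    acts: (N, U, h, w) unit activations;  seg: (N, K, H, W) boolean concept masks.
    Returns each unit whose best-matching concept exceeds the IoU threshold."""
    N, U, _, _ = acts.shape
    _, K, H, W = seg.shape
    up = F.interpolate(acts, size=(H, W), mode='bilinear', align_corners=False)
    thr = torch.quantile(up.permute(1, 0, 2, 3).reshape(U, -1), q, dim=1)
    on = up > thr.view(1, U, 1, 1)                              # (N, U, H, W)
    # Broadcasting to (N, U, K, H, W) is wasteful but fine for a small sketch.
    inter = (on.unsqueeze(2) & seg.bool().unsqueeze(1)).sum((0, 3, 4)).float()
    union = (on.unsqueeze(2) | seg.bool().unsqueeze(1)).sum((0, 3, 4)).float()
    iou = inter / union.clamp(min=1)                            # (U, K)
    best_iou, best_cls = iou.max(dim=1)
    return [(u, int(best_cls[u]), float(best_iou[u]))
            for u in range(U) if best_iou[u] > iou_thresh]
```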
S1lvm305YQ
TimbreTron: A WaveNet(CycleGAN(CQT(Audio))) Pipeline for Musical Timbre Transfer
[ "Sicong Huang", "Qiyang Li", "Cem Anil", "Xuchan Bao", "Sageev Oore", "Roger B. Grosse" ]
In this work, we address the problem of musical timbre transfer, where the goal is to manipulate the timbre of a sound sample from one instrument to match another instrument while preserving other musical content, such as pitch, rhythm, and loudness. In principle, one could apply image-based style transfer techniques to a time-frequency representation of an audio signal, but this depends on having a representation that allows independent manipulation of timbre as well as high-quality waveform generation. We introduce TimbreTron, a method for musical timbre transfer which applies “image” domain style transfer to a time-frequency representation of the audio signal, and then produces a high-quality waveform using a conditional WaveNet synthesizer. We show that the Constant Q Transform (CQT) representation is particularly well-suited to convolutional architectures due to its approximate pitch equivariance. Based on human perceptual evaluations, we confirmed that TimbreTron recognizably transferred the timbre while otherwise preserving the musical content, for both monophonic and polyphonic samples. We made an accompanying demo video here: https://www.cs.toronto.edu/~huang/TimbreTron/index.html which we strongly encourage you to watch before reading the paper.
[ "Generative models", "Timbre Transfer", "Wavenet", "CycleGAN" ]
https://openreview.net/pdf?id=S1lvm305YQ
https://openreview.net/forum?id=S1lvm305YQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "PO9AR1F-sP", "rJlTzPwIZV", "HJe2eAlSZN", "HJg6f75xe4", "HyxuHufle4", "B1eVK21xxV", "S1gy7d6pA7", "HylL4LaKCX", "Bkx0mGhxRX", "rJeIRWhgAX", "H1eiwWnxRm", "H1x8JZ2lA7", "Skx38gheCm", "H1g0p_-ChX", "Skx4CYgj3X", "r1gf_Og53X", "SJgsgqXK2X", "HyeAAbiq57", "HkgRLhKU97" ], "note_type": [ "comment", "official_comment", "comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "comment" ], "note_created": [ 1587116382306, 1546184469438, 1546092020345, 1544753940655, 1544722495746, 1544711291878, 1543522326646, 1543259694369, 1542664742513, 1542664654264, 1542664546861, 1542664413987, 1542664275816, 1541441734073, 1541241292503, 1541175402362, 1541122546586, 1539121621564, 1538853974014 ], "note_signatures": [ [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "~Rahul_Bhalley1" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "ICLR.cc/2019/Conference/Paper1370/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1370/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1370/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "ICLR.cc/2019/Conference/Paper1370/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1370/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1370/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "ICLR.cc/2019/Conference/Paper1370/Authors" ], [ "~Keunwoo_Choi1" ] ], "structured_content_str": [ "{\"comment\": \"> As for the repo, we are currently working on cleaning up the code and will release once we finish.\\n\\nThough it's been almost a year and a half since you wrote in the rebuttal, the source code has still not uploaded yet in the repository specified in the paper.\\nConsidering that the content of the rebuttal can be a factor to be accepted, I think you must be responsible for what you wrote in your rebuttal as a member of the scientific community.\", \"title\": \"When will you release the code?\"}", "{\"title\": \"Thanks for your interest and the comment!\", \"comment\": \"Hi Rahul,\\n\\nThanks for your interest in our work and thanks for the comment! \\n\\nTheoretically there is no size constraint in the generator network as it\\u2019s fully convolutional. However practically because the generator was initially written in TensorFlow, we initially specified the input placeholder with size [1, 257, 251] (Note that it\\u2019s no longer 256 as in the image case) and that was the \\u201csize constraint\\u201d. We removed it (so that it can take in arbitrary size during test time) to address the volume jump issue. This will be further clarified in the appendix C.6 of the camera-ready version. \\n\\nAnd yes it was due to limited computation resources. We did not experiment with larger batch size. \\n\\nPlease let us know if you have any further question, thanks!\"}", "{\"comment\": \"Congratulations for good research work! But it misses some technical details.\\n\\nFirstly in appendix C.6, there is no information given about how the size constraint in generator network was removed for processing arbitrary length inputs. 
Preserving the musical length to at most 2 minutes (due to GPU memory constraint, as written in paper) will be a long 7680 x 256 image whereas the generator is designed to process 256 x 256 image inputs (in accordance to original CycleGAN paper details). The explicit details about how the 7680 x 256 image is fed to the generator in one-shot must be provided. \\n\\nSecondly there is no discussion given about setting up the batch size equal to 1 i.e.:\\n1. Is it due to limited computation resources like CycleGAN research https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/198\\n2. Does higher batches not result in better quality of timbre style transfer?\\n3. Is it just a matter of choice?\", \"title\": \"Incomplete information about one-shot generation of long musical pieces\"}", "{\"title\": \"Response to R2. Thanks for the comments!\", \"comment\": [\"Thanks for your comments!\", \"Sorry about the broken YouTube link, that link stopped working and please use this link instead: https://youtu.be/2ypcAZRYZJg We checked that this one is working.\", \"We agree with your point that, at least in certain contexts, a research system should be better than commercially available systems, or at least be on par. However our work is about Timbre transfer, which doesn\\u2019t have an existing commercially available system yet. The pitch shift and time stretch experiment were just included to demonstrate that we can get those side benefits for free by training TimbreTron on CQT representation; it was not our primary goal, and was not allocated the majority of space in the paper as well. We agree that if the sole purpose of our system had been to do pitch shifting or time stretching for music, then existing commercial tools would have been an important baseline to consider.\", \"As for the audio quality, one way we chose to address the subjective nature of the question was by conducting AMT human study. Based on our human study results there was strong evidence that TimbreTron is indeed able to transfer Timbre recognizably while preserving other musical content.\"]}", "{\"title\": \"the new examples confirm to me that the approach is not ready\", \"comment\": \"Thanks for the detailed response and extra effort to address some of my points. However, listening to the new examples, I have come to an even stronger opinion that the proposed method is not ready. From the samples, it is clear that WaveNet and the mapping method both introduce artefacts that cause unacceptable loss of quality. Yes, when mapping flute to piano it definitely does have characteristics of piano, but at the cost of extreme distortions that are far away from being useful for any purpose at this point in time.\\n\\nI do believe that this is an interesting idea and an interesting line of research, but I also continue to believe that the results are not good enough at this point in time.\", \"some_details\": \"> \\\"We demonstrate that the aforementioned translation does indeed result in a perceivable pitch shift when fed to our conditional WaveNet.\\\" I believe my original comment is still correct, \\\"The real question here is whether the reconstructed signal is of the same quality as, for example, a simple PSOLA-based pitch change would be.\\\"\\n\\nI believe that a research system should be better than commercially available systems, or at least be on par. Please check out this Youtube video that uses Audacity to time-stretch an audio clip: https://www.youtube.com/watch?v=SjVY2Fs8-24. 
Although this is time-stretching and not pitch-changing, obviously pitch-shifting can be implemented by time-stretching followed by playing back at a different sampling rate. Their example time-stretches a pop song by 30%. Please play back from 3:50. With concentration, I can hear some minor artefacts, but by and large, it sounds good and not unpleasant at all.\\n\\nCompare this to the pitch change examples \\\"Mozart up/down3.wav\\\", which has strong distortions which are very unpleasant. Using WaveNet to reconstruct *originals* seems to work, by and large, except for quite some additional noise, but even the basic task of pitch-shifting already shows strong artefacts.\\n\\nThis shows me that WaveNet synthesis itself is inadequate in its current form.\\n\\n> \\\"We did some PSOLA experiments for baseline to address your concern\\\"\\n\\nThe shifted_bumble.wav example in https://onedrive.live.com/?authkey=%21ACqUS3QXHSdcSW8&id=9AD8937254DEBD90%2120331&cid=9AD8937254DEBD90\\n\\nis full of crackling artefacts. I have never heard such artefacts for any re-pitching algorithm. I think you did not do the windowing right.\\n\\n> \\\"as well as our final demo video(https://www.youtube.com/watch?v=aT4D4mTITko)\\\"\\n\\nYoutube says \\\"This video is unavailable.\\\" Did Youtube kill it due to copyright claims? Then please post the video on the OneDrive as well.\\n\\n> \\\"we also included some samples of Violin -> Piano(Sample 19) and Harpsichord -> Piano(Samples 16 and 18) and Piano -> Flute(sample 17)\\\"\\n\\nThis example shows strong artefacts in that it synthesizers frequencies below the original pitch. It sounds like a base guitar playing unisono with the main melody (and sometimes a different note). Does this come from WaveNet or the mapping method? Either way, my takeaway is, again, that the method is not ready.\\n\\n> \\\"For the poor audio quality in the source material, could you point us to some specific samples?\\\"\\n\\nThis is an example which is very noisy, has some strange volume dip at the very beginning that is impossible to produce with a piano, and a drop-out at 3.5 seconds: https://onedrive.live.com/?authkey=%21ACqUS3QXHSdcSW8&id=9AD8937254DEBD90%2120329&cid=9AD8937254DEBD90\\n\\nIt is, however, possible that the poor quality I perceived is an byproduct of OneDrive's audio-playback interface, which may send compressed audio when playing uncompressed WAV files. So I concede this point.\"}", "{\"metareview\": \"Strengths: This paper is \\\"thorough and well written\\\", exploring the timbre transfer problem in a novel way. There is a video accompanying the work and some reviewers assessed the quality of the results as being good relative to other approaches. Two of the reviewers were quite positive about the work.\", \"weaknesses\": [\"Reviewer 2 (the lowest scoring reviewer) felt that the paper was a little too far from solving the problem to be of high significance and that there was:\", \"too much focus on STFT vs. CQT\", \"too little focus on getting WaveNet synthesis right\", \"too limited experimental validation (too restricted choice of instruments)\", \"poor resulting audio quality\", \"feels too much of combining black boxes\", \"AMT listening tests were performed, but better baselines could have been used.\", \"The author response addressed some of these points.\"], \"contention\": \"An anonymous commenter noted that the revised manuscript added some names in the acknowledgements, thereby violating double blind review guidelines. 
However, the aggregated initial scores for this work were past the threshold for acceptance. Reviewer 2 was the most critical of the work but did not engage in dialog or comment on the author response.\", \"consensus\": \"The two positive reviewers felt that this work is worth of presentation at ICLR. The AC recommends accept as poster unless the PC feel the issue of names in the Acknowledgements in an updated draft is too serious of an issue.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting paper, has some issues, but would be of interest to the community\"}", "{\"title\": \"I agree with R3\", \"comment\": \"I'm standing by my initial assessment. This paper is thorough and well written, and as R3 says, it makes significant progress on this problem even if it's not \\\"solved\\\".\"}", "{\"title\": \"WaveNet ablation study added\", \"comment\": \"We added in Appendix E the ablation study focusing on the WaveNet component of TimbreTron. It showed the impact of each of our modification (reverse generation, data augmentation and beam search) on the output quality of WaveNet. The corresponding audio samples can be found here: https://1drv.ms/f/s!ApC93lRyk9iagZ8qP0IlxLXZkbO-iA\"}", "{\"title\": \"General updates to reviewers\", \"comment\": \"1.\\tWe included more samples that were generated by TimbreTron but without any beam search here: https://1drv.ms/f/s!ApC93lRyk9iagZ5M91jko9nSAiYMiA\\n\\n2.\\tWe included samples reconstructed by WaveNet that were not transferred by CycleGAN in order to show that without phase information, our WaveNet can reconstruct waveform from CQT pretty well (i.e, WaveNet(CQT(source audio))). The samples can be found here:https://1drv.ms/f/s!ApC93lRyk9iagZ5NBNV_ERqOJFK_3w\\n\\n3.\\tWe are in progress of making a project page so that all audio samples will be more organized in the camera-ready version.\\n\\n4.\\tWe added the original user interface of our AMT experiments and they can be found here:https://1drv.ms/f/s!ApC93lRyk9iagZ0ndQIYdJBAqYDlDA\\n\\n5. We added in Appendix E the ablation study focusing on the WaveNet component of TimbreTron. It showed the impact of each of our modification (reverse generation, data augmentation and beam search) on the output quality of WaveNet. The corresponding audio samples can be found here: https://1drv.ms/f/s!ApC93lRyk9iagZ8qP0IlxLXZkbO-iA\"}", "{\"title\": \"Response to R1. Thank you for your review, which allowed us to improve our work\", \"comment\": \"Thanks for catching the typos! They are now fixed. GP means Gradient Penalty; sorry for not writing down the abbreviation in the first place, it is now fixed.\\n\\nAs for quantitative measurement of phase retrieval of the decoder component, we will include qualitative comparison of rainbowgram(which encodes phase information with color) of the source audio and the wavenet reconstruction (wavenet(CQT(source audio)) and you can find the samples here:https://1drv.ms/f/s!ApC93lRyk9iagZ5NBNV_ERqOJFK_3w. The reason why we don\\u2019t think in our particular case that quantitative measurement can provide much insight in this regard is because phase retrieval is neither the focus nor a sub-goal of our work: so if, for example, our system produces an output that's shifted by some number of time steps, it would be a perfectly good output, even though the phases are all completely wrong. 
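For readers who want to reproduce this kind of qualitative check, a small sketch using librosa is below. The hop of 256 samples matches the 16 ms frame hop at 16 kHz quoted elsewhere in the responses, while the bin counts are illustrative rather than the paper's exact filterbank, and the distance function is included only to echo the caveat about time shifts.

```python
import numpy as np
import librosa

def log_cqt(path, sr=16000, hop_length=256, n_bins=168, bins_per_octave=24):
    # 256 samples = 16 ms at 16 kHz; bin counts are illustrative assumptions.
    y, _ = librosa.load(path, sr=sr)
    C = librosa.cqt(y, sr=sr, hop_length=hop_length,
                    n_bins=n_bins, bins_per_octave=bins_per_octave)
    return np.log1p(np.abs(C))            # magnitude only; phase is discarded

def magnitude_gap(orig_path, recon_path):
    # Crude CQT-magnitude distance between a clip and its WaveNet reconstruction.
    # A time-shifted but perfectly good reconstruction would score badly here,
    # which is why such numbers are treated with caution in the response above.
    a, b = log_cqt(orig_path), log_cqt(recon_path)
    T = min(a.shape[1], b.shape[1])
    return float(np.mean(np.abs(a[:, :T] - b[:, :T])))
```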
\\n\\nWe apologize for the complexity of listening to samples via OneDrive\\u2026 In testing it anonymously in advance, we did not have any of these errors, so we did not anticipate such a problem. We are glad you were able to access the youtube link, and we are in the progress of making a project page for our camera ready version, and by then it should be easier to go through all the samples.\"}", "{\"title\": \"Response to R3. Thank you for your review, which allowed us to improve our work\", \"comment\": \"We would like to thank Reviewer3 for the helpful and high quality review! It has enabled us to add some missing information to our paper and improve the writing quality.\\n\\nAs for the repo, we are currently working on cleaning up the code and will release once we finish. As for the baseline comparison of our work, we did include a detailed ablation study comparing our final model with various baseline and \\u201cpartial\\u201d models(subtracting a modification each time)\\n\\nThanks for bringing up three papers that are related to ours. We indeed cited work by Verma. et al in our original paper. We now cite work done by Dai. et al and Bitton. et al in the updated paper. Thanks! \\n\\nAs for the detailed information on the number of AMT workers (30 workers per questionnaire, but number of questions per questionnaire varies, for example, we asked 8 question per worker for comparing STFT and CQT experiment, thus in total we have 240 data points. See more details in the paper), the size of the CQT frame/hop over which they are summarized(16ms frame hop (256 time steps under 16kHz)), and the set of instruments(Piano, Violin, Flute, Harpsichord) that are being used in the experiments, they are updated in the main body of the paper.\\n\\nSection 6.3, sentence 2 was reworked.\\n\\nAttack is now explained in section 3.2 as the onset characteristics of an instrument in which it reaches a large amplitude quickly. \\n\\nThanks for catching the \\u201cthanks\\u201d typo. It\\u2019s now fixed.\\n\\nThanks for bringing up the percentage being meaningless, it\\u2019s now removed. \\n\\nAgain, thanks for your very constructive comments which allowed us to improve our paper!\"}", "{\"title\": \"Continued Response to R2\", \"comment\": \"...\\n\\n6.\\tFor the poor audio quality in the source material, could you point us to some specific samples? We used 16kHz as our raw source audio and we chose that because even in 2018, the significant majority of audio-related ML research is done with 16 kHz audio, (For example, in recent research on audio superresolution, https://kuleshov.github.io/audio-super-res/ , the authors describe their 16kHz samples as their high-quality dataset) and perceptually it was clear enough to tell details of timbre. Furthermore, the focus of this research is to get timbre transferred correctly while preserving other musical content. Audio quality is part of musical content and is not within the problem we\\u2019re trying to solve. For example, we are not trying to do super resolution on audio, and I think it\\u2019s fair to say 16 kHz is the standard resolution, corresponding to the 256x256 resolution used in preprocessed ImageNet in CV research. So far we haven\\u2019t found any sample that\\u2019s 11kHz as you suggested, but we\\u2019ll make sure all source audio in the updated paper will be the standard 16kHz resolution. 
\\n\\n7.\\tDue to potential copyright issue, we didn\\u2019t find enough samples in other instrument online that we are absolutely certain won\\u2019t have any copyright issue, so we recorded our own samples, thus didn\\u2019t have a lot of them. \\n\\n8.\\tAs for the beam search, we are aware that beam search should not be a key component on which the entire model relies in order to work (and if that were the case, then we agree that that would be problematic). However, the system does not rely on it: the improvement is only subtle and our results are still acceptable without it. As shown in the original paper, the AMT results we\\u2019ve shown are actually done without beam search. We only included it in the paper because it can still marginally improve the generated audio quality. We added some samples generated by TimbreTron without any beam search and they can be found here: https://1drv.ms/f/s!ApC93lRyk9iagZ5M91jko9nSAiYMiA\\n\\n9.\\tDue to the commercial success of WaveNet, well-resourced company labs are already working hard on improving it, and we look forward to being able to integrate such improvements into our pipeline. Hence, improving WaveNet itself wasn\\u2019t a major focus of our work. Instead, we gave two simple methods which adapted WaveNet to our task and pipeline -- beam search and reverse generation -- and we expect these tricks would apply equally well to future improved WaveNet-like models. We added some samples that were reconstructed by WaveNet from source audio with and without beam search and they can be found here: https://1drv.ms/f/s!ApC93lRyk9iagZ5NBNV_ERqOJFK_3w. We are in progress of adding more detailed ablation study on the wavenet component that, hopefully, will show the impact of reverse generation, data augmentation and beam search. \\n\\n10.\\tWe are in progress of making a project page so that all audio samples can be more organized in the camera ready version.\\n\\nAgain, we sincerely appreciate such a detailed and insightful critique of our work. Please let us know of any other changes that would improve our work. Thanks!\"}", "{\"title\": \"Response to R2. Thank you for your review, which allowed us to improve our work\", \"comment\": \"We would like to thank Reviewer2 for the insightful, high quality and clear review! It has allowed us to greatly improve our work and our paper. We hope the revised draft of the paper with the clarifications and improvements made below serve to increase your rating of our work.\\n \\n1.\\tIn choosing a CQT rather than STFT representation, we agree that, devoid of any other context, such a choice might indeed not be contentious, especially from a signal processing perspective. But the context here is that we are not just using this for discrimination or analysis purposes, but optimizing reconstruction is also essential: since Griffin-Lim is not possible for CQT, then in this case, switching from STFT to CQT means that we need to fundamentally change the pipeline. Furthermore, STFT and Grifflin-Lim is still widely used in state of the art speech and music research, for example in Tacotron 2 by Shen et al.(2018) and Deep Voice 3 by Ping et al.(2018). Hence, we needed to demonstrate that the improvement from CQT in this context is significant enough to justify the added complexity of the WaveNet synthesizer.\\n\\n2.\\tIn advance of our experiments, it would certainly be a reasonable conjecture that the spectrogram-shifting trick would work for pitch shifting. 
But we don\\u2019t think this was so \\u201cobvious\\u201d as to not require experimental confirmation.\\n\\n3.\\tThank you for suggesting the comparison of our pitch shifted samples with PSOLA-based pitch change. We did some PSOLA experiments for baseline to address your concern. The results can be found here: https://1drv.ms/f/s!ApC93lRyk9iagZ5pKpRLdBcdJ1xJbw As you can see, PSOLA performs well on simple samples, like notes_shifted.wav However, it completely fails on more complicated monophonic samples. We think that\\u2019s due to the fact that for PSOLA to work well, fundamental frequency needs to be precisely detected, which itself is not a trivial task for a complex monophonic musical piece. We haven\\u2019t put this in the paper yet because we are not sure this is a proper baseline and we are still working on better baseline comparison, and will add the comparison on to paper later. \\n\\n4.\\tWe did in fact run more than two instrument pairs, and we encourage you to take a look at the example outputs we provided in the OneDrive folder, as well as our final demo video(https://www.youtube.com/watch?v=aT4D4mTITko). In the OneDrive folder Section 7 (which corresponds to the Section 7. Conclusion of the paper), we also included Violin to Flute and Piano to Violin in our original submission. (The main obstacle to reporting more instrument pairs was simply the computational cost of training additional CycleGANs; our computational resources were much more limited than some prominent groups publishing on deep learning for music.) The idea of this work is not to develop a commercial level product for artists, but rather we are only aiming at empirically verifying that our methodology is a practical approach to the problem of Timbre Transfer, a proof of concept. But we agree that the more instrument pairs the better, thus in this folder: https://1drv.ms/f/s!ApC93lRyk9iagZ5M91jko9nSAiYMiA, we also included some samples of Violin -> Piano(Sample 19) and Harpsichord -> Piano(Samples 16 and 18) and Piano -> Flute(sample 17). The latter is a particularly interesting failure case where TimbreTron dreamed up all the \\u201cpiano vibrato\\u201d from the long notes in the source flute sample. We will gradually add more instruments and translation directions, including both successes and other failure cases as well.\\n\\n5.\\tThank you for bringing up ADSR and vibrato. These are indeed interesting cases where it\\u2019s ambiguous what the \\u201ccorrect\\u201d mapping is. Because TimbreTron does not simulate the physics of the instruments, it sometimes transfers effects that would be impossible or unusual for the target instrument. We highlighted two such examples in our demo video: a crescendo on a sustained piano note, and a string ensemble pausing to breathe. In the context of TimbreTron, we consider these to be interesting artifacts. If one wants to build a commercial system to produce convincing instrument samples, one would want to somehow remove these artifacts. (On the other hand, they might be beneficial, insofar as they enable a broader range of expression in the target instrument, as was the case for our generated harpsichord for the Moonlight Sonata.) \\n\\n(due to characters limit, more response will follow in the next post)\\n...\"}", "{\"title\": \"interesting idea, but weak experimental validation, and too much of combining black-boxes\", \"review\": \"The paper proposes a method for converting recordings of a specific musical instrument to another. 
The proposed approach is apply CycleGAN, which was developed for image style transfer, to transfer spectrograms. The synthesis is done using WaveNet.\\n\\nThe paper is interesting in the core idea. It demonstrates that this combination of building blocks can indeed map recordings while achieving certain characteristics of the target instrument.\\n\\nThe paper correctly describes \\\"timbre\\\" as a catch-all term for characterizing instruments besides pitch and volume. The success of the method should be judged along two dimensions, which are both very subjective:\\n - Does the method transfer \\\"enough\\\" of the target instrument's characteristics?\\n - Is the resulting audio quality sufficient?\\n\\nThe paper is easy to follow for someone with background in signal processing. I believe it is sufficiently easy to follow for readers with general computer-science and machine-learning background.\\n\\nThe paper focusses a lot on the choice of spectral representation. It compares short-term Fourier transform (STFT), which is the generic standard, and Constant-Q Transforms (CQT), a variant of STFT that uses a logarithmic frequency axis. To someone with signal-processing background, the choice of CQT seems logical and not something that would be challenged strongly as long as a simple comparison to STFT confirms that it works a bit better. I find the comparison between the two too dominant in the paper, and distracting from the other issues that I feel are more important (see below). For example, Section 6.2 states \\\"We demonstrate that the aforementioned translation does indeed result in a perceivable pitch shift when fed to our conditional WaveNet.\\\" But that is trivial: Since changing the playback sample rate by a few half steps does not fundamentally alter the perceived timbre of an instrument, and such a change will, by construction of CQT, shift the CQT representation, and since the WaveNet has seen examples of the source instrument for all notes, shifting the CQT respresentation must necessarily result in a perceived pitch change in the re-synthesized wveform that does not fundamentally change its timbre. The real question here is whether the reconstructed signal is of the same quality as, for example, a simple PSOLA-based pitch change would be.\\n\\nA larger problem of the paper is that the result section seems to only test two instrument mappings, violin to flute, and piano to harpsichord. One notes that these instrument pairs mostly differ in spectral envelope, while they are rather similar in longer-term temporal variations, such as what is sometimes characterized as the ADSR curve (attack-decay-sustain-release) and vibrato. These, in my view, are very important aspects of a musical instrument's characteristics, which are not addressed by the paper. (Whether they are considered part of \\\"timbre\\\" is not clear, but without mapping these, one cannot meaningfully speak of mapping instruments, which is the end goal of this paper.)\\n\\nAnother big problem in my view is that the audio quality is just not good. I hear a lot of musical-noise artifacts and local timbre modulations. Also it is not clear why the source material is of poor quality (sounds quite noisy, most likely in part due to mu-law 8-bit encoding, and they sound like 11 kHz recordings), for which there is no justification in 2018.\\n\\nLastly, I am not happy with the \\\"beam-search\\\" approach. That approach is used to post-correct imperfections in the WaveNet synthesis. 
It samples multiple generation hypotheses, and re-weights hypotheses by how well they match the original CQT when converted back. The need for this indicates a fundamental flaw in the WaveNet synthesizer. The authors explicitly say they did not want to fix the WaveNet algorithm itself. In my view, this is what should have been done.\\n\\nThe authors should focus much more on how to achieve sufficient WaveNet synthesis quality. This should be the main bulk of the paper, and would be a requirement for me to accept the paper.\\n\\nSo overall, the paper feels a little too much of combining black boxes.\\n\\nIn terms of significance, I would not think that this paper is getting near solving this problem, hence I rate it of less significance in the current state of results.\", \"pros\": [\"interesting idea\", \"reasonable approach by combining existing building blocks\"], \"cons\": [\"too much focus on STFT vs. CQT\", \"too little focus on getting WaveNet synthesis right\", \"too limited experimental validation (too restricted choice of instruments)\", \"poor resulting audio quality\", \"feels too much of combining black boxes\", \"As a result I rate the paper \\\"not good enough\\\" in its current form.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Timbre can be tranferred pretty well using a constant-Q transform for features, followed by a CycleGAN to do the transfer, followed by a Wavenet to resynthesize it to audio.\", \"review\": \"Main Idea: The authors use multiple techniques/tools to enable neural timbre transfer (converting music from one instrument to another, ex: violin to flute) without paired training examples. The authors are inspired by the success of CycleGANs for image style transfer, and by the success of Wavenet for generating realistic audio waveforms. Even without the CycleGAN, the use of CQT->WaveNet for time stretching and pitch shifting of a single piece is an interesting and valuable contribution.\", \"methodology\": \"Figure 1 captures the overall timbre-conversion methodology concisely. In general the details of the methodology look sound. The lengthy appendices offer additional implementation details, but without access to a source code repository, it is hard to say if the results are perfectly reproducible.\", \"experiment_and_results\": \"Measuring the quality of generated audio is challenging. To do so, subjective listening tests are conducted on Amazon mechanical turk, but without a comparison to a baseline system except for another performance of the target piece. Note that there are few published timbre-transfer methods (see Similar Work).\\n\\nOne issue with the AMT survey is that the total number of workers is not reported, and as such the significance of the results can be questioned.\", \"significance\": \"In my mind, the paper offers validation of the three techniques used. CycleGANs, originally designed for images, are shown to work for style transfers on audio spectrograms. Wavenet's claim to be a generic technique for audio generation is tested and validated for this domain (CQT spectrogram to audio). 
That CQT outperforms STFT on musical data seems to be a well established result already, but this offers further validation.\\n\\nThis paper also offers practical advice for adapting the techniques/tools (Wavenet, CycleGAN, CQT) to the timbre-transfer task.\", \"similar_work\": \"I have only found 2 papers dedicated to timbre transfer in the field of Learning Representations.\\n\\nBitton, Adrien, Philippe Esling, and Axel Chemla-Romeu-Santos. \\\"Modulated Variational auto-Encoders for many-to-many musical timbre transfer.\\\" arXiv preprint arXiv:1810.00222 (2018).\\n\\nwhich was published on sept 29th 2018, so less than 30 days ago, which is fine according to the reviewer guidelines.\\n\\n\\nVerma, Prateek, and Julius O. Smith. \\\"Neural style transfer for audio spectograms.\\\" arXiv preprint arXiv:1801.01589 (2018).\\n\\nwhich is a short 2 page exploratory paper.\", \"it_could_be_useful_to_cite\": \"Shuqi Dai, Zheng Zhang, Gus G. Xia. \\\"Music Style Transfer: A Position Paper.\\\" arXiv preprint arXiv:1803.06841 (2018)\\n\\n\\nWriting Quality\\n\\nOverall the paper is written well with clear sentences.\\n\\nCertain key information would be useful to move from the appendices to the main body of the paper. This includes the number of AMT workers, the size of the CQT frame/hop over which they are summarized, and the set of instruments that are being used in the experiments.\", \"some_minor_nitpicks\": \"section 6.3, sentence 2 needs to be reworked. ('After moving on to real world data, we noticed that real world data is harder to learn because compared to MIDI data it\\u2019s more irregular and more noisy, thus makes it a more challenging task.') \\n\\nsection 3.2 sub-section 'Reverse Generation', sentence 1 uses the word 'attacks' for the first time. Please explain this for those not familiar.\\n\\nsection 3.1, sentence 3 has a typo, 'Thanks' is wrongly capitalized.\\n\\ntable 1 (and other tables in appendix), 'Percentage' (top left) does not add anything to the table.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Compelling results on timbre transfer, backed up by human evaluation\", \"review\": \"Summary\\n-------\\nThis paper describes a model for musical timbre transfer.\\nThe proposed method uses constant-Q transform magnitudes as the input representation, transfers between domains (timbres) by a CycleGAN-like architecture, and resynthesizes the generated CQT representation by a modified WaveNET-like decoder. The system is evaluated by human (mechanical turk) listening studies, and the results indicate that the proposed system is effective for pitch and tempo transfer, as well as timbre adaptation.\\n\\n\\nHigh-level comments\\n-------------------\\n\\nThis paper is extremely well written, and the authors clearly have a great attention to detail in both the audio processing and machine learning domains. Each of the modifications to prior work was well motivated, and the ablation study at the end, while briefly presented, provides a good sense of the contributions of each piece.\\n\\nI was unable to listen to the examples provided by the link in section 6, which requires a Microsoft OneDrive login to access. However, the youtube link provided in the ICLR comments gave a reasonable sample of the results of the system. 
Overall, the outputs sound compelling, and match my expectations given the reported results of the listening studies.\\n\\nOn the quantitative side, it would have been nice to see a measurement of phase retrieval of the decoder component, which could be done in isolation from the transfer components by feeding in original CQT magnitudes. This might help give a sense of how well the model can be expected to perform, particular as it breaks down along target timbres. I would expect some timbres to be easier to model than others, and having a quantitative handle on that could help put the listener study in a bit more perspective.\\n\\nDetailed comments\\n-----------------\\n\\nThe paper contains numerous typos and grammatical quirks, e.g.:\\n - page 5: \\\"GP can stable GAN training\\\"\\n - page 7: \\\"CQT is equivalent to pitch\\\"\\n\\n\\nThe reverse-generation trick in section 3.2 was clever!\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Demo Video\", \"comment\": \"We made a demo video for this work, and we strongly encourage you to watch it as it will give you a general and intuitive idea about this work. You can find it here: https://youtu.be/2ypcAZRYZJg\"}", "{\"title\": \"Thanks for the comment!\", \"comment\": \"Hi Keunwoo, thanks for the comment!\\n\\nFor WaveNet, we used kernel size of 3 for all the dilated convolution layers and the initial causal convolution. The residual connections and the skip connections all have width of 256 for all the residual blocks. The initial causal convolution maps from a channel size of 1 to 256. The dilated convolutions map from a channel size of 256 to 512 before going through the gated activation unit. In addition, we also added a constant shift to the spectrogram before feeding it into the WaveNet as the local conditioning signal; this shift of +2 was chosen to achieve a mean of approximately zero. These details will be updated in the paper.\\n\\nIn terms of the discriminator, we adopted the original discriminator architecture from the original CycleGAN paper except that we ran it on the full signal rather than random patches, as was discussed in section 4. \\n\\nThe dataset we used for training both the CycleGAN and the wavenet consists of real world recordings that contain only a single timbre, collected from YouTube. We have not filtered the samples based on whether they are polyphonic or not - for instruments that support polyphony (such as piano and harpsichord), the majority of the recordings are polyphonic. You can see the full list of the recordings we used in our experiments in Appendix C.1 in the updated paper.\\nAs for the OneDrive link, there seems to be an issue with the OpenReview redirection. However if you copy the link and paste it in your browser then it should work.\"}", "{\"comment\": \"Thanks for the work. I'd expect more details of the system. For example, what's the kernel sizes and number of channels of the WaveNet? With only the number of layers and their dilation rates it's not possible to understand/reproduce the proposed system. Details of the discriminator should be provided as well.\\n\\nI'd also appreciate more details of the datasets, e.g. who are the composers, what kind of music it is (more than simply 'classical music'), etc. One important information would be if they are polyphonic. 
\\n\\nFinally, the OneDrive link does not work at the moment.\", \"title\": \"Some details of the system are missing\"}" ] }
rkMD73A5FX
Can I trust you more? Model-Agnostic Hierarchical Explanations
[ "Michael Tsang", "Youbang Sun", "Dongxu Ren", "Beibei Xin", "Yan Liu" ]
Interactions such as double negation in sentences and scene interactions in images are common forms of complex dependencies captured by state-of-the-art machine learning models. We propose Mahé, a novel approach to provide Model-Agnostic Hierarchical Explanations of how powerful machine learning models, such as deep neural networks, capture these interactions as either dependent on or free of the context of data instances. Specifically, Mahé provides context-dependent explanations by a novel local interpretation algorithm that effectively captures any-order interactions, and obtains context-free explanations through generalizing context-dependent interactions to explain global behaviors. Experimental results show that Mahé obtains improved local interaction interpretations over state-of-the-art methods and successfully provides explanations of interactions that are context-free.
[ "interpretability", "interactions", "context-dependent", "context-free" ]
https://openreview.net/pdf?id=rkMD73A5FX
https://openreview.net/forum?id=rkMD73A5FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1lRBJZHxE", "B1xF8StW14", "rJebRE2507", "HyxeaRB5AQ", "SJxzhNaPRQ", "r1ly9Li8Rm", "BylQzfHL0m", "H1g7jxSURQ", "H1e41g4PTm", "HkxEuTNzaX", "Syg0NUh6nm", "BkxG9ejcnm", "S1gB29f3sm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545043782061, 1543767377310, 1543320776589, 1543294647852, 1543128233803, 1543054983251, 1543029259340, 1543028890835, 1542041564419, 1541717355726, 1541420597964, 1541218442412, 1540266669347 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1369/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1369/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1369/Authors" ], [ "ICLR.cc/2019/Conference/Paper1369/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1369/Authors" ], [ "ICLR.cc/2019/Conference/Paper1369/Authors" ], [ "ICLR.cc/2019/Conference/Paper1369/Authors" ], [ "ICLR.cc/2019/Conference/Paper1369/Authors" ], [ "ICLR.cc/2019/Conference/Paper1369/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1369/Authors" ], [ "ICLR.cc/2019/Conference/Paper1369/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1369/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1369/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces Mahe, a model-agnostic hierarchical explanation technique, that constructs a hierarchy of explanations, from local, context-dependent ones (like LIME) to global, context-free ones. The reviewers found the proposed work to be a quite interesting application of the neural interaction detection (NID) framework, and overall found the results to be quite extensive and promising.\", \"the_reviewers_and_the_ac_note_the_following_as_the_primary_concerns_of_the_paper\": \"(1) a crucial concern with the proposed work is the clarity of writing in the paper, and (2) the proposed work is quite expensive, computationally, as the exhaustive search is needed over local interactions.\\n\\nThe reviewers appreciated the detailed comments and the revision, and felt the revised the manuscript was much improved by the additional editing, details in the papers, and the additional experiments. However, both reviewer 1 and 3 have strong reservations about the computational complexity of the approach, and the additional experiments did not alleviate it. Further, reviewer 1 is still concerned about the clarity of the work, finding much of the proposed work to be unclear, and recommends further revisions.\\n\\nGiven these considerations, everyone felt that the idea is strong and most of the experiments are quite promising. However, without further editing and some efficiency strategies, it barely misses the bar of acceptance.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Expensive approach, unclear writing\"}", "{\"title\": \"Final comment\", \"comment\": \"Thanks for your final revision: the manuscript improved. I increased my rating to 5.\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear reviewer,\\n\\nThank you for your additional suggestions. We have made revisions to the manuscript accordingly.\\n\\n------------\", \"introduction\": \"we mean \\\"irrespective of data instances\\\"\\n\\n------------\\nSection 4.1: thanks for the suggestion\\n\\n------------\\nSection 4.2: we have split the sentence in two. 
\\\"1)\\\" has been replaced with description for clarity. Verbose words have been removed\\n\\nat the end of this section, we have referred to the distance metrics defined in Section 5.1 and clarified that our results are not very sensitive to exact choice of valid distance metric.\\n\\n------------\\nSection 5.1: indices have been clarified. Asterisk has been replaced with dagger for clarity. The technical jargon \\\"iff\\\" has been spelled out, and the sentence length has been reduced.\\n\\nlarge-dimension experiments and discussion have been added in Appendix F.\\n\\n------------\\nSection 5.2: the sentence has been shortened and clarified\\n\\n------------\\nSection 5.2.2:\\n\\nWe have fixed the grammatical error, and we have removed the average across sigmas. For each base model, we select the sigma for which Mahe\\u2019s performance is the worst at K=L and use the same sigma for the LIME method.\\n\\nWe have revised the mechanical turk paragraph for grammar.\\n\\n------------\\nSection 5.2.3: The discussion has been moved to the paper body. The need for quantification is now highlighted in this section. To the best of our knowledge, our approach Mahe is the only one that quantifies the predictive performance of local interaction explanations at each hierarchical step.\\n\\n------------\\nSection 5.3 : Our explanation of modifying models is introduced in the first paragraph of this section with Sentiment-LSTM. The Transformer paragraph has been revised for clarity. The context-free interactions for Transformer are now related to those of Sentiment-LSTM for continuity.\"}", "{\"title\": \"Manuscript improved but is still hard to understand\", \"comment\": \"I appreciate that you addressed all my comments tried to clarify the manuscript. Unfortunately, many parts of the manuscript remain to be hard to understand, partly due to the writing style. I therefore decided to not change the rating of the manuscript. Details below.\\n\\nIntroduction\\n\\u2014\\u2014\\u2014\\u2014\\u2014-\\n2. Do you mean by \\u2018irrespective of data instance.\\u2019 \\u2018irrespective of data instances.\\u2019 or \\u2018irrespective of a particular data instance.\\u2019?\\n\\nSection 4.1\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\nI suggest to replace \\u2018\\\\sigma , 1 standard deviation\\u2019 by \\u2018\\\\sigma , i.e. one standard deviation\\u2019.\\n\\nSection 4.2\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\nThe sentence \\u2018The advantage of checking the response\\u2026\\u2019 is hard to read. I suggest to shorten it by replacing verbose words (e.g. \\u2018whether\\u2019 by \\u2018if, \\u2018oftentimes\\u2019 by \\u2018often\\u2019), and splitting it into multiple sentences. Do you mean by \\u20181)\\u2019 \\u2018definition 1)\\u2019? Which distance metrics did you use for non-textual data? How sensitive are results with respect to the distance metric? I find this paragraph is still hard to understand.\\n\\nSection 5.1\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\nWhat is p? The section is hard to understand since p and k are not defined in this section and the reader has to look up their definition. Please spell out indices throughout the manuscript, e.g. \\u2018n samples\\u2019 or \\u2018k features\\u2019.\\n\\nWhat does \\u2018*Superpixel\\u2019 mean?\\n\\nThanks for describing interaction sets at the end of section 3. However, this sentence is hard to understand due to its length and technical jargon.\\n\\nI appreciate your additional analysis for different numbers of features p. 
Can you include and discuss the analysis in the manuscript? This is in my eyes important for being able to assess if the method is applicable for identifying interactions in images on a pixel level (instead of superpixels).\\n\\nSection 5.2\\n\\u2014\\u2014\\u2014\\u2014\\u2014\\nThe sentence \\u2018We evaluate the performance of fitting ...\\u2018 is hard to understand. Please shorten and clarify.\\n\\nSection 5.2.2\\n\\u2014\\u2014\\u2014\\u2014\\u2014-\\nThe sentence \\u2018We average prediction error\\u2026\\u2019 contains grammatical errors.\\n\\nDid you average sigma also for other methods? This is similar to ensembling which can increase the prediction performance.\\n\\nThanks describing the Medical Turk experiment in an additional paragraph. Unfortunately, this paragraph also contain grammatical errors which makes it hard to understand.\\n\\nSection 5.2.3\\n\\u2014\\u2014\\u2014\\u2014\\u2014---\\nPlease discuss the results in the main text instead of the figure caption. The interpretation of subfigure d) is unfortunately not quantitative enough and it remains unclear if LIME is superior to other methods for detecting interactions in images.\\n\\nSection 5.3\\n\\u2014\\u2014\\u2014\\u2014\\u2014-\\nThe reason why you modified the transformer model and how this impacts results remains unclear. Overall, I find the evaluation is still hard to understand.\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear reviewer,\\n\\nThank you for understanding our work and acknowledging its strengths. Below are our responses to your questions.\\n\\n>> \\u201cbut explanations for level > 2 do not seem to be that good (Table 5)\\u201d\\n\\nWe have updated Table 5 (now Table 6 in the latest revision) with examples where the R^2 performance gains of levels>2 over level=2 are more significant. Previously, most examples only showed R^2 improvements of at most 0.006 from levels 2 to 3. Now, the R^2 improvements are at least 10x that number (0.06). We think these explanations also make more sense.\\n\\n>> \\u201cThe contribution seems incremental, given that Tsang et al (2018) already explored explanations based on interactions.\\u201d\\n\\nThanks for acknowledging our contribution of context-free explanations. Although applying NID on locally sampled points might seem simple for context-dependent explanations, there are several reasons why we were compelled to report this approach and experiments for it. We believe that broadening the application of NID to hierarchical explanations is important for the following reasons (in addition to the contributions noted in our original paper submission):\\n\\n1)\\tQuantification: The quantification of explanations provided by interpretability methods has become increasingly important (Kim et al. 2018, Bau et al. 2017). Given that there have been several works recently on local interaction interpretations (Murdoch et al. 2018, Lundberg et al. 2018, Greenside et al. 2018), we hope that our work sets additional guidelines for local interaction evaluation, using both existing evaluation methods and a new one. In particular, we promote the following quantifications: evaluation of interaction explanations w.r.t synthetic ground truth (Section 5.2.1), approximation evaluation (Table 3 and R^2 scores in 5.2.3), and the novel context-free consistency (Section 5.3), which assumes that the local interaction explanations work. Regarding context-free consistency, we have added comparisons between Mahe and baselines: GA2M (Lou et al. 
2013) and GLM with pairwise interactions, demonstrating that Mahe's interaction attributions tend to be more consistent than the baselines' before and after interaction negation (Appendix D).\\n\\n2)\\tRich interactions in general deep learning: In this work, we have demonstrated that different forms of deep learning models are rich with context-dependent (and -free) interactions in a variety of domains (e.g. Table 3). This is made possible by applying interaction detection beyond the conventional tabular datasets studied in Tsang et al. (2018) and other works. Please note that the linear approximation of univariate GAMs does not affect Mahe\\u2019s approximation performance in most of our results from Table 3 (via our response to Reviewer 3, point 3).\\n\\n3)\\tHigh-order interaction explanations: For the interaction detection literature and the models that fit interactions (e.g. GAMs in Eq. 3), visualizing high-order interactions has been challenging (Tsang et al. 2018). For example, while a GA2M can be used to visualize pairwise interactions in 3D (Lou et al. 2013), visualizing higher-order interactions the same way would require understanding high-dimensional space (i.e. 4D and higher). We believe that explaining high-order interactions locally via interaction detection and GAMs is a compelling way to visualize complex interacting behavior, especially in black-box models (e.g. high-order interactions in Section 5.2.3; the CACGTG 6-way interaction in Section 5.3, which can also be visualized).\\n\\nBesides these context-dependent contributions, the methodological contribution of finding context-free explanations can also stand as a new evaluation metric for future research on speeding up or learning context-free interactions.\\n\\nOut of these contributions, we think that the most important are written in the paper.\", \"references\": \"David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, and Antonio Torralba. Network dissection: Quantifying interpretability of deep visual representations. In CVPR, 2017.\\nPeyton Greenside, Tyler Shimko, Polly Fordyce, and Anshul Kundaje. Discovering epistatic feature interactions from neural network models of regulatory dna sequences. Bioinformatics, 2018.\\nBeen Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (tcav). In ICML, 2018.\\nYin Lou, Rich Caruana, Johannes Gehrke, and Giles Hooker. Accurate intelligible models with pairwise interactions. In KDD, 2013.\\nScott M Lundberg, Gabriel G Erion, and Su-In Lee. Consistent individualized feature attribution for tree ensembles. arXiv preprint, 2018.\\nW James Murdoch, Peter J Liu, and Bin Yu. Beyond word importance: Contextual decomposition to extract interactions from lstms. In ICLR, 2018.\\nMichael Tsang, Dehua Cheng, Yan Liu. Detecting Statistical Interactions from Neural Network Weights. In ICLR, 2018.\"}", "{\"title\": \"Author Response\", \"comment\": \"Dear reviewer,\\n\\nThank you for your suggestions and support for our paper. We have conducted the major experiments you suggested, and our responses to your comments are below:\\n\\n>> \\u201c0. The motivation of the local-MLP models is not convincing\\u201d\\n\\nThank you for your comment. Our motivations for using local MLP models are threefold: 1) MLPs are universal function approximators, which is an especially important property for learning arbitrarily complex, i.e. non-additive, interactions. 
2) MLPs were able to obtain state-of-the-art level accuracy and efficiency at interaction detection, which is why we use the MLP with NID. The advantages of 1) and 2) have been clarified in Section 4.1 in the last paragraph of \\u201cLocal Interaction Detection\\u201d. The third motivation for using local MLPs comes from our experiments on your second point. After performing experiments comparing Mahe to baselines for identifying context-free interactions, we found that local MLPs in the form of the GAM with interactions (Eq. 3) outperformed the baseline tree-based approach (Lou et al. 2013) that was also designed to learn the same form of function (GAM with interactions).\\n\\n>> \\u201c1. Of particular concern is the computational time cost of the model, as it involves retraining and an exhaustive search through local interactions to get context-free explanations\\u2026 The paper provides no experiments about timing cost\\u2026\\u201d\\n\\nWe agree that using an exhaustive search is a concern for runtime, which is why we originally included the part of Context Free Explanations (Section 4.2) to locally modify the target model to check if it gives a global response. The local modification takes the role of checking whether any globally consistent behavior is more than just coincidence, which can be useful when the contexts available tend to be biased in one way or another (for example, all contexts having the overall same prediction polarity as the interaction\\u2019s polarity). \\nNevertheless, individual testing on data instances is indeed needed in Mahe. We have added runtime experiments evaluating the average time needed for context-dependent (Figure 10) and -free (Figure 11) explanations for explaining the models trained on real-world datasets. From context-free runtime experiments, we found that sequentially testing over 40 instances before and after model modification (80 instances total) often takes less than two hours for a single interaction. If extra computational resources are available, individual tests can be parallelized.\\n\\n>> \\u201c2. The paper includes no baseline comparisons for context-free interactions\\u201d\\n\\nThank you for this comment. We have performed experiments comparing Mahe to baselines for identifying context-free interactions. Using the same context-free interactions identified from Sentiment-LSTM in Figure 5, we evaluate whether baseline methods for interaction detection and fitting can also identify them. The baselines we use are lasso-regularized GLM with all pairs of multiplicative interactions (Bien et al. 2013) and GA2M (Lou et al. 2013), a nonlinear tree-based model for fitting pairwise interactions in Eq. 3. Experimental results are shown in Appendix D. We found that while both baselines were originally close to performing as well as Mahe in identifying consistent interaction polarity, the baselines \\u2013 especially GLM \\u2013 did not perform nearly as well after using them to locally modify Sentiment-LSTM to identify consistent negated behavior.\\n\\n>> \\u201c3. Non-linear GAM is replaced by linear approximations in the experiments. More experiments showing the advantage of non-linear function approximation is recommended\\u201d\\n\\nFor our experiments explaining most models trained on real-world datasets, replacing linear approximations with nonlinear (univariate) GAM made no difference because our feature representations here were always binary, and a linear approximation can perfectly fit two points. 
However, we noticed significant performance improvements when the task was binary classification rather than regression, which was the case for explaining Transformer (from 0.75 to 0.95 avg AUC corresponding to the experiment in Table 3, K=0). We believe there was performance improvement because binary classification can have multiple solutions, and a better solution was found by the more flexible GAM. Regarding the advantages of nonlinear approximation of interactions, we believe our results for point 2 are relevant. \\n\\n>> \\u201c4. Minor Comments\\u201d\\n\\nL indicates the number of hierarchical levels with interactions. This has been clarified in the relevant section in Section 4.1.\", \"references\": \"Jacob Bien, Jonathan Taylor, and Robert Tibshirani. A lasso for hierarchical interactions. Annals of statistics, 2013.\\nYin Lou, Rich Caruana, Johannes Gehrke, and Giles Hooker. Accurate intelligible models with pairwise interactions. In KDD, 2013.\"}", "{\"title\": \"Author Response (1/2)\", \"comment\": \"Dear reviewer,\\n\\nThank you for your thorough suggestions and follow-up response on improving our paper. We have updated many parts of the paper to improve its presentation and address your concerns.\\n\\nBelow, we summarize our updates based on your suggestions. These updates are shown in blue in the latest paper revision as of Nov 23rd.\\n\\n\\n------------\", \"abstract\": \"We have clarified what is meant by \\u201ccontext-dependent\\u201d and \\u201c-free\\u201d and replaced \\u201cdependency\\u201d with \\u201cinteraction\\u201d.\\n\\n------------\", \"introduction\": \"We have clarified context-dependent and -free \\u201cinteractions\\u201d as local and global interactions, and what we mean by global behavior. \\u201cPerformance and generality\\u201d has been clarified to interaction fitting and detection performance and model-agnostic generality.\\n\\n------------\\nSection 3.1: Thanks, we have incorporated your feedback.\\n\\n------------\\nSection 4.1: We made a significant revision to this section. Our explanation of sampling procedure now covers continuous, categorical, and a mix of continuous and categorical data. Specifics mostly follow the procedures of previous work (Riberio et al. 2016) with some modification to the sampling kernel and local vicinity boundary. Regarding correlations, their absence can actually be advantageous for interaction detection. Suppose that two variables are correlated and one of them should naturally interact with a third variable w.r.t. an outcome variable. Because interaction signals can spread between the correlated variables, the interaction effect with the third variable is weakened, making it difficult to detect this interaction (Sorokina et al. 2008). When there are no correlations, interaction detection can focus more on identifying the true interactions in the data-generating function, i.e. the target model f(). \\n\\nWe recommend that sigma, which is used as epsilon for epsilon-neighborhood as you rightly mentioned, should be chosen based on factors such as the stability and interaction orders of explanations. If a local interaction explanation is extremely high-order and uninformative because the local vicinity covers too much of f\\u2019s complex representation, the vicinity size should be reduced. The number samples n should be larger than the feature dimension, p, to prevent the curse of dimensionality, as you mentioned. It is better to reduce p, e.g. 
with feature selection (or superpixel segmentation) to avoid overfitting as much as possible. We have clarified this consideration in the paper. \\n\\nWe have provided more context and guidance on hierarchical interaction visualizations with appropriate reference to an example. \\n\\n------------\\nSection 4.2: Thanks for reiterating the importance of clear definitions of local vicinity and sampling procedures. We have added a reference to our new definitions in Section 4.1 and distance metric considerations at the end of this section. We have also made some clarifications about why we modify the target function f. This is for limiting the scenario that globally consistent interaction behavior is a coincidence, which is more likely to be an issue with limited samples to test on. Therefore, we modify f in part to speed up the search for context-free interactions. \\n\\nphi_k is now used, thank you\\n\\n------------\\nSection 5.1: We have made clarifications here about our number of vicinity samples, dataset dimensionality, and evaluation metrics. For MSE, we have added clarification that 1000 uniformly drawn samples within the vicinity of a data instance are used to compute interaction outputs using Mahe and baselines. The MSE is evaluated across these samples w.r.t ground truth synthetic interactions. \\n\\nThanks for asking for how interaction sets are defined. We have added this definition at the end of Section 3. The definition of non-additive interaction directly tells us that it adds to the representation of a function compared to one without the interaction. This added representation is responsible for improving prediction performance, as previous works (including NID) found to have consistently approximated the performance of state-of-the-art complex models (Lou et al., 2013, Tsang et al., 2018). The MLP architecture sizes we used were already noted in Section 5.1 and are smaller (with less parameters) than those used by Tsang et al. (2018).\"}", "{\"title\": \"Author Response (2/2)\", \"comment\": \"Section 5.1 (continued):\\n\\nThe average runtime between Mahe and LIME for the different datasets has been added in Figure 10 in Appendix C, which includes runtimes for how long sampling and fitting nonlinear models take.\\n\\nBased on your suggestion, we performed experiments on the accuracy and runtime of the MLP used for interaction detection on large p data. We generate synthetic data with randomly generated pairwise interactions using the equation y^(i) = X^(i)WX^(i) + beta*X^(i) (Purushotham et al. 2014). X^(i) are rows of dimension p in a nxp design matrix, and W is a pxp matrix that indicates which variables will interact. We set a 2-3% nonzero density for the generation of W (Purushotham et al. 2014). We found that in low p settings, p=100, n only needed to be at least 10p to recover 5-15 pairwise interactions at AUC>0.9. Increasing p to 1000 still required n>10p, but performance stability significantly improved between 10p and 100p for detecting 900-2000 interactions. When p = 10k, we could not detect interactions at n=10p and did not study further due to large training time. In general, increasing n by an order of magnitude at fixed p required 4-9x more runtime. As a rough estimate, increasing p by an order of magnitude at fixed n required 2-3x more runtime. There is high variance in the runtime associated with increasing p because of the early stopping used. 
\\n\\nWe should emphasize that p should be held small whenever possible, such as using superpixels instead of individual pixels in images. The identification of interactions in high dimensional input spaces like images and image models is a challenging research problem and is left for future work. \\n\\n------------\\nSection 5.2.2: We have redone the prediction experiments (Table 3), now with a parameter sweep on sigma. Let s\\u2019 be the average pairwise distance between data instances in the test sets corresponding to respective models, provided distance metrics defined in section 5.1. Sigma is chosen to be 0.4s\\u2019, 0.6s\\u2019, 0.8s\\u2019, and 1.0s\\u2019 to represent the locality of a data instance relative to others.\\n\\nWe have made some updates to the mechanical turk section to be clearer about what and how mechanical turk users are asked, as well as the scope of the experiment. More examples like Figure 4, now randomly selected, have been added in Appendix B.\\n\\n------------\\nSection 5.2.3: We have added a discussion about the water buffalo interaction in the Figure 6 caption, which is indeed interesting. We searched through examples of water buffalo in ImageNet from the link [1] and found that many images of water buffalo did not have water, and when looking at images in adjacent classes like bison, there were also many images with and without water. This leads us to question the extent that water is a discriminatory feature. The lack of a water interaction may also be caused by a misbehaving ResNet, as its top-1 error is large (22%), or a misbehaving Mahe. Unfortunately, we cannot determine whether this image was misclassified because it belongs to the ImageNet test set without test labels. \\n\\n------------\\nSection 5.3: We agree that introducing this section with the cet interaction can be confusing. Therefore we have rearranged this section to start with our results on sentiment analysis. The purpose of our cet experiments was to determine whether it is possible to identify context-free interactions in Transformer. We have clarified this goal in the paragraph discussing the Transformer experiment. We have also included examples of expected interactions and added the reason why we modify models to make our motivations clearer. We are actively searching for other interesting context-free interactions in Transformer and are happy to report any additional results. Currently, we are still trying to better understand French.\\n\\n------------\", \"regarding_your_follow_up_suggestions\": \"The time complexity of NID has been included in Section 4.1. \\nThe inputs to MSE and R-precision are now defined in Section 5.2.1. \\nSuperpixels and the rationale behind them are now discussed in Section 4.1 at the end of the \\u201cLocal Interaction Detection\\u201d subsection. The choice of superpixel segmenter has been added to Section 5.1.\", \"references\": \"Yin Lou, Rich Caruana, Johannes Gehrke, and Giles Hooker. Accurate intelligible models with pairwise interactions. In KDD, 2013.\\n\\nSanjay Purushotham, Martin Renqiang Min, C-C Jay Kuo, and Rachel Ostroff. Factorized sparse learning models with interpretable high order feature interactions. In KDD, 2014.\\n\\nMarco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. Why should i trust you?: Explaining the predictions of any classifier. In KDD 2016. \\n\\nDaria Sorokina, Rich Caruana, Mirek Riedewald, and Daniel Fink. Detecting statistical interactions with additive groves of trees. 
In ICML, 2008.\\n\\nMichael Tsang, Dehua Cheng, Yan Liu. Detecting Statistical Interactions from Neural Network Weights. In ICLR, 2018. \\n\\n[1] http://image-net.org/challenges/LSVRC/2014/browse-synsets\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for outlining the basic idea of NLP. What does \\u2018efficiently\\u2019 mean, i.e. what is the time complexity of NLP compared with O(2^p)?\\n\\nPlease see \\u2018Section 4.2\\u2019 in my main review for what I find unclear about this section.\\n\\nI agree that MSE = 1/n (a - b)^2 is widely used in machine learning but it is unclear what a and b are. While \\u2018Precision\\u2019 is well known, \\u2018R precision\\u2019 is ranking-specific and should be defined. Superpixels might be established in the context of model-agnostic explanation, but should be also understandable for readers who are unfamiliar with previous literature in this domain. Please briefly explain and cite superpixels and how they were computed.\"}", "{\"title\": \"Early Author Response\", \"comment\": \"Dear reviewer,\\n\\nWe are in the process of performing experiments and improving presentation based on your suggestions, but we would like to request clarification in advance about your difficulty understanding our paper. We have added a brief description of the previous literature of NID in Section 4.1 (in blue) and was wondering if this is the type of clarification you are looking for. In addition, we are wondering what part of our Context-Free approach in Section 4.2 is hard to understand? In this section, the cited literature appears after we discuss the bulk of our approach.\\n\\nMSE and R-Precision are standard metrics in Machine Learning. Superpixels for image models are also standard in model-agnostic explanation methods like LIME, Shap, and Anchors. Section 5.2.2 referred to details about the Mechanical Turk experiment in Supplementary B, which was included in our original submission.\"}", "{\"title\": \"interesting paper with a thorough evaluation\", \"review\": \"Summary\", \"this_paper_proposes_a_method_named_mahe_that_can_provide_hierarchical_explanations_for_a_model\": \"including both context-dependent(instance level) and context-free (global) explanations by a local interpretation algorithm. It obtains context-free explanations through generalizing context-dependent interactions to explain global behaviors. The effectiveness is shown through a number of synthetic and real-world data experiments.\\n\\nThe paper provides an interesting way to get context-free explanations from local explanations. The experiments are well designed and the paper is overall written well. \\n\\nMajor comments\\n\\n0. The motivation of the local-MLP models is not convincing. \\n\\n1. Of particular concern is the computational time cost of the model, as it involves retraining and an exhaustive search through local interactions to get context-free explanations.\\n\\nThe paper provides no experiments about timing cost to show the relative computational scalability of the proposed method. As Mahe trains MLPs per data sample and searches through all interactions for finding context-free explanations, this raises concerns.\\n\\n2. The paper includes no baseline comparisons for finding context-free interactions.\\n\\n3. Non-linear GAM is replaced by linear approximations in the experiments. More experiments showing the advantage of non-linear function approximation is recommended. \\n\\n4. 
Minor Comments: In the description, \\\"L + 1 different levels of a hierarchical explanation which constitutes the context-dependent explanation\\\", What does L indicate? The order of interactions?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Promising idea, but results could be better.\", \"review\": \"Summary of the paper:\\nThe authors propose a framework called Mahe that provides context-dependent and context-free explanations for a given (neural network) model\\u2019s prediction. Context-dependent explanations are found by applying NID (Tsang et al, 2018) on a set of data points sampled from a neighborhood around the given input point. Further, the generalized additive model representing the function approximation around the given input is incrementally built by selectively computing higher-order-interaction terms using NID again. Each such added term results in an explanation at a level in the hierarchy. Context-free explanations are generated in two ways: 1) when a local explanation shows same polarity among all valid data points, and 2) by negating the local explanations\\u2019 polarity at a data point, fine-tuning the model on the resulting modified function approximation, and regenerating the local explanations for other data points; if the polarity is reversed for all other data points, then the local explanation is also a global explanation\", \"strengths\": [\"Broadens the application of NID to provide hierarchical explanations and context-free explanations\", \"Experiments on context-free explanations show promising results, for instance, on the Sentiment-LSTM model and in Supplementary A. Would be great to see more results on this front.\"], \"questions_for_authors\": \"- The experimental results only show that using higher order interactions results in a better function approximation (explanation), but explanations for level > 2 do not seem to be that good (Table 5). For the image example, they look slightly better. \\n- The contribution seems incremental, given that Tsang et al (2018) already explored explanations based on interactions. \\n\\nConclusion\\nConsidering that the NID idea has been broadened to context-free explanations, the paper shows promise, but it is a weak accept because the other contributions do not seem fully worked out.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Clear methodological contribution but not written clearly enough\", \"review\": \"Summary\\n=======\\nThe authors extended the linear local attribution method LIME for interpreting black box models by non-linear functions to more accurately approximate black box models locally and identifying interactions between model input variables using the previously published neural interaction detection (NID) framework. They further propose a method to discern between context-dependent and context-free interactions. I found the paper hard to understand without being familiar with previously published literature on detecting interactions and could not understand their approach to detect context-free interactions as well as some aspects of their evaluation. I have also concerns about the practically of their method due to the high runtime and the notion of locality in the light of high-dimensional inputs. 
In the following, I will briefly summarize my major criticism and give further details below.\\n\\nSummary of major criticism\\n=====================\\n1) The paper is hard to understand without being familiar with previously published literature in the field.\\nThe authors do not describe how they define the interaction sets X_I in equation (3).\\n\\n2) I could not understand their approach for detecting context-free interactions (section 4.2).\\n3) It is unclear how discrete variables are locally modified. Can the approach be used for a combination of differently distributed variables, e.g. categorical and continuous variables?\\n4) Evaluation metrics such as MSE and R-precision are not described and I could not understand other important aspects of their evaluation, e.g. superpixels (figure 6), why and how they modified network architectures for evaluating context-free interactions (section 5.3), or how they asked Amazon Mechanical Turk users.\\n5) The authors did not compare the runtime of Mathe with LIME, which is presumably high due to the need of fitting multiple non-linear models (equation 3) and sampling the local neighborhood.\\n6) The authors did not compare the total number of model parameters of Mathe with LIME, which might also account for the higher accuracy (lower MSE). \\n7) The authors did not evaluate the accuracy and runtime of Mathe on high-dimensional inputs, e.g. large images.\\n\\n\\nDetails\\n===== \\n\\nAbstract\\n---------------------------\\n1. The abstract is hard to understand since \\u2018context-dependent\\u2019 and \\u2018context-free\\u2019 are undefined. The authors should also not use \\u2018dependencies\\u2019 as a synonym for \\u2018interactions\\u2019.\\n\\nIntroduction\\n---------------------------\\n2. The difference between interactions and context-free and context dependent interactions is unclear. Is a variable without interactions context-free, e.g. Buffalo does not interact with water, and a variable with interactions context-dependent? Also, \\u2018classes of data\\u2019 is unclear, which can be misinterpreted as class labels for classification problems. The authors should also clarify what they mean by \\u2018performance and generality\\u2019 in the last section of the introduction.\\n\\nSection 3.1\\n---------------\\n3. The description of the model function f(x) and attribution scores phi(x) is unclear. Since f(x) is undefined when it is first mentioned after equation (1), I suggest to first define f(x) and afterwards define phi(x). The purpose of the attribution score function phi(x) is also unclear without prior knowledge. The authors should more clearly describe that f(x) is the target function (model) of interest, e.g. a classifier, and phi(x) locally approximates f(x) and is interpretable in contrast to f(x).\\n\\nSection 4.1\\n---------------\\n4. The authors should justify why they are sampling x ~ N(x, sigma * I), which assumes that data instances are iid Normal. This is not the case, e.g., if x is categorical or contains a combination of categorical and continuous variables, and variables are correlated. How is sigma chosen? How many samples are used depending on the dimension of x?\\n\\n5. The authors should briefly describe the basic idea of NID.\\n\\n6. How are points sampled from the epsilon neighborhood of x? What is a link function?\\n\\n7. The subsection \\u2018Hierarchical Interaction Attribution\\u2019 is hard to understand without being familiar with Tsang et al. 
The authors should give an example of a hierarchical explanation with different layers. \\n\\nSection 4.2\\n---------------\\n8. How is the \\u2018local vicinity\\u2019 defined? Which distance metric is used? This is in particular problematic if x is high-dimensional due to the curse of dimensionality. How are continuous and categorical variables locally modified? Did the authors mean to use a lowercase \\u2018k\\u2019 in equation (4), i.e. \\u2018phi_k =\\u2019 instead of \\u2018phi_K\\u2019? I find this section hard to understand without being familiar with the cited literature.\\n\\nSection 5.1\\n---------------\\n9. The number of local vicinity samples is unclear. Did the authors use 1k local vicinity samples for synthetic experiments (Table 1) and 5k samples for real-world synthetic experiments?\\n\\n10. What is the dimensionality (number of words, characters, or pixels) of real world datasets? This is important since it influences the number of samples that are required to approximate the vicinity of a particular data point. It is in particular interesting to know how the model accuracy and runtime depend on the dimensionality and the number of local vicinity samples.\\n\\n11. The authors should define the evaluation metrics (MSE, R-precision) in addition to citing them.\\n\\n12. How did the authors choose the interaction sets X_I in equation (3) and (4)? How many MLPs (functions g(.)) did the authors fit to learn phi(x)? Is the number of MLPs the same for LIME and Mathe? Otherwise the performance gain of Mathe over LIME can also be attributed to the increased number of models and model parameters (ensemble size). \\n\\n13. What is the average training time of Mathe and baseline models on the different datasets?\\n\\nSection 5.2.2\\n-----------------\\n14. How did the authors choose sigma (0.4, 6, 0.4) for the different datasets?\\n\\n15. How did Amazon Mechanical Turk users evaluate Mathe vs. LIME interactions? Were they given for each sentence the best Mathe and best LIME interaction and asked to decide which one is better? Figure 4 should be more clearly described in the caption. The sentence \\u2018The result of this experiment is that the majority of preferred explanation \\u2026\\u2019 is unclear and unjustified by only showing one example in Figure 4.\\n\\nSection 5.2.3\\n-----------------\\n16. The authors should discuss figure 6. The results indicate that neither LIME nor Mathe is able to clearly identify the object of interest, e.g. the water buffalo, and interactions, e.g. between the buffalo and water.\\n\\nSection 5.3\\n---------------\\n17. The sentence \\u2018... the presence of a French word for \\u201cthis\\u201d or \\u201cthat\\u201d, cet, which \\u2026\\u2019 is unclear. I suggest to give an example to illustrate which interactions are supposed to be detected. The modification of the Transformer model and the reason why this is necessary are unclear. Overall, I find this evaluation unclear and insufficient since it only applies to a particular interaction.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SyMDXnCcF7
A Mean Field Theory of Batch Normalization
[ "Greg Yang", "Jeffrey Pennington", "Vinay Rao", "Jascha Sohl-Dickstein", "Samuel S. Schoenholz" ]
We develop a mean field theory for batch normalization in fully-connected feedforward neural networks. In so doing, we provide a precise characterization of signal propagation and gradient backpropagation in wide batch-normalized networks at initialization. Our theory shows that gradient signals grow exponentially in depth and that these exploding gradients cannot be eliminated by tuning the initial weight variances or by adjusting the nonlinear activation function. Indeed, batch normalization itself is the cause of gradient explosion. As a result, vanilla batch-normalized networks without skip connections are not trainable at large depths for common initialization schemes, a prediction that we verify with a variety of empirical simulations. While gradient explosion cannot be eliminated, it can be reduced by tuning the network close to the linear regime, which improves the trainability of deep batch-normalized networks without residual connections. Finally, we investigate the learning dynamics of batch-normalized networks and observe that after a single step of optimization the networks achieve a relatively stable equilibrium in which gradients have dramatically smaller dynamic range. Our theory leverages Laplace, Fourier, and Gegenbauer transforms and we derive new identities that may be of independent interest.
[ "theory", "batch normalization", "mean field theory", "trainability" ]
https://openreview.net/pdf?id=SyMDXnCcF7
https://openreview.net/forum?id=SyMDXnCcF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJxPSP9NlV", "HJeOL_au6Q", "BJlt5Odvam", "rkx3VuuPpQ", "H1gQCvdvTm", "SkxSjPODaQ", "HkgmPcrZpX", "Sklr8el-6Q", "r1lORJDq3m", "BJxndjHwnm", "HJgsnvp59X", "rygMio2X57" ], "note_type": [ "meta_review", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1545017151500, 1542146128396, 1542060177295, 1542060084279, 1542059979498, 1542059933416, 1541655131302, 1541632077291, 1541201872361, 1541000051822, 1539131314869, 1538669465784 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1368/Area_Chair1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1368/Authors" ], [ "ICLR.cc/2019/Conference/Paper1368/Authors" ], [ "ICLR.cc/2019/Conference/Paper1368/Authors" ], [ "ICLR.cc/2019/Conference/Paper1368/Authors" ], [ "ICLR.cc/2019/Conference/Paper1368/AnonReviewer1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1368/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1368/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1368/Authors" ], [ "~Angus_Galloway1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper provides a mean-field-theory analysis of batch normalization. First there is a negative result as to the necessity of gradient explosion when using batch normalization in a fully connected network. They then provide further insights as to what can be done about this, along with experiments to confirm their theoretical predictions.\\n\\nThe reviewers (and random commenters) found this paper very interesting. The reviewers were unanimous in their vote to accept.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting and surprising findings with a mean-field-theory analysis of batch normalization\"}", "{\"comment\": \"Thanks for the detailed reply!\", \"title\": \"thanks\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your careful review and useful comments! Overall, in response to your review and that of referee 3 we will include a more intuitive discussion of our results in the next revision of our text.\\n\\nTo reply to your other specific comments,\\n\\n1) The intuition for batchnorm can be put in a more general setting. If a function f: X -> Y tends to spread out small clusters in the input space almost evenly in the output space, then one can expect that its gradients will be large typically. In our case, a batchnorm network can be understood as a function that sends a batch of inputs to a batch of outputs. In the appendix, we showed that the correlation between two different batches tend to a constant value independent of the input batches. No matter how close two input batches are, the output batches will have the same \\u201cdistance\\u201d from each other -- small movements in the input space leads to large movements in the output space. Thus we can expect the gradients to be large as well. We have added a new figure to the Appendix to further support this intuition. In it, we pass through a linear batchnorm network 2 minibatches. Both minibatches contain points on the same circle and 1 point off the circle that is unique to each minibatch. 
While the circle in each minibatch will remain an ellipse as they are propagated through the network, the angle between the planes spanned by them increasingly becomes chaotic with depth.\\n\\n3) As observed in [1] and [2], depthwise convergence to covariance fixed points is bad for training, and the best networks are either moderately deep or initialized such that the depthwise convergence rate to the fixed point is as slow as possible. We observe that deep networks whose activation statistics resemble a non-BSB1 fixed point typically feature worse gradient explosion than BSB1 networks. This seems to be because the nonlinearities that induce these fixed points increase rapidly (for example, polynomials with high degrees), so that the corresponding derivatives are also large, causing gradient explosion. \\n\\n(The reason that rapidly increasing nonlinearities don\\u2019t converge to BSB1 fixed points is that, after a spontaneous symmetry-breaking, begins a \\u201cwinner-take-all\\u201d covariance dynamics, in which the activations of a few examples in the batch suddenly dominates those of the others in the batch, and this dominance persists across each layer.)\\n\\n4) We were a bit confused by what was meant by \\u201cpractice\\u201d here. We have thoroughly verified that for realistic input distributions (MNIST and CIFAR10) and common initialization strategies (weights that are randomly distributed) our theory makes accurate prediction. Moreover, we have shown that these predictions can be connected to practice in the sense that they predict whether or not the network can be trained.\\n\\nHaving said this, if by practice you meant that the neural network is accurately described by our theory during training then we do not expect this to be true. We are happy to emphasize this in the camera ready.\\n\\nIf this did not properly address your question, please feel free to let us to know and we will improve this response!\\n\\n[1] S. S. Schoenholz, J. Gilmer, S. Ganguli, J. Sohl-Dickstein. Deep Information Propagation (https://arxiv.org/abs/1611.01232)\\n[2] L. Xiao, Y. Bahri, J. Sohl-Dickstein, S. S. Schoenholz, J. Pennington. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks (https://arxiv.org/abs/1806.05393)\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your thoughtful comments about our paper, we\\u2019re very happy that you found it interesting! We absolutely agree about the need for precise analysis to disentangle the many complexities that compete in deep learning.\\n\\nRegarding your question, great observation! Indeed, they are manifestations of the same underlying phenomenon. Below around L=50, the amount of gradient explosion is small enough that it doesn\\u2019t significantly deteriorate performance --- this is shown in figure 2. At the same time, the gradients are small compared to the corresponding weights, so that after the first few steps, the weights themselves don\\u2019t change much --- this is shown in figure 3a. If the weights don\\u2019t change much, then the gradient dynamics remain roughly the same --- this is shown in figure 3b and 3c. Conversely, for L > 50, gradient explosion dominates the weight matrices, so much that |W| at time 1 is roughly the same as the norm of the corresponding gradient. After 1 step, the exponentially decreasing norms (with depth) of the weights attenuate the gradient explosion. This is because of the batchnorm property dBN(ax)/d(ax) = 1/a dBN(x)/dx. 
Thus in figure 3b and 3c we see gradient vanishing for L > 50 after 1 step of SGD.\\n\\nWe indeed used the \\u201cback-prop weight\\u201d assumption, and we will be more explicit about its usage in the next revision.\\n\\nWe are also extremely interested in carrying out the calculation for residual networks!\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for your review and very useful comments! We\\u2019re happy you found our manuscript interesting.\", \"to_address_your_comments\": \"1) Thank you for pointing out that we had not defined the delta. Here delta is the Kronecker delta defined so that \\\\delta_{a,b} = 1 if a = b and 0 if a != b. In the context of the variance of the multivariate normal distribution, the delta function indicates that the different neurons in each layer have zero covariance. We\\u2019ll add an explicit discussion of this fact to the manuscript.\\n\\n2) Thanks for pointing this out, we\\u2019ll correct it in the next revision.\\n\\n3) It is true that the extent to which randomized weights describe trained networks is unclear. However, it is true that most commonly used weight initialization schemes are random. For example, He initialization [1] and Xavier initialization [2] strategies are both special cases of the setup considered here. We therefore view our theory as a theory of neural networks at initialization. (There are, however, initialization schemes that are not random and that are not described by our theory).\\n\\n[1] K. He, X. Zhang, S. Ren, J. Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. (http://www.cv-foundation.org/openaccess/content_iccv_2015/html/He_Delving_Deep_into_ICCV_2015_paper.html) \\n[2] X. Glorot, Y. Bengio, Y. W. Teh, M. Titterington. Understanding the difficulty of training deep feedforward neural networks. (http://proceedings.mlr.press/v9/glorot10a.html)\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for the review and careful reading of our paper! We\\u2019re glad that you found it of interest. On revision we will fix the typos that you identified.\\n\\nRegarding the first point, your intuition is exactly correct and a slightly simpler discussion of this phenomenon can be found in [1]. When the network is deep enough that the covariance matrix has reached its fixed point, the distribution of the outputs of the network will be independent of the inputs. At this point the network becomes untrainable. To reconcile this with the commonsense intuition that \\u201cdeeper is better\\u201d, our answer is twofold.\\n\\n1) As in [1] and [2] it is often possible to find configurations or architectural modifications where the covariance matrix doesn\\u2019t approach its fixed point over depths often considered in machine learning. When this is the case one can safely increase the depth without sacrificing accuracy.\\n\\n2) It seems that the role of depth in performance is more subtle than standard intuition would dictate. For example, in [3] note that although the authors were able to train a 10k hidden layer network, they did not observe any improvement in accuracy.\\n\\nIn the next version of the manuscript (both in response to your review and that of referee 1) we will add a more intuitive discussion of these results which we agree are somewhat technical.\\n\\n[1] S. S. Schoenholz, J. Gilmer, S. Ganguli, J. Sohl-Dickstein. Deep Information Propagation (https://arxiv.org/abs/1611.01232)\\n[2] G. Yang and S. S. Schoenholz. 
Mean Field Residual Networks (https://arxiv.org/abs/1712.08969) \\n[3] L. Xiao, Y. Bahri, J. Sohl-Dickstein, S. S. Schoenholz, J. Pennington. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks (https://arxiv.org/abs/1806.05393)\"}", "{\"title\": \"Interesting and counter-intuitive results about batch-normalization\", \"review\": \"This paper develops a mean field theory for batch normalization (BN) in fully-connected networks with randomly initialized weights. There are a number of interesting predictions made in this paper on the basis of this analysis. The main technical results of the paper are Theorems 5-8 which compute the statistics of the covariance of the activations and the gradients.\", \"comments\": \"1. The observation that gradients explode in spite of BN is quite counter-intuitive. Can you give an intuitive explanation of why this occurs?\\n\\n2. In a similar vein, there a number of highly technical results in the paper and it would be great if the authors provide an intuitive explanation of their theorems.\\n\\n3. Can the statistics of activations be controlled using activation functions or operations which break the symmetry? For instance, are BSB1 fixed points good for training neural networks?\\n\\n4. Mean field analysis, although it lends an insight into the statistics of the activations, needs to connected with empirical observations. For instance, when the authors observe that the structure of the fixed point is such that activations are of identical norm equally spread apart in terms of angle, this is quite far from practice. It would be good to mention this in the introduction or the conclusions.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"comment\": \"I finally found the time to read this paper, and was glad that I did so.\\n\\nThe conclusion that at initialization batch norm actually harms at large depth is quite surprising, and this cannot be reached without precise analysis. This is the unique strength of the mean-field framework, in contrast with many other theoretical studies.\\n\\nI have a question. The authors plot the depth scale in Fig 2. The depth scale, which is based on calculations at initialization, looks very predictive as in Fig 2, but as Fig 3 suggested, the behavior at t=10 is much different from the behavior at initialization. Yet interestingly the limit depth in Fig 2 (about L=50) coincides so well with the pictures in Fig 3, where we see there is a sort of phase transition around L=50. Are they related somehow?\\n\\nA suggestion. The back-propagation calculation seem to assume that the \\\"back-prop weight\\\" is independent of the \\\"forward-prop weight\\\". If the assumption is used, I think it should be mentioned, since this assumption is highly non-trivial.\\n\\nIt would be interesting to carry out the calculations for residual architectures.\", \"title\": \"comments on the work\"}", "{\"title\": \"The detailed analysis of the training of DNN with the batch normalization is quite interesting.\", \"review\": \"This paper investigates the effect of the batch normalization in DNN learning.\\nThe mean field theory in statistical mechanics was employed to analyze the\\nprogress of variance matrices between layers. \\nAs the results, the batch normalization itself is found to be the cause of gradient explosion. 
\\nMoreover, the authors pointed out that near-linear activation function can improve such gradient explosion. \\nSome numerical studies were reported to confirm theoretical findings.\\n\\nThe detailed analysis of the training of DNN with the batch normalization is quite interesting. \\nThere are some minor comments below.\\n\\n- in page 3, 2line above eq(2): what is delta in the variance of the multivariate normal distribution?\\n- the notation q appeared in the middle part of page 3 before the definition of q is shown in the last paragraph of p.3. \\n- The randomized weight is not very practical. Though it may be the standard approach of mean field,\\nsome comments would be helpful to the readers.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Interesting work, strong results\", \"review\": \"This paper provides a new dynamic perspective on deep neural network. Based on Gaussian weights and biases, the paper investigates the evolution of the covariance matrix along with the layers. Eventually the matrices achieve a stationary point, i.e., fixed point of the dynamic system. Local performance around the fixed point is explored. Extensions are provided to include the batch normalization. I believe this paper may stimulate some interesting ideas for other researchers.\", \"two_technical_questions\": \"1. When the layers tends to infinity, the covariance matrix reaches stationary (fixed) point. How to understand this phenomenon? Does this mean that the distribution of the layer outputs will not change too much if the layer is deep enough? This somewhat conflicts the commonsense of \\\"the deeper the better?\\\" \\n\\n2. Typos: the weight matrix in the end of page 2 should be N_l times N_{l-1}. Also, the x_i's in the first line of page 3 should be bold.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Clarification\", \"comment\": \"Thanks for your interest!\\n\\n1) The numerator and denominator are correct, but I think the meaning might be slightly unclear. When we write |\\\\nabla_{W^l}L|^2, we mean the norm of the gradient of the loss with respect to the weights in layer l. Thus, in fig. 3 we plot the ratio of the size of the gradient of the loss with respect to the weights in layer 10 compared with layer L for a network of depth L (note that both gradients are for a network of depth L). When gradients explode, the norm of the gradient increases during backprop so that the magnitude is larger nearer to the \\\"input\\\" to the network than the \\\"output\\\". Thus, this is essentially saying that the gradient in the 10'th layer of the network is exponentially larger for deeper networks than for shallower ones.\\n\\n2) \\\\gamma = 1, \\\\beta = 0 at initialization for figure 3. In all experiments, \\\\gamma and \\\\beta were free parameters that could be learned during training.\\n\\n3) The choice of 10 was arbitrary and the results don't depend strongly on the choice. The only thing that one has to be careful of is that our theoretical results are \\\"asymptotic\\\" in the depth. So if you look at layers that are too close to the inputs you can get transient effects that change the clean exponential scaling. \\n\\nPlease let us know if you have any further questions or if this response was unclear.\"}", "{\"comment\": \"Interesting work, I have a few questions about Fig. 3. 
Were the numerator and denominator of the y-axes flipped accidentally? The current labeling seems to imply that the gradient wrt each of the quantities in a,b,c is monotonically decreasing with depth L at t=0, which I wouldn't expect for layers with constant width. Also, what was gamma set to in this case (e.g. fixed, or learnable with initialization X)? And any reason as to why layer 10 was chosen as normalization? This confused me initially because I only saw in-text references to \\\"10\\\" in the context of training steps.\", \"title\": \"Clarification re Fig. 3\"}" ] }
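The exchange above (metareview, reviews, and author replies on a mean field theory of batch normalization) turns on a claim that is easy to probe numerically: in a vanilla batch-normalized network with random weights at initialization, gradients with respect to early layers grow rapidly with depth. The sketch below is a minimal illustration of that claim, not the authors' code; the width, batch size, depth grid, scalar readout, and the Gaussian 1/sqrt(width) weight initialization are assumptions chosen for the example, and batch normalization is applied with gamma = 1 and beta = 0 (the initialization setting discussed in the thread). If the theory in the discussion holds, the printed gradient norms should grow roughly exponentially with depth.

```python
# Minimal sketch (illustrative, not the paper's implementation): gradient growth
# with depth in a vanilla batch-normalized ReLU network at initialization.
import torch

torch.manual_seed(0)

def batch_norm(x, eps=1e-5):
    # Batch normalization over the batch dimension with gamma = 1, beta = 0,
    # i.e. the network at initialization, no learnable scale or shift.
    mean = x.mean(dim=0, keepdim=True)
    var = x.var(dim=0, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)

def input_grad_norm(depth, width=256, batch=64):
    # Norm of the gradient of a scalar readout of the last layer with respect
    # to the network input, for a randomly initialized depth-layer network.
    x = torch.randn(batch, width, requires_grad=True)
    h = x
    for _ in range(depth):
        w = torch.randn(width, width) / width ** 0.5  # illustrative random init
        h = torch.relu(batch_norm(h @ w))
    loss = h.pow(2).mean()
    (g,) = torch.autograd.grad(loss, x)
    return g.norm().item()

for depth in (1, 5, 10, 20, 40):
    print(f"depth {depth:3d}  |grad wrt input| = {input_grad_norm(depth):.3e}")
```

The same helper also makes the identity invoked in the authors' reply easy to check: up to the small stabilizing epsilon, batch_norm(a * x) equals batch_norm(x) for any a > 0, so the Jacobian of the normalization evaluated at a*x is the Jacobian at x scaled by 1/a. This is the property the reply uses to explain why, once the first optimization step has dramatically changed the weight norms, subsequent gradients are attenuated rather than exploding.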
BkewX2C9tX
Analyzing Federated Learning through an Adversarial Lens
[ "Arjun Nitin Bhagoji", "Supriyo Chakraborty", "Seraphin Calo", "Prateek Mittal" ]
Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only model parameter updates, for iterative aggregation at the server. In this work, we explore the threat of model poisoning attacks on federated learning initiated by a single, non-colluding malicious agent where the adversarial objective is to cause the model to misclassify a set of chosen inputs with high confidence. We explore a number of strategies to carry out this attack, starting with simple boosting of the malicious agent's update to overcome the effects of other agents' updates. To increase attack stealth, we propose an alternating minimization strategy, which alternately optimizes for the training loss and the adversarial objective. We follow up by using parameter estimation for the benign agents' updates to improve on attack success. Finally, we use a suite of interpretability techniques to generate visual explanations of model decisions for both benign and malicious models and show that the explanations are nearly visually indistinguishable. Our results indicate that even a highly constrained adversary can carry out model poisoning attacks while simultaneously maintaining stealth, thus highlighting the vulnerability of the federated learning setting and the need to develop effective defense strategies.
[ "federated learning", "model poisoning" ]
https://openreview.net/pdf?id=BkewX2C9tX
https://openreview.net/forum?id=BkewX2C9tX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJgtOpzVe4", "HyeoRt-iCm", "SJxKFSbiAX", "B1eRXHbjAX", "rkeDRZZi07", "SJledo6Rnm", "HkxgZYYT2Q", "B1eu77MS27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544985969139, 1543342547270, 1543341441472, 1543341350050, 1543340495054, 1541491560487, 1541409015644, 1540854559753 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1366/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1366/Authors" ], [ "ICLR.cc/2019/Conference/Paper1366/Authors" ], [ "ICLR.cc/2019/Conference/Paper1366/Authors" ], [ "ICLR.cc/2019/Conference/Paper1366/Authors" ], [ "ICLR.cc/2019/Conference/Paper1366/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1366/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1366/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes model poisoning (poisoned parameter updates in a federated setting) in contrast to data poisoning (poisoned training data). It proposes an attack method and compares to baselines that are also proposed in the paper (there are no external baselines). While model poisoning is indeed an interesting direction to consider, I agree with reviewer concerns that the relation to data poisoning is not clearly addressed. In particular, any data poisoning attack could be used as a model poisoning attack (just provide whatever updates would be induced by the poisoned data), so there is no good excuse to not compare to the existing strong data poisoning attacks. One reviewer raised concerns about lack of theoretical guarantees but I do not agree with these concerns (the authors correctly point out in the rebuttal that this is not necessary for an attack-focused paper). I do feel there is room to improve the overall clarity/motivation (for instance, equation (1) is presented without any explanation and it is still not clear to me why this is the right formulation).\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"interesting model but could benefit from better writing and comparisons\"}", "{\"title\": \"A revised version of the paper has been uploaded\", \"comment\": \"We have uploaded a revised version of the paper. In particular, the revised version contains the following changes:\\n\\n1. The \\u2018related work\\u2019 paragraph has been updated to include the papers suggested by Reviewer 1 and to better place our work with respect to those papers.\\n2. The Experimental Setup (Section 2.3) now contains details of the UCI Adult Dataset which is the second dataset we use to show the effectiveness of our attacks.\\n3. Figures 1 and 4 have been updated to show time slices of the weight update distributions which we believe more clearly demonstrate our attack insights. We thank Reviewer 3 for the suggestion to improve the figures. The 3-dimensional time evolution plots of the weight update distributions are now in Appendix B.\\n4. Figure 3 has been updated with a different color scheme to make it easier to read. Its caption has also been updated for clarity.\\n5. Appendix A.1. has been added to demonstrate results for our attacks on the Adult Census dataset. This provides evidence for the effectiveness of our attack across datasets.\\n6. Appendix A.2. now discusses the case when the malicious agent is chosen at random (due to the presence of a large number of agents) and its implication for attack success.\\n7. 
Appendix C contains results demonstrating the effectiveness of our attack against Krum, a Byzantine resilient aggregation mechanism.\\n8. The abstract has been shortened and the introduction updated to reflect the changes above.\\n\\nA number of other minor writing and presentation changes have also been made to improve the flow of the paper. We welcome further comments!\"}", "{\"title\": \"Clarifying motivation, attack strategies, figures and experimental setup\", \"comment\": \"We thank the reviewer for their nice suggestions for clarifying the paper and have updated the paper accordingly. The Appendix now contains more results for a different dataset, more agents and a different aggregation mechanism.\\n\\n\\u2018...benefit to exploit model poisoning...\\u2019: In the federated learning setup, due to privacy concerns, the agents do not share their data directly with the server. Each agent performs computation on its local data and sends a model parameter update to the server. This is the reason we focus on model poisoning attacks since an attacker has the ability to directly poison the model update to achieve their goals. While the poisoning could be done by modifying the local data, the scope of possible changes induced in the model by data poisoning is subsumed by those possible with model poisoning. It is also unlikely that the server can rule out certain updates on the basis of them being inconsistent with the agent\\u2019s local data since it has no visibility into that data. Thus, we believe at least in the federated learning setup, our focus on model poisoning attacks is pertinent and well-justified. \\n\\n\\u2018...introduced many strategies...\\u2019: Implicit boosting is much more resource hungry as compared to explicit boosting and adds a large (5x) overhead in terms of the time taken for the malicious agent to generate weight updates. In a practical setting, this can lead to the malicious agent being dropped due to synchronization considerations. It is also much less effective (Figure 2). Since we have chosen to experiment with neural networks, following McMahan et al., our attacks do not come with provable guarantees. However, we believe that we have taken an important first step in demonstrating the feasibility of introducing stealthy backdoors in neural networks during the process of federated learning using model poisoning and our attacks can serve to provide lower bounds on attacker success. Further, since our attacks are able to achieve 100% confident targeted misclassification on multiple datasets, provable attacks will only be able to improve in terms of efficiency and stealth.\\n\\n\\u2018figures are also confusing\\u2019: We apologize for the lack of clarity in the figures (Figures 1b), 1c), 4b) and 4c)) depicting the model update distributions. We have replaced them with a representative time slice which we hope depicts the differences between benign and malicious updates more clearly.\\n\\n\\u2018unique to this data set\\u2019: In Appendix B.1., we have added results on the Adult Census data to demonstrate that our observations are not restricted to just a single dataset. The Adult Census data is qualitatively different from Fashion-MNIST, yet the same general attack observations hold. 
Our attacks are able to achieve high confidence misclassification while ensuring that the global model achieves high accuracy on the test set.\"}", "{\"title\": \"Clarifying that the intent of the paper is to demonstrate model poisoning attacks on neural networks in the federated learning setting, and that the detection mechanisms are used to generate more sophisticated attacks\", \"comment\": \"We thank the reviewer for their insightful comments. We provide details for the aspects the reviewer found unclear and have correspondingly updated the paper.\\n\\n\\u2018...defensive ML research...\\u2019: We would like to clarify that our paper focuses on offensive ML research. The question we ask in this paper is: given that a particular agent in the federated learning setting is malicious, what specific behavior can it induce in the global model? One possible induced behavior is to prevent the convergence of the global model, which is the focus of attacks introduced in papers such as Blanchard et al. However, we deem the possibility of the introduction of a targeted backdoor, where the global model misclassifies just one or a few examples to be more interesting from an attack perspective and that is the focus of our paper. We show that, in the federated learning setting, an attacker with the ability to poison just a single model can cause a specific example to be misclassified with 100% confidence while ensuring that the global model achieves high accuracy on the test set (Figures 4a and 7).\\n\\n\\u2018Experiments alone are not sufficient...\\u2019: The purpose of the detection techniques introduced in the paper is to ensure that our attacks cleared what we deemed to be the minimum bar for stealth. Our intent is not to put forth these detection techniques as ways to fully secure a distributed learning system but to design attacks even under considerable restrictions on the adversary. This approach aided us in the development of more sophisticated attacks that are able to simultaneously insert targeted backdoors and meet minimum levels of attack stealth by bypassing basic detection schemes.\\n\\n\\u2018...the literature cited by the paper\\u2026\\u2019: We appreciate the list of previous papers the reviewer has referred to and would like to point out a few important differences. None of the papers referred to analyze the exact behavior the adversary can induce at the global model and are concerned mainly with behavior in the presence of arbitrary gradients. In our paper, we show that an adversary can induce the global model to provide incorrect outputs for a few examples while classifying the rest correctly, which is very different from an adversary preventing global convergence. In fact, in Appendix C, we show that the Byzantine-resilient aggregation mechanism Krum (Blanchard et al.) is not robust to our attack. The Krum defense chooses a single agent at each time step depending on a score derived from distance functions, instead of linearly aggregating a number of agents\\u2019 updates. Our attacks are stealthy enough that the score function of the malicious agent\\u2019s update causes it to be selected in a majority of time steps. We apologize for our misstatement regarding Chen et al. and have corrected this in the updated version.\\n\\nFurther, the attacker model used in previous work assumes that the adversary has visibility into model updates from the benign agents even at the current time step. 
We do not make this assumption as it strikes us as unrealistic from an attack perspective, once again highlighting the fact that our paper seeks to understand attacker capabilities under a variety of conditions.\\n\\n\\u2018...relying on a distance computation...\\u2019: We note that one of our detection mechanisms, namely accuracy checking, does not rely on distance computations and is qualitatively different from the Byzantine resilient aggregation mechanisms proposed in previous work. Our baseline attack is thus easily detectable by the accuracy checking mechanism (Figure 1a) and Section 3.2.1), implying that at least some high dimensionality attacks can be detected. Our second detection mechanism does rely on distance computations, however, it is also able to detect updates sent using the baseline attack as aberrant (Figure 3). We introduce the alternating minimization attack precisely to overcome this detectability.\\n\\n\\u2018...absence of formal support...\\u2019: Following McMahan et al., our work focuses on analyzing possible attacks on neural networks and is thus of an empirical nature. While we appreciate the importance of formal results and have begun work on formalizing the attacks presented in this paper, we would like to emphasize that we choose to attack neural network based systems as a first step due to their widespread adoption. We wished to demonstrate the feasibility of our attack on state-of-the-art systems to begin the exploration of model poisoning attacks. Our attacks do however provide a lower bound on attacker success but do not rule out the possibility of more powerful ones. In the updated version of the paper, we have also added experiments with another qualitatively different dataset in order to further corroborate our claims.\"}", "{\"title\": \"Details on norm-based detection\", \"comment\": \"We thank the reviewer for their thoughtful comments. Indeed, detection using norm-based approaches is possible and is something we address explicitly in the paper. In Figure 3 (Page 6), we show the spread in the $L_2$ distances between the update vectors of the different agents. For the baseline attack, the spread of the distances of the malicious update vector from the benign updates diverges from that of the benign updates from each other over time.\\n\\nOn the other hand, for the alternating minimization attack with and without distance constraints, the spread of distances for the malicious update vector approaches that of the benign vectors from each other as model training converges. This does not rule out detection but is a significant improvement over the baseline.\"}", "{\"title\": \"Interesting line of work but need quite some clarifications\", \"review\": [\"This paper presents an interesting adversarial strategy to attack federated learning systems, and discussed options to detect and prevent the attacks. It is based not upon data poisoning attacks, but model poisoning attacks. It analyzes different strategies on the attacker's side, discusses the effect with real experimental data, and proposes ways to prevent such attacks from the federated learning perspective.\", \"It is an interesting line of work which develops specific optimization algorithms to try to manipulate the global classifier for certain desired outcomes. I particularly appreciate the authors' thought process of improving the attack strategies with the understanding of the detection strategies. Also the authors proposed visualization to interpret poisoned models. 
However, I feel this paper needs major revision to make it a solid piece of work:\", \"Need better motivations. Is there any benefit to exploit model poisoning as opposed to data poisoning? Which one is more effective in attacking (and therefore harder to detect)?\", \"It's confusing to read through Section 3 on these different attack strategies. For instance, in 3.2 the authors introduced explicit boosting and implicit boosting, but only explicit boosting is focused because implicit boosting didn't show good results in Figure 2. But is there a setup that implicit boosting will be beneficial (to the attackers)? I feel the authors introduced many strategies, but didn't give theoretical analysis. It is hard to pick the \\\"best\\\" attack strategy in practice, thus making it equally hard to have the \\\"best\\\" detection strategy.\", \"The figures are also confusing in that it's hard to understand what the 3D figures are trying to show, and it is not obvious what the legend means. The authors should also explain whether this experimental observation is unique to this data set/experimental setup or has similar trends in similar federated learning settings.\", \"Clearly Appendix A is unfinished\", \"I encourage the authors to address these questions carefully and resubmit the manuscript later.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The paper adresses model poisoning, a situation where it is relatively easy (and extremely important) to formally prove the claims (e.g. prove guaranteed convergence), yet not a single formal guarantee is provided.\", \"review\": \"The paper considers the federated learning setting as introduced by McMahan et al. (2017) and aims at securing it against model poisoning attacks.\", \"cons\": \"While I appreciated the writing clarity of the paper, the paper misses the whole point of defensive ML research: in the model poisoning case, a minimal requirement for a defense mechanism is to be formally proven *whatever is the behavior of the attacker* (within the threat model). Experiments alone are not sufficient for this purpose given the size of the space of possible attacks. Especially that (unlike evasion attacks) proofs are relatively easy to be made in the poisoning case.\\n\\nFor instance the literature cited by the paper (Chen 2017, Chen 2018, Blanchard 2017) + the recent follow-ups ((1)Alistarh et al. NIPS 2018, (2) El Mhamdi et al. ICML 2018, (3) Yin et al. ICML 2018 etc) are full of approaches the authors can follow to formally support their claims.\\nAlso, the literature review has been done very lightly: Chen et al. 2017b (And most cited above) do *not* assume a single Byzantine agent as said in the paper, but assume up to <50% malicious (potentially colluding) agents. \\n\\nBesides absence of formal support, how does the approach compare to the optimal results in (1) and (3) at least in the convex case ?\\nIn the abstract, it is said (ii) that in the i.i.d situation, it will be easy to make spurious update standout among benign ones), this was proven wrong in (2) when the dimension of the model is large and the loss function highly non-convex, the case of neural networks for example. 
As a general comment, the defense mechanisms of the paper are all relying on a distance computation and thus will all provide the sqrt(d) leeway for an attacker as described in (2) and will fail preventing high-dimensionality attacks.\", \"pros\": \"I was very excited by the ideas in section 5, this work is the first to my knowledge to attempt at interpreting poisoning attacks. I suggest to the authors to either fix the issues mentioned above (and formally analyze their work), or to focus more on the interoperability question, if they want to keep the paper in the empiricist nature.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Proposing a new setting, interesting for ICLR and has some proof-of-concept results\", \"review\": \"The paper proposes a novel adversarial attack on deep neural networks. It departs from the mainstream literature in two points:\\n1. A 'federated' learning setting is considered, meaning that we optimize a DNN in parallel (imagine a map-reduce approach, where each node performs SGD and then a central server (synchronously) updates the global parameters by averaging over the results of the nodes) and an attacker has control over one of the nodes.\\n2. The treat model is not the common data poisoning setting, but 'model poisoning' (the attacker can send an arbitrary parameter vector back to the server).\\n\\nThe paper, which is well written, starts with proposing a couple of straightforward (naive) attacks, which are subsequently used as a baseline. Since there (apparently) is no direct related work, these baselines are used in the experimental comparisons. Then the authors propose a more sophisticated attacks, based on alternatingly taking a step into the attack direction (to get an effective attack) and minimizing the loss (to Camouflage the attack), respectively. They add also the feature of restricting the solution being not to far away from the usual benign SGD step.\\n\\nAll in all, I am acknowledging that his paper introduces the federated learning paradigm to 'adversarial examples' subcommunity of ICLR and would make for good discussions at a potential poster. I find the used method slightly oversimplistic, but this is maybe fine for a proof of concept paper.\", \"final_judgement\": \"For me this paper is a 6-7 rating paper; a nice addition to the program, but not a must-have.\", \"a_have_a_question_to_the_authors_that_is_important_to_me\": \"it seems that the baseline attack could be very very simply detected by checking on the server the norm of the update vector of the attacked node. Since the vector has been boosted, the norm will be large. While your distance-based regularization somewhat takes that effect away, it remains unclear to what amount. Can you give me some (empirical) details on this issue? / or clarify if I am completely off here? thank you\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
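A note on the mechanics behind the "explicit boosting" baseline discussed throughout the thread above: when the server aggregates agent updates by a simple equal-weight average, a single malicious agent can make its contribution survive the average by scaling its desired update by the number of agents. The NumPy sketch below is an illustrative toy under that equal-weight assumption, not the paper's implementation; the number of agents, the dimensionality, the attacker target, and the benign-update noise scale are all arbitrary choices for the example. It also prints the norm gap that makes such a boosted update easy to flag with the norm- and distance-based checks debated in the reviews, which is what motivates the stealthier alternating-minimization strategy described in the abstract.

```python
# Minimal sketch (illustrative, not the paper's implementation): explicit boosting
# of a single malicious agent's update under plain equal-weight federated averaging.
import numpy as np

rng = np.random.default_rng(0)
num_agents, dim = 10, 5
w_global = np.zeros(dim)

# Benign agents send small, noisy parameter updates.
benign_updates = [rng.normal(0.0, 0.01, size=dim) for _ in range(num_agents - 1)]

# The malicious agent wants the aggregated model to land near w_target.
w_target = np.ones(dim)
desired_update = w_target - w_global
boosted_update = num_agents * desired_update  # scale so it survives 1/num_agents averaging

all_updates = benign_updates + [boosted_update]
w_global = w_global + np.mean(all_updates, axis=0)

print("new global weights:", np.round(w_global, 3))
print("distance to attacker target:", np.linalg.norm(w_global - w_target))
print("boosted vs. typical benign update norm:",
      np.linalg.norm(boosted_update), np.linalg.norm(benign_updates[0]))
```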
rJgP7hR5YQ
COMPOSITION AND DECOMPOSITION OF GANS
[ "Yeu-Chern Harn", "Zhenghao Chen", "Vladimir Jojic" ]
In this work, we propose a composition/decomposition framework for adversarially training generative models on composed data - data where each sample can be thought of as being constructed from a fixed number of components. In our framework, samples are generated by sampling components from component generators and feeding these components to a composition function which combines them into a “composed sample”. This compositional training approach improves the modularity, extensibility and interpretability of Generative Adversarial Networks (GANs) - providing a principled way to incrementally construct complex models out of simpler component models, and allowing for explicit “division of responsibility” between these components. Using this framework, we define a family of learning tasks and evaluate their feasibility on two datasets in two different data modalities (image and text). Lastly, we derive sufficient conditions such that these compositional generative models are identifiable. Our work provides a principled approach to building on pretrained generative models or for exploiting the compositional nature of data distributions to train extensible and interpretable models.
[ "components", "framework", "decomposition", "gans", "work", "data", "sample", "composition", "gans composition", "generative models" ]
https://openreview.net/pdf?id=rJgP7hR5YQ
https://openreview.net/forum?id=rJgP7hR5YQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1lyl8RegN", "BkxNHkNq0m", "HkgZIX79Am", "rkxRCJXqRm", "B1eoKJXq0m", "r1e7Mx3A2m", "Sklg9J-a37", "SJlqBB3FnQ", "SyxhevqD3Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544771046924, 1543286587746, 1543283528875, 1543282645640, 1543282563087, 1541484554919, 1541373832386, 1541158209694, 1541019380459 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1365/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1365/Authors" ], [ "ICLR.cc/2019/Conference/Paper1365/Authors" ], [ "ICLR.cc/2019/Conference/Paper1365/Authors" ], [ "ICLR.cc/2019/Conference/Paper1365/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1365/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1365/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1365/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper investigates composition and decomposition for adversarially training generative models that work on composed data. Components that are sampled from component generators are then fed into a composition function to generate composed samples, aiming to improve modularity, extensibility, and interpretability of GANs. The paper is written very clearly and is easy to follow.\\nExperiments considered application to both images (MNIST) and text (yelp reviews).\\nThe original version of the paper lacks any qualitative analysis, even though experiments were described. Authors revised the paper to include some experimental results, however, they are still not sufficient. State-of-the-art baselines, from previous work suggested by the reviewers should be included for comparison.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Paper falls short of experimental results, especially comparison to state-of-the-art baselines\"}", "{\"title\": \"Thanks for your comments.\", \"comment\": \"Dear sir,\\n\\nOur generators' architectures followed DCGAN. \\nA thing to clarify in MNIST-BB experiment is our composition/decomposition network are Unet and they are not generators. For the details about the networks' architectures, please see apendix.\\nIn terms of your question, Unet composition network shows it can learn rotate, shift and scale(not very large scale) in our experiments. We observed that pooling (in our case is stride=2) is especially important for learning large shifting of foregrounds. We also observed that a fully-conolutional network (encoder-decoder) can achieve similar performance. We actually tried spatial transformer (ST) in our composition network but we failed. The reason is this network needs to learn large shifting that is too far for the gradient to propate for ST. On the other hand, ST-GAN uses a progessive algorithm to update transformation with also support the idea that it is hard to do large warping at one step. Compositionnal GAN uses RAFN network to first change the viewpoint of an object then does spatial transforming that might suggest only ST is not enough. It is not clear whether ST is sufficient to learn all affine transformation under GANs setting.\"}", "{\"title\": \"Thank you reveiwer 2. Please see our reply.\", \"comment\": \"We thank the reviewer for their detailed and thoughtful review. 
We have made some improvements to our paper based on these suggestions - adding quantitative evaluations, adding comparisons to relevant related work and clarifying our notation - please see inline for our detailed responses:\\n\\n>>> Hard to tell whether this approach works since the metrics for evaluation are not specified\\u2026 ...\\n>>> 1. I believe the weakest part of this paper is the evaluation section. ... ...\\n(<<<) We agree with the reviewer that while our qualitative results provide some intuition about which tasks are feasible and which are not, providing qualitative metrics across the entire dataset is important. We supplemented our original qualitative results with quantitative metrics - specifically, we evaluated the foreground generator learned from composed examples using FID score and compared this to our base GAN model trained on the actual foreground dataset (as a theoretical upper bound on performance for the compositional model). We show that as expected, we do not do quite as well when we have to learn to decompose and model the foreground simultaneously, but are within range of the FID scores reported in literature on MNIST and Fashion-MNIST.\\n>>> 2. Aside from evaluation, there are some other details missing from the presentation. ... ...Choices of models are often not explained. ...\\n(<<<)We apologize for the missing details, these details were omitted due to space constraints but we have included the relevant details on the full architecture used, including type of regularization, values of alpha etc., in a new section of the appendix.\\n>>> It is not explained in detail how the Yelp-reviews dataset is altered to achieve coherence. ... ....\\n(<<<) The general architecture of the composition network is described at a high level in section 3.2. In brief, it is a seq-to-seq model that takes the concatenation of the two sentence and outputs a sentence pair that is made more coherent by this network. In addition, we have included additional details on architecture and hyperparameters used in a new section of the appendix.\\n>>> 3. The theoretical section is an interesting contribution, ... ...\\n(<<<) We apologize for the disconnected presentation of our theoretical results. The theoretical results were meant to formalize the intuition from the experimental examples that task 1 and 3 are \\u201cfeasible\\u201d in some sense and to provide sufficient conditions on the composition operation such that tasks 1 and 3 are identifiable. We have edited the text to make this connection clearer.\\n>>> 4. My understanding is that both datasets used are created by the authors by making alterations to MNIST and Yelp-reviews dataset, .... ....\\n(<<<) Our primary goal was to suggest a set of composition / decomposition subtasks (c.f. tasks 1 through 4 in our submission), as well as deriving some basic theoretical results about the identifiability of these tasks (e.g., conditions where one can learn component models from composed data etc). The experimental results were intended more as illustrative examples of when such models were learnable (or not) which motivated our synthetic datasets where the \\u201cground truth\\u201d composition operation is known to us. We agree with the reviewer that it would be interesting to apply our model to more complex datasets and we look forward to exploring that further (along with various extensions of the model that this would require) in future work.\\n>>> 5. In section 2.3, in the coherent sentence experimental setting,... 
...\", \"minor_issues\": \">>> 6. From the related work section, it is not clear how your approach is different from Azadi et al. (2018). Please include more details.\\n(<<<) Complicated but special case of our framework, hence comparison would not be suitable. We cited them\\nWe have added a comparison table1 which explains how our work relates to various other contributions in this area including Azadi et al. \\n>>> 7. In section 2.4, you mention using Wasserstein GANs, ... ...\\n(<<<) We apologize that due to space constraints we were not able to explain the Wasserstein GAN in sufficient detail. We have provided additional details of our architectures in the appendix. \\n>>> 8. I believe there are some errors in which tasks reference which figures in section 3.3. Should Task 2 refers to Figure 6, and Task 3 to Figure 7?\\n(<<<) Yes, that is correct, we apologize for the confusion and have corrected the references.\\n>>> 9. What exactly is range(.) in section 4? If this refers to the interval of values that a variable can take, the saying \\u201cis a matrix of size |range(Z)| \\u00d7 |range(Y )|\\u201d doesn\\u2019t exactly make sense. Please define formally. \\n(<<<) \\u201crange(.)\\u201d refers to the set of values that Z and Y can take on \\u201c| range (X) |\\u201d thus denoting the cardinality of the range of X (the number of values X can take on).\"}", "{\"title\": \"Thank you reveiwer 3. Please see our reply.\", \"comment\": \">>> - There have been works on this before in the GAN literature, they have not been even cited, let alone being compared to in the experiments. Seminal examples include Donahue et al., ICLR 2018 \\\"Semantically decomposing the latent spaces of generative adversarial networks\\\", and (a bit less starkly in terms of the alignment with the goals of this paper): Huang et al., 2017 \\\"Stacked generative adversarial networks\\\".\\n- In general, comparisons to state-of-the-art (or to other) algorithms are missing.\\n\\nWe thank the reviewer for pointing us to some of the related work in this field. We\\u2019ve added a new comparison table that compares our method to other related methods, and in particular, show that to the best of our knowledge we are the first to tackle the general problem of learning a part generator and composition / decomposition directly from composed data. Regarding comparisons specifically to Donahue et al. and Huang et al. please see the section below on \\u201cfactorized\\u201d representations.\\n\\n>>> - Is the assumptions of pre-trained components viable with image, and not text, data? Please elaborate\\n\\n\\nWe apologize for the confusion caused by our presentation of the text example. The assumption of pre-trained components is indeed still viable with text. In our example, the equivalent pre-trained component would be a generative model for sentences from the review corpus - in our experiments, this is done by training a generator on the first/second sentence of reviews in the corpus.\\n\\n>>> - The related work section is missing out on dozens of works, those on disentanglement or interpretability; what is the point then of making a related work section in the first place if only one single example of an algorithm in each broad topic is mentioned? If so, I would suggest mentioning this single example prior to the discussing the topic without a related work section, or (apparently the better option) to do a related work section with a rigorous coverage. 
Examples of some related works on disentanglement and interpretability: \\nHiggins et al., ICLR 2017 \\\"beta-VAE\\\" - Kim & Mnih, ICML 2018 \\\"Disentangling by factorising\\\" - Adel et al., ICML 2018 \\\"Discovering interpretable representations for both deep generative and discriminative models\\\" - Chen et al., NIPS 2017 \\\"InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets\\\", etc.\\n\\n\\nAgain, we thank the reviewer for pointing us to work that is related from the interpretability, disentanglement side of things. As the reviewer correctly points out, there is much recent work in the area of learning disentangled representations of data. However, this work is not directly relevant to our work (as we explain below). We included some examples (including Kim & Mnih etc.) from the disentanglement literature, mostly as a means of explaining to the reader how our work differs from the general approaches in learning disentangled representations and not as a comprehensive review of work in the broad field.\\n\\nAs mentioned in our related work section, the work in disentangled representations is complementary to our work but differs in that 1) we specifically attempt to learn standalone component generators and 2) our composition operations occurs \\u201cindependently\\u201d of sampling the components (we also cite this as a reason why the decomposition learned by our models are unlikely to yield good \\u201cdisentangled\\u201d representations). This is important since our goal is to be able to learn marginal, component generators that can then be reused (e.g., to inductively learn more components as illustrated in our chain learning example).\\n\\nIt is unclear to us, for example, how we could sample individual components from a factorized latent representation (nor should that be the goal when optimizing for interpretability). We see our work as a parallel but complementary approach, focusing on exploring how we can build complex models incrementally with atomic generators.\\n\\nWe also thank the reviewer for their feedback on our related work section, we\\u2019ve fleshed out comparisons to relevant related work in the form of a new table 1 which summarizes our contribution relative to some of the most similar existing work.\\n\\n>>> - The advantages promised in Section 1 are a little bit too presumptuous. Too many idealistic assumptions are need in order for these advantages to hold. For instance, extensibility has been mentioned as an advantage in Section 1 and in the abstract, and that has not been capitalised on, or confirmed in the experiments, or from this point onwards. \\n\\nWe agree with the reviewer that the advantages in section 1 are more aspirational and should be reworded to reflect that. We do note that we have added a cross-dataset chain-learning example which we hope does suggest that these advantages could be realizable using this compositional framework.\"}", "{\"title\": \"Thank you reveiwer 1. Please see our reply.\", \"comment\": \"We thank the reviewer for their detailed and thoughtful review. We have made some improvements to our paper based on these suggestions - adding quantitative evaluations and expanding our comparison to related work - please see inline for our detailed responses:\", \"reply_to_1\": \"First, we\\u2019d like to clarify that the primary intent of our work was to suggest a set of composition / decomposition subtasks (c.f. 
tasks 1 through 4 in our submission), as well as deriving some basic theoretical results about the identifiability of these tasks (e.g., conditions where one can learn component models from composed data etc). The experimental results were intended more as illustrative examples of when such models were learnable (or not) which explained our lack of quantitative evaluations.\\n\\nHowever, we agree with the reviewer that providing a qualitative evaluation across the entire dataset is useful. We supplemented our original qualitative results with quantitative metrics - specifically, we evaluated the foreground generator learned from composed examples using a standard FID score and compared this to our base GAN model trained on the actual foreground dataset (as a theoretical upper bound on performance for the compositional model). We show that, as expected, we do not do quite as well when we have to learn to decompose and model the foreground simultaneously, but are within range of the FID scores reported in literature on MNIST. We further evaluated FID scores on Fashion-MNIST in the same manner as an additional validation.\", \"reply_to_2\": \"We apologise for the lack of clarity in our presentation of the various sub-tasks. Part of the contribution of our work is to enumerate various composition/decomposition tasks and to demonstrate the feasibility of a subset of these tasks. However, we agree with the reviewer that this may result in confusion for the reader. We\\u2019ve edited the introduction to make it clearer that our main focus is to demonstrate that \\u201cchain learning\\u201d is possible since it provides a simple proof-of-concept for modular extensions of GANs.\", \"reply_to_3\": \"We agree that having a pre-specified number of components is a limitation of this framework. We are definitely interested in exploring extensions of such models beyond a fixed, pre-specified number of components. However, we believe that even this constrained version of compositionality has not been extensively explored - especially in terms of our theoretical understanding of when such compositional training is possible.\\n\\nRegarding our compositional operation being too simple, we agree that our composition transformations are not sufficient to capture \\u201creal-world\\u201d composition. Our goal was to show a proof-of-concept on a challenging but still feasible set of composition operations (e.g., in our chain learning example, the composition consists of scaling, rotation and masking).\\nLastly, we agree that most of the tasks assume knowledge of a component generator. This was the main motivation behind our work (how to re-use GANs in a modular fashion), we believe that the chain learning example shows a possible approach for how one can iteratively build up a collection of component generators and hence handle compositional data of increasing complexity.\", \"reply_to_4\": \"We thank the reviewer for the pointer to LR-GANs, that is certainly very interesting and relevant related work. However, there are some key differences between our work and the work on LR-GANs. Firstly, we learn a marginal component model for the foreground that is able to generate foreground samples (instead of generating foreground conditioned on background) this is important for us to be able to reuse component generators as demonstrated in our chain learning examples (we have included a cross-domain chain learning example in the appendix to further illustrate this). 
Secondly, the LR-GAN is restricted to modelling affine compositions and do not learn a corresponding decomposition operation. The authors also demonstrate that both having a good foreground mask and restricting composition to affine transformations is required for good performance of their model in their ablative analysis. We appreciate the insights provided by the authors of LR-GAN, and while these priors are useful when modeling images specifically and may be useful in our contexts as well, we are more focused on identifying where compositional learning is identifiable more generally without \\n\\nIn summary, there are two main differences in the model formulation directly. First, in our framework the foreground generator and background generator are independent, while LR-GAN\\u2019s foreground generator is dependent on background generator. This independence is required for part generators learnt to be reusable. Second, in our framework there is a decomposition operation and a cycle consistency regularization in the model. We showed that this regularization is beneficial to learning a good part generators.\"}", "{\"comment\": \"Dear Authors,\\n \\nCould you please provide more details on your generators\\u2019 architectures? I am particularly interested in your MNIST-BB experiment (Figure 4) and the fact of rotating and shifting digits according to the given background. I think a Resnet or a Unet generator is not able to rotate, shift, and scale objects by an adversarial loss function. This is why a spatial transformer was used in the ST-GAN [1] and Compositional GAN [2] papers. It would be great if you could clarify this.\\n\\n[1]: Lin, et al. \\\"ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing.\\\"\\u00a0\\n[2]: Azadi, et al. \\\"Compositional GAN: Learning Conditional Image Composition.\\\"\", \"title\": \"Not clear how to spatially transform objects\"}", "{\"title\": \"interesting paper, but missed quantitative analysis and comparisons.\", \"review\": \"[Overview]\\n\\nIn this paper, the authors studied the problem of composition and decomposition of GANs. Motivated by the observations that images are naturally composed of multiple layouts, the authors proposed a new framework to study the compositional image generation and its decomposition by defining several tasks. On those various tasks, the authors demonstrate the possibility of the proposed model to composing image components and decompose the images afterwards. These results are interesting and insightful to some extent.\\n\\n[Strengthes]\\n\\n1. The authors proposed a framework for compose images from components and decompose the images into components. Based on this new framework, the authors tried different settings, by fixing the learning of one or more modules in the model. The experiments on various tasks are appreciated.\\n\\n2. In the experiments, the authors tried both image and text to demonstrate the concepts in this paper. Moreover, some qualitative results are presented.\\n\\n[Weaknesses]\\n\\n1. The authors performed multiple experiments regarding various tasks defined in this paper.However, I can hardly find any quantitative evaluation for the results. It is not clear to me that how the quality of the composed images and the decomposed components from images are. I would suggest the authors derive some metric to measure quality quantitatively, provide some statistics on the whole datasets.\\n\\n2. 
In this paper, the authors proposed multiple tasks in terms of which parts are fixed and known in the training process. However, dominated by so many different tasks, the core idea is lost in the paper. From the paper, I cannot get the core idea the authors want to deliver. I would suggest the authors focus on one certain task and perform more qualitative and quantitative analysis and comparisons, as also mentioned above.\\n\\n3. The proposed model has several tricky parts. First, the number of components is pre-determined. However, in realistic cases, the number of components is unknown, and thus deciding how many component generators should be used is ill-posed. Second, the composing operation is simple and tricky. Such a simple composing operation makes it hard to adapt to some more complicated data, such as CIFAR-10. Thirdly, almost all tasks need some components known. Even for Task 4, c is known, and the model performs poorly for generating the disentangled components.\\n\\n4. The authors missed one very relevant paper:\", \"lr_gan\": \"Layered Recursive Generative Adversarial Networks for Image Generation. Yang et al.\\n\\nIn the above paper, the authors proposed an end-to-end model for generating images with background and foreground compositionally. It can be applied to a number of realistic datasets. Regardless of the decomposition part in this paper, the proposed method in the above paper seems to be clearly superior to the composition part in this paper considering this paper fails on Task 4. The authors should give credit to the above paper (even the synthesized MNIST dataset looks similar) and make some effort to explain the advantages in comparison to it.\\n\\n[Summary]\\n\\nThis paper proposed a new framework to study the compositionality of images during generation and decomposition. Through several experiments on various tasks, the authors presented some interesting results and provided some insights on the potentials and difficulties in this direction. However, as pointed out above, I think this paper lacks enough experimental analysis and comparison. Its core idea is hard to capture. Also, it missed a comparison to some related work.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"So isolated from the similar works on GANs\", \"review\": [\"There have been works on this before in the GAN literature; they have not even been cited, let alone compared to in the experiments. Seminal examples include Donahue et al., ICLR 2018 \\\"Semantically decomposing the latent spaces of generative adversarial networks\\\", and (a bit less starkly in terms of the alignment with the goals of this paper): Huang et al., 2017 \\\"Stacked generative adversarial networks\\\".\", \"In general, comparisons to state-of-the-art (or to other) algorithms are missing.\", \"Is the assumption of pre-trained components viable with image, and not text, data? Please elaborate.\", \"The related work section is missing out on dozens of works, those on disentanglement or interpretability; what is the point then of making a related work section in the first place if only one single example of an algorithm in each broad topic is mentioned? If so, I would suggest mentioning this single example prior to discussing the topic without a related work section, or (apparently the better option) to do a related work section with rigorous coverage. 
Examples of some related works on disentanglement and interpretability:\", \"Higgins et al., ICLR 2017 \\\"beta-VAE\\\" - Kim & Mnih, ICML 2018 \\\"Disentangling by factorising\\\" - Adel et al., ICML 2018 \\\"Discovering interpretable representations for both deep generative and discriminative models\\\" - Chen et al., NIPS 2017 \\\"InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets\\\", etc.\", \"The advantages promised in Section 1 are a little bit too presumptuous. Too many idealistic assumptions are needed in order for these advantages to hold. For instance, extensibility has been mentioned as an advantage in Section 1 and in the abstract, and that has not been capitalised on, or confirmed in the experiments or anywhere from this point onwards.\", \"It will be interesting to see what happens with real-world cases like occlusion, etc.\", \"Writing has room for improvement, in terms of both the flow and the grammar. There are a few typos.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting new problem formulation, not carefully presented and evaluated.\", \"review\": \"The paper proposes a framework for training generative models that work on composed data. The models are trained in an adversarial fashion. The authors apply it to decompose foreground/background parts on MNIST images, and to perform sentence composition/decomposition.\", \"high_level_comments\": [\"Clarity: In terms of language and writing style, the paper is written very clearly and is easy to follow. In terms of presentation, there are some details that are omitted which would have made understanding easier and the work more reproducible.\", \"Quality: The idea that is introduced seems intuitive and reasonable, but the experiments do not have enough details to prove that this method works (i.e. no quantitative results presented). Moreover, the presentation of the method is not very well done (missing details), especially since the authors used the upper limit of 10 pages.\", \"Originality: I am not familiar with the literature of generative models to judge this precisely, but according to the related work section it sounds like an original idea that is worth sharing.\", \"Significance: I believe the idea of modeling data composition explicitly sounds intuitive and interesting, and it is worth sharing. However, the experimental section does not have enough evidence that it is actually possible to learn this, so it is not clear whether the contribution is significant.\"], \"pros\": [\"interesting new problem formulation\", \"simple and clear language\", \"the theoretical analysis in the last section could be interesting more generally in the context of GANs\", \"the framework is applied to 2 different modalities: images and text.\"], \"cons\": [\"hard to tell whether this approach works since the metrics for evaluation are not specified and there are no quantitative results in the experimental section (only 1 qualitative example per task)\", \"the work is not reproducible due to the lack of details (see more explanations below)\", \"the theoretical analysis is a standalone piece of the paper, without any discussion about the implications, or making connections to the previous sections.\"], \"detailed_comments\": \"1.\\tI believe the weakest part of this paper is the evaluation section. 
The authors run their framework on 4 tasks of increasing difficulty. While the MNIST examples make for a nice and intuitive qualitative analysis, there are no quantitative results at all. The only result that is reported for each task is one qualitative picture. The authors make statements such as \\u201cThe decomposition network learns to decompose the digits and backgrounds correctly\\u201d, \\u201cGiven one component, decomposition function and the other component can be learned.\\u201d but there is no mention of how these conclusions are made (no metrics, no numbers). Indeed, it is difficult in general to quantify the results of generative models, but most other GAN papers introduce some sort of metric that can be used to aggregate the evaluation on an entire dataset. If the authors manually inspected the results, they should at least report how many images they inspected and how many looked correct. \\n2.\\tAside from evaluation, there are some other details missing from the presentation. The individual details may not be major, but because all of these are missing together, it really affects the overall quality of the paper. For example:\\n -\\tthe authors state: \\u201cTo train discriminator(s), a regularization is applied. For brevity, we do not show the regularization term (see Petzka et al. (2017)) used in our experiments.\\u201d For reproducibility purposes, I believe it is important to at least mention the type of regularization, at least in the appendix. \\n -\\tThere is a parameter alpha used to balance the losses. What value was used in the experiments?\\n -\\tChoices of models are often not explained. Why did you choose that form for c(o1, o2) in section 3.3? Why DCGAN for component generators, and U-net for decomposition?\\n -\\tIt is not explained in detail how the Yelp-reviews dataset is altered to achieve coherence. The authors mention that \\u201cAs we sample a pair independently, the input sentences are not generally coherent but the coherence can be achieved with a small number of changes.\\u201d However, the specific algorithm by which these changes are made is not specified, and thus it can\\u2019t be reproduced.\\n3.\\tThe theoretical section is an interesting contribution, but the paper just states a list of theorems without making any connections to the applications used before, or a broader discussion about how these fit in the context of GANs more generally.\\n4.\\tMy understanding is that both datasets used are created by the authors by making alterations to the MNIST and Yelp-reviews datasets, thus making them to some extent synthetic datasets suited to fit this problem formulation. I would have liked to see how this composition/decomposition works on existing datasets with no alterations. Does it still work? \\n5.\\tIn section 2.3, in the coherent sentence experimental setting, I don\\u2019t fully understand the design of the task. Figure 2 shows an example where composition and decomposition are not symmetric (i.e. composing then decomposing does not go back to the input sentences), although one of your losses is supposed to ensure exactly this cyclic consistency. Why not choose another problem that doesn\\u2019t directly violate your assumptions?\", \"minor_issues\": \"6.\\tFrom the related work section, it is not clear how your approach is different from Azadi et al. (2018). Please include more details.\\n7.\\tIn section 2.4, you mention using Wasserstein GANs, with no further details about this model (not even a one line description). 
Without reading their paper, the readers of your paper could not easily follow through this section. The losses further introduced are also not explained intuitively (e.g. what do the two expectation terms in l_g_i represent?).\\n8.\\tI believe there are some errors in which tasks reference which figures in section 3.3. Should Task 2 refer to Figure 6, and Task 3 to Figure 7?\\n9.\\tWhat exactly is range(.) in section 4? If this refers to the interval of values that a variable can take, the statement \\u201cis a matrix of size |range(Z)| \\u00d7 |range(Y )|\\u201d doesn\\u2019t exactly make sense. Please define formally.\", \"final_remarks_and_advice\": \"Overall, I believe the paper introduces some interesting ideas. There is definitely value in the problem definition and theoretical analysis. However, I believe the paper needs more work on presentation and evaluation, especially since the authors opted for 10 pages and according to ICLR guidelines \\u201cReviewers will be instructed to apply a higher standard to papers in excess of 8 pages.\\u201d Hopefully the above comments will help the authors improve this work!\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
H1GLm2R9Km
Learning Backpropagation-Free Deep Architectures with Kernels
[ "Shiyu Duan", "Shujian Yu", "Yunmei Chen", "Jose Principe" ]
One can substitute each neuron in any neural network with a kernel machine and obtain a counterpart powered by kernel machines. The new network inherits the expressive power and architecture of the original but works in a more intuitive way since each node enjoys a simple interpretation as a hyperplane (in a reproducing kernel Hilbert space). Further, using the kernel multilayer perceptron as an example, we prove that in classification, an optimal representation that minimizes the risk of the network can be characterized for each hidden layer. This result removes the need for backpropagation in learning the model and can be generalized to any feedforward kernel network. Moreover, unlike backpropagation, which turns models into black boxes, the optimal hidden representation enjoys an intuitive geometric interpretation, making the dynamics of learning in a deep kernel network simple to understand. Empirical results are provided to validate our theory.
[ "supervised learning", "backpropagation-free deep architecture", "kernel method" ]
https://openreview.net/pdf?id=H1GLm2R9Km
https://openreview.net/forum?id=H1GLm2R9Km
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rkgdyqCweV", "HkeHkKRPlE", "B1g_6u5UeE", "H1xiK8H4kE", "BJxxS8z9Am", "B1x5SQXKAQ", "BJl_zmmKCm", "SJeCRGXKRm", "S1x314iyCm", "r1epFXo1Rm", "SJgTm7sJRm", "HJeGi9_cTQ", "H1evQ5_96X", "B1xshY_qp7", "BJxwGCYLpX", "ryg8TpKL67", "r1xNY6K867", "rJeHa0rZ6Q", "SJgeY0rZTX", "Byebf0BbT7", "rklkp6H-6X", "rJe4OaB-67", "rJxLgTrWp7", "SkeGvhSW67", "rJgDEhHWT7", "B1l4k2B-am", "rkeHpoBWaQ", "ryejatHZaQ", "rJelxUjc2X", "H1xZy96tnQ", "rklAnxYt2X" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545230815631, 1545230557402, 1545148607780, 1543947907223, 1543280183532, 1543217985985, 1543217936254, 1543217877951, 1542595555809, 1542595460875, 1542595364777, 1542257305608, 1542257183367, 1542257074924, 1542000143256, 1542000062059, 1541999995764, 1541656252693, 1541656183903, 1541656072870, 1541655991087, 1541655916032, 1541655789628, 1541655641673, 1541655599035, 1541655515976, 1541655484955, 1541654979007, 1541219816016, 1541163480815, 1541144758176 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/Authors" ], [ "ICLR.cc/2019/Conference/Paper1364/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1364/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1364/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"Your thoughts on our response?\", \"comment\": \"Dear Reviewer 3,\\n\\nHello! We really hope we have fully addressed your concerns in our earlier reply and it would be great if you could give us some feedback. Do you think we have fully addressed your concerns? 
In particular, if you think our newly-added experimental results could validate the practicality of our algorithm, could you please reconsider your rating? Thank you very much!\\n\\nBest regards,\\nPaper 1364 authors\"}", "{\"title\": \"Your thoughts on our response?\", \"comment\": \"Dear Reviewer 2,\\n\\nHello! We really hope we have fully addressed your concerns in our earlier reply and it would be great if you could give us some feedback. In particular, if you think we have adequately addressed your two major technical concerns, could you please reconsider your rating? Thank you very much!\\n\\nBest regards,\\nPaper 1364 authors\"}", "{\"metareview\": \"The reviewers mostly raised two concerns regarding the paper: a) why this algorithm is more interpretable than BP (which is just gradient descent); b) the exposition of the paper is somewhat confusing at various places; c) the lack of large-scale experimental results to show this is practically relevant. In the AC's opinion, a principled kernel-based approach can be counted as interpretable, and therefore the AC would support the paper if a) were the only concern. However, c) seems to be a serious concern since the paper doesn't seem to have experiments beyond Fashion-MNIST (e.g., CIFAR is pretty easy to train these days) and doesn't have experiments with convolutional models. Based on c), the AC decided that the paper is not quite ready for acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta-review\"}", "{\"title\": \"Thank you!\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you so much for increasing the score. We couldn't have made those improvements without your very helpful review. Thanks again for reviewing our paper.\"}", "{\"title\": \"Summary of changes made to the manuscript\", \"comment\": \"Dear reviewers,\\n\\nHere is a summary of all the major changes we have made to our paper according to your comments and requests. We do hope that we have fully addressed your concerns.\\n\\n1. New experimental results. As requested by Reviewer 1, we have added experimental results to better complement our theory. These include:\\n\\n 1) Results of greedy kMLPs on two new datasets (standard MNIST and Fashion-MNIST [1]).\\n 2) New standard MLP baselines including MLPs trained with SGD, Adam, RMSProp+batch normalization and RMSProp+dropout on all datasets.\\n 3) Comparisons of greedy kMLPs and kMLPs trained with standard BP on MNIST.\\n\\nIn these experiments, the greedy kMLPs compared favorably with the MLPs even though no advanced training techniques such as batch normalization or dropout were used for the former. Also, for both the single-hidden-layer and the two-hidden-layer kMLPs, the layer-wise algorithm consistently outperformed BP.\\n\\n2. As pointed out by Reviewer 1, Section 4.3 has been revised to emphasize that the proposed layer-wise algorithm learns network-wise optimality at each layer, justifying the optimality and practicality of the algorithm.\\n\\n3. We have reformulated the two key lemmas (Lemma 4.3 and Lemma 4.5), as requested by Reviewer 1. We think this new formulation is clearer than the previous one.\\n\\n4. We have rewritten the definition of f^(i)_j in par. 3, Section 2, since Reviewer 3 mentioned that the original one was unclear. Also, as requested by Reviewer 1, we have justified expanding the kernel machine on the training sample using the generalized representer theorem [2]. This theorem directly applies in the layer-wise setting.\\n\\n5. 
As pointed out by Reviewer 1 and 2, the balanced class assumption of Lemma 4.3 has now been removed. The proof has also been updated to justify the removal. \\n\\n\\n\\n[1] Xiao, H., Rasul, K., & Vollgraf, R. (2017). Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747.\\n\\n[2] Sch\\u00f6lkopf, B., Herbrich, R., & Smola, A. J. (2001, July). A generalized representer theorem. In International conference on computational learning theory (pp. 416-426). Springer, Berlin, Heidelberg.\"}", "{\"title\": \"New results on Fashion-MNIST\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New results on Fashion-MNIST\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New results on Fashion-MNIST\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New manuscript has been uploaded\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New manuscript has been uploaded\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New manuscript has been uploaded\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New manuscript with more experimental results has been uploaded\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New manuscript with more experimental results has been uploaded\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New manuscript with more experimental results has been uploaded\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. 
Please refer to that for details.\"}", "{\"title\": \"New manuscript has been uploaded\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New manuscript has been uploaded\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"New manuscript has been uploaded\", \"comment\": \"This post contained a description of updates in a past version of the paper. For clarity, all the changes we have made during the rebuttal period to the manuscript are now summarized in the newest reply to all reviewers. Please refer to that for details.\"}", "{\"title\": \"Reply to Reviewer 1 [6/6]\", \"comment\": \"- (p.7, 1st par in Sec 6) [related] \\\"However they did not extend the idea to any **arbitrary** NN\\\" (emphasis mine). Can you please be more specific here?\\n\\n[We meant that the early works that tried to \\\"kernelize\\\" NNs considered only certain NN architectures. As we have listed in Section 6, the most general work in this regard proposed only kMLP and the KN equivalent of CNN [7]. In contrast, we proposed that one can convert any NN to an equivalent KN by the simple procedure described in Section 2.]\\n\\n---------------------------------------------\\n\\n- (p.5-6) [minor] Last sentence in Lemmas 4.3 and 4.5 is slightly confusing. Can you rephrase please?\\n\\n[Edit (Nov. 26): These two lemmas have been rephrased in the newest version of the paper. We thank the reviewer for pointing this out.]\\n\\n---------------------------------------------\\n\\n- (p.6) [minor] You say \\\"the learned decision boundary would generalise better to unseen data\\\". Can you please clarify the last sentence (e.g. being more precise about the meaning of the word \\\"simple\\\" in the same sentence) and provide reference for why this is necessarily the case?\\n\\n[That the learned decision boundary would generalize better to unseen data follows from the fact that the optimal hidden representation at the last layer is so defined such that the bound on the expected classification error of the classifier is minimized. Hence if the l-1^th layer learns a representation that is close to this optimal one, it is of course reasonable to expect the classifier to generalize better to unseen data (sampled from the same distribution as that of the training set).\\n\\nBy \\\"simple\\\", we meant that examples from distinct classes would be far apart in the RKHS and those in the same class would be close. This observation is justified in the proof of Lemma 4.3. And intuitively, this representation is \\\"simple\\\" to classify.]\\n\\n\\n\\n[1] Raghu, M., Gilmer, J., Yosinski, J., & Sohl-Dickstein, J. (2017). Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems (pp. 6076-6085).\\n\\n[2] Bartlett, P. L. (1997). For valid generalization the size of the weights is more important than the size of the network. In Advances in neural information processing systems (pp. 134-140).\\n\\n[3] Bartlett, P. L., & Mendelson, S. (2002). 
Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov), 463-482.\\n\\n[4] Park, J., & Sandberg, I. W. (1991). Universal approximation using radial-basis-function networks. Neural computation, 3(2), 246-257.\\n\\n[5] Micchelli, C. A., Xu, Y., & Zhang, H. (2006). Universal kernels. Journal of Machine Learning Research, 7(Dec), 2651-2667.\\n\\n[6] Sch\\u00f6lkopf, B., Herbrich, R., & Smola, A. J. (2001, July). A generalized representer theorem. In International conference on computational learning theory (pp. 416-426). Springer, Berlin, Heidelberg.\\n\\n[7] Zhang, S., Li, J., Xie, P., Zhang, Y., Shao, M., Zhou, H., & Yan, M. (2017). Stacked Kernel Network. arXiv preprint arXiv:1711.09219.\"}", "{\"title\": \"Reply to Reviewer 1 [5/6]\", \"comment\": \"---------------------\", \"experiments\": \"---------------------\\n\\n- I would appreciate addition of some standard baselines, like MLP combined with dropout or batch normalisation, and optimised with RMSProp (or similar). These would greatly help with assessing competitiveness with current SOTA results.\\n\\n- It would be nice to see the relative contribution of the two main components of the paper. Specifically, an experiment which would evaluate empirical performance of KNs optimised by some form of gradient descent vs. by your layer-wise training rule would be very insightful.\\n\\n[Edit (Nov. 26): We have added all the requested comparisons. Please refer to our summary of all changes made to the manuscript for details.]\\n\\n-----------\", \"other\": \"-----------\\n\\n- (p.2, 1st par in Sec 2) [minor] You state \\\"a kernel machine is a universal function approximator\\\". I suppose that might be true for a certain class of kernels but not in general?! Please clarify.\\n\\n[Indeed, this is true for the certain kernels only [4][5]. We have updated the manuscript to be more specific about this and we thank the reviewer for pointing it out.]\\n\\n---------------------------------------------\\n\\n- (p.2, 3rd par in Sec 2) [minor] Are you using a particular version of the representer theorem in the representation of f_j^(i) as linear combination of feature maps? Please clarify.\\n\\n[We are only using the most standard version of the representer theorem [6]. And we have updated the manuscript to clarify this.]\\n\\n---------------------------------------------\\n\\n- (p.2, end of 1st par in Sec 3) L^(i) is defined as sup over X_i. It is not clear to me that this constant is necessarily finite and I suspect it will not be in general (it will for the RBF kernel (and most stationary kernels) used in experiments though). Finiteness of L^(i) is necessary for the bound in Eq. (2) to be non-vacuous. Please clarify.\\n\\n[It is indeed part of the assumption that L^(i) < \\\\infty. And we have updated the manuscript accordingly. Fortunately, as the reviewer pointed out, this assumption is satisfied by most popular kernels.]\\n\\n---------------------------------------------\\n\\n- (p.3, after 1st display in Sec 4.2.1) [minor] Missing dot after \\\"that we wish to minimise\\\".\\n\\n[Could the reviewer please clarify on this comment? We could not figure out what the reviewer meant by \\\"missing dot\\\". Thanks in advance.]\\n\\n- Next sentence states \\\"**the** optimal F\\\" (emphasis mine) -- I am sorry if I overlooked it, but I did not notice a proof that a solution exists and is unique, and am not familiar enough with the literature to immediately see the answer. 
Perhaps a footnote clarifying the statement would help.\\n\\n[Without further assumption on the loss and the hypothesis space, such a global minimum may not be found in practice and also may not be unique (multiple risk-equivalent solutions may exist). Here we are only referring to the best F the learner can find that minimizes R_l. And it is not important whether that F is \\\"the best\\\" or if \\\"the best\\\" exists at all. We have updated the draft to clarify.]\\n\\n---------------------------------------------\\n\\n- (p.4, 1st par in Sec 4) You say \\\"A generalisation to regression is reserved for future work\\\". I did not expect that based on the first few pages. On high-level, it seems that generalisation to regression need not be trivial as, for example, the optimal representation derived in Lemma 4.3 and Lemma 4.5 explicitly relies on the classification nature of the problem. Can you comment on expected difficulty of extension to regression? \\n\\n[Undoubtedly, generalization of our layer-wise learning algorithm to regression is a nontrivial task. At this point, we only have some very early ideas as to how to proceed. The main difficulty, in our attempt to come up with an equivalent theory for regression, is that the domain is now uncountable instead of a finite set.]\\n\\n- Possibly state in the introduction that only classification is considered in this paper.\\n\\n[We have now stated in the Abstract and also in the Introduction that our theoretical results regarding optimal hidden representations are only for classification and are proven only under certain losses.]\\n\\n---------------------------------------------\"}", "{\"title\": \"Reply to Reviewer 1 [4/6]\", \"comment\": \"[The reviewer is absolutely correct. For \\\\tau >= 4\\\\sqrt{c/N} in Lemma 4.3 (\\\\tau >= \\\\frac{8L^(l)Cd_{l-1}}{\\\\max (|c|, |a|)} \\\\sqrt{c/N} in Lemma 4.5), the representation characterized by Lemma 4.3 (Lemma 4.5) is optimal w.r.t. an upper bound on the population risk. Note that N is the size of the training set and hence, for reasonably-sized datasets, these two conditions are relatively mild. \\n\\nThe true population risk is almost never computable under machine learning settings since we do not make distributional assumptions. For this reason, we believe that it is commonly accepted and also useful to have optimality w.r.t. a bound on the population risk instead of the true population risk. In fact, optimality of this kind justifies many commonly used learning paradigms. For example, for NN in classification, the standard minimization of an empirical risk and a regularization term is considered to be optimal w.r.t. an upper bound on the true risk [2][3].\\n\\nWith that said, we do think that it can be beneficial to investigate the possibility of tightening the bound or deriving new bounds using different techniques, as this may shed light on new optimal hidden representations and hence also new layer-wise learning algorithms. We leave this as future work.]\\n\\n---------------------------------------------\\n\\n- (p.5) There are two assumptions that I find somewhat restrictive. Just before Lemma 4.3 you assume that the number of points in each class must be the same. Can you comment on whether you expect the same representation to be optimal for classification problems with significantly imbalanced number of samples per class? \\n\\n[We thank the reviewer for pointing this out. 
This assumption has been removed in the newest manuscript and we have made changes to the proof to justify the removal. Please refer to page 17 for details.]\\n\\n- The second assumption is after Lemma 4.4 where you state that the stationary kernel k^{l-1} should attain its infimum for all x, y s.t. || x - y || greater than some threshold. This does not hold for many of the popular kernels like RBF, Matern, or inverse multiquadric. Do you think this assumption can be relaxed?\\n\\n[We agree with the reviewer that this assumption is somewhat restrictive. Unfortunately, this assumption cannot be removed in our current proof of the lemma. We are actively working towards improving or completely removing this assumption. Nevertheless, we think that for kernels with light tails such as the RBF kernel, the value would decay quickly away from the origin (for RBF, the decay is exponentially fast). Hence in practice, we expect that this assumption would not be too far away from reality. In all of our experiments, we have used RBF kernels with varied kernel widths. And we have pointed out that RBF kernels do not strictly satisfy this assumption in Section 7.]\\n\\n---------------------------------------------\\n\\n- (p.5) Choice of the dissimilarity measure for G: Can you provide more intuition about why you selected L^1 distance and whether you would expect different results with L^2 or other common metrics?\\n\\n[The choice of dissimilarity measure is somewhat arbitrary. We do not have a specific reason as to why L^1 distance should be favored over L^2 distance or alignment. And we chose L^1 distance just so that we could obtain a concrete result (Lemma 4.5). Although, we should point out that the proof for Lemma 4.5 used the fact that the loss is L^1 distance. We are currently working on producing equivalent results when the loss is L^2 distance or alignment. In practice, however, we have not noticed any significant performance difference among using different loss functions for the hidden layers in our experiments. Hence we expect theoretical results equivalent to Lemma 4.5 to continue to hold for different losses.]\\n\\n---------------------------------------------\\n\\n- (Sec 4.3) Can you please provide more details about the relation of the proposed objective (\\\\hat{R}(F) + \\\\tau max_j ||f_j||_H) to Lemmas 4.3 and 4.5 where the optimal representation was derived for functions that optimise an upper bound in terms of Gaussian complexity (e.g. is the representation that minimises risk w.r.t. the Gaussian bound also optimal with respect to functions that optimise this objective)?\\n\\n[The bounds in Lemma 4.3 and 4.5 using Gaussian complexity should be combined with the bound on Gaussian complexity in Theorem 4.2. This would give the objective \\\\hat{R}_l(f^(l)) + \\\\tau ||f^(l)||_{H_l} for Lemma 4.3 and \\\\hat{R}_{l-1}(F^(l-1)) + \\\\tau max_j ||f_j^(l-1)||_{H_{l-1}} for Lemma 4.5. We have updated the statements of Lemma 4.3 and 4.5 to clarify and we thank the reviewer for pointing this out.\\n\\nNote that a bound containing Gaussian complexity cannot be computed or used as a loss function since Gaussian complexity is not computable in practice (it involves computing the expectation of i.i.d. random variables of an unknown distribution).]\"}", "{\"title\": \"Reply to Reviewer 1 [3/6]\", \"comment\": \"- In general, I am not convinced layer-wise optimality is a useful criterion when what we want to achieve is network-wise optimality. 
As you show in the appendix, if layer-wise optimality is achieved then it implies network-wise optimality; however, layer-wise optimality is only a sufficient condition and likely not a necessary one (except for the simplified scenario studied in B.3). \\n\\n[Edit (Nov. 26): The proposed layer-wise learning algorithm learns network-wise optimality at each layer given that the regularization coefficient is chosen properly. Section 4.3 has been revised to justify this claim. Please refer to the newest manuscript for details.]\\n\\n- It is thus not clear to me why layer-wise training would always be preferable to network-wise training (e.g. using BP) especially because its greedy nature might intuitively prevent learning of hierarchical representations which are commonly claimed to be key to the success of neural networks. Can you please clarify?\\n\\n[Some of the benefits of layer-wise learning as compared to BP have been presented in our response to the reviewer's first comment in the DETAILED COMMENTS section. Please see above for details.\\n\\nRegarding learning hierarchical representations, we think that even if the learning algorithm is greedy, the representations are still built on top of each other and hence, they can still be hierarchical.\\n\\nMoreover, we think that it is still an open question whether BP in fact also implicitly learns a deep NN in a greedy, bottom-up fashion. Some empirical results seem to suggest that the answer is positive, i.e., hierarchical representations learned with BP might have also been learned implicitly in a layer-by-layer manner [1].]\\n\\n---------------------------------------------\\n\\n- (Sec 4.2) I think it would be beneficial to state in the introduction that the \\\"risk\\\" is with respect to the hinge loss which is common in the SVM/kernel literature but much less in the deep learning literature and thus could surprise a few people when they reach this point. \\n\\n[We have updated the manuscript according to the reviewer's comment and we thank the reviewer for bringing this potential confusion into our attention.]\\n\\n-------------------------------\", \"further_questions\": \"-------------------------------\\n\\n- From Lemma 4.3, it seems that the derived representation is only optimal with respect to the **upper bound** on the empirical risk (which for \\\\tau >= 2 will be an upper bound on the population risk). I got slightly confused at this point as my interpretation of the previous text was that the representation is optimal with respect to the population risk itself. Does the upper bound have the same set of optima? Please clarify.\"}", "{\"title\": \"Reply to Reviewer 1 [2/6]\", \"comment\": \"3) BP and in fact, end-to-end training algorithms in general, turns the model into a black box in the following senses. First, they make the design procedure of a deep architecture unintuitive in the sense that it is difficult, if not impossible, to attribute the poor performance of a model to an improper design choice in a certain part or layer of the network. Also, it is hard to assess the quality of training in the hidden layers or interpret the effect of the learning algorithm on the model during training. \\n\\n As we have argued in our reply to all reviewers regarding interpretability, the layer-wise learning approach makes it possible to debug each layer individually since we now have a metric against which we can evaluate the performance of each layer separately. 
And it also has clearer learning dynamics compared to end-to-end training schemes.]\\n\\n\\n- Regarding the criticism that BP forces intermediate layers to correct for \\\"mistakes\\\" made by layers higher up: it seems your layer-wise algorithm attempts to learn the best possible representation in first layer, and then progresses to the next layer where it tries to correct for the potential error of the first layer and so on. In other words it seems that the errors of layers are propagated from first to last, instead of last to first as in BP, but are still being propagated in a sense. I do not immediately see why propagation forward should be preferable. Can you please further explain this point?\\n\\n[We are not sure that we understood this comment correctly. Could the reviewer please be more specific about which part of the paper this comment is referred to? We do not think we have made this particular criticism about BP in Section 4.1 or any part of the paper. It would be helpful if the reviewer can identify the claim in our paper that caused this confusion so that we can rephrase it in a clearer way. Many thanks in advance.]\\n\\n---------------------------------------------\\n\\n- It is proven in the appendix (Lemma B.3) that under certain conditions stacking additional layers never leads to degradation of training loss. Can you please clarify whether additional layers can be helpful even in the case where previous layers already succeeded in learning the optimal representation?\\n\\n[If by succeeding in learning the optimal representaion, the reviewer meant achieving zero loss, then Lemma B.3 would only give a guarantee that we can stack on more layers and still have the resulting model achieve zero loss at the latest layer added. In other words, stacking more layers will certainly not help with further reducing training loss in that case.\\n\\nHowever, in practice, although identical training losses were achieved sometimes while experimenting with models of different depth, many benefits were observed by choosing the deeper model in this case. For example, the upper layers usually converge significantly faster to a smaller or identical training loss compared to the lower layers and hence for example, a three-layer kMLP may take fewer total iterations to reach a certain training loss than a two-layer one. Also, as we have argued in Appendix B.5, upper layers usually have a certain degree of robustness to bad kernel parameterization, hence, making them easier to fine-tune.]\\n\\n---------------------------------------------\\n\\n- (Sec 4.1) Layer-wise vs. network-wise optimality: I find the claim that BP-based learner is not aware of the network-wise optimality confusing. BP explicitly optimises for network-wise optimality and the relative contribution to the network-wise error of each weight is propagated accordingly. I suppose my confusion stems from lack of a clear description of what defines a learner \\\"aware\\\" or \\\"blind\\\" to network-wise optimality. \\n\\n[We do not recall having made the claim that BP is not aware of the network-wise optimality. We certainly agree with the reviewer. Note that the fundamental difficulties in Section 4.1 are for layer-wise learning only. Could the reviewer please be more specific about which sentence/part in Section 4.1 caused this confusion? 
We will rephrase accordingly.\\n\\nTo clarify further, a learner is \\\"aware\\\" of network-wise optimality for each layer when it is an end-to-end learner since in that case, as the reviewer has pointed out, each weight update would be toward minimizing the loss for the entire network. In contrast, a learner is \\\"blind\\\" to network-wise optimality when it works in a layer-wise fashion, optimizing a loss that is not the loss for the entire network one layer at a time.]\"}", "{\"title\": \"Reply to Reviewer 1 [1/6]\", \"comment\": \"First, we thank Reviewer 1 for the very insightful comments. We can see that Reviewer 1 has read through our paper thoroughly and we are truly thankful for that. We will try our best to address the concerns and answer the questions from the reviewer and we hope that the reviewer finds our reply satisfying.\\n\\nComments from the reviewer are listed first with each preceded by a dash. Our replies are put in brackets.\\n\\n-------------------------------\", \"general_comments\": \"-------------------------------\\n\\n- ...which eliminates necessity of gradient-based training.\\n\\n[Just to clarify, our layer-wise learning algorithm only eliminates the need of obtaining gradients using BP. It is still a gradient-based optimization per se.]\\n\\n---------------------------------------------\\n\\n- My rating of the paper is mainly due to the lack of experimental evidence for usefulness of the layer-wise training, and absence of experimental comparison with several baselines (see details below).\\n\\n[The objective of the current paper was to provide a comprehensive solution to the theoretical problem and therefore, majority of the time was spent on trying to achieve this goal. Nevertheless, we completely agree with the reviewer that more empirical results would complement the theory and henceforth, we are working hard to produce more results as suggested. We shall notify all reviewers once we have new results and have updated our manuscript accordingly.]\\n\\n---------------------------------------------\\n\\n- It is also unclear whether the structure of KNs is significantly better than that of NNs in terms of interpretability.\\n\\n[We are thankful that the reviewer brought up this important issue. Please see our reply to all reviewers for our response.]\\n\\n--------------------------------\", \"two_related_papers\": \"--------------------------------\\n\\nWe were not aware of these two papers and we thank Reviewer 1 for bringing them into our attention. Below are our comments on these two works, which we have added to the newest manuscript as well.\\n\\n1) Kulkarni & Karande, 2017: \\\"Layer-wise training of deep networks using kernel similarity\\\" https://arxiv.org/pdf/1703.07115.pdf\\n\\n[This work used the idea of an ideal kernel matrix to train NNs layer-by-layer. The activation of each layer, together with a kernel function that is separate from the architecture, is used to compute a kernel matrix. And the training of each layer amounts to aligning that kernel matrix to an ideal one. The ideal kernel matrix used therein is a special case of the ideal kernel matrix characterized by our Lemma 4.3 and Lemma 4.5. 
However, this work did not discuss or prove the optimality of the underlying hidden representations for NNs.]\\n\\n---------------------------------------------\\n\\n2) Scardapanea et al., 2017: \\\"Kafnets: kernel-based non-parametric activation functions for neural networks\\\" https://arxiv.org/pdf/1707.04035.pdf\\n\\n[This work explored the possibility of substituting the nonlinearities of NNs with kernel expansions. While the resulting networks are similar to our KNs, the authors did not further study specially-tailored training methods as we did in our work. Instead, the resulting models are simply optimized with gradient-based optimization together with BP.]\\n\\n--------------------------------\", \"detailed_comments\": \"--------------------------------\\n\\n- (Sec 4.1) Backpropagation (BP) is being criticised: BP is only a particular implementation of gradient calculation. It seems to me that your criticisms are thus more related to use of iterative gradient-based optimisation algorithms, rather than to obtaining gradients through BP?!\\n\\n[Our criticism for BP in Section 1 is on the scheme of obtaining gradients via BP instead of gradient-based optimization algorithms. Our layer-wise learning algorithm is also gradient-based, as pointed out earlier in our reply. To further clarify, we now provide more details backing up our comments on BP in Section 1:\\n\\n 1) BP can be computationally intensive and memory inefficient. This is because in standard BP, gradients for all layers have to be computed at each update. And one can either save these gradients while updating each layer (memory inefficient), or compute gradient for each layer on the fly (requires a lot of redundant computations since one has to differentiate through all layers between the output and the layer being updated). Clearly, a layer-wise learning approach mitigates this issue.\\n\\n 2) Obtaining gradients through BP can cause the vanishing gradient problem when the model contains a composition of many nonlinear layers. And it is clear that a layer-wise, gradient-based optimization approach is less subject to this issue since one no longer needs to differentiate through multiple layers for each gradient computation.\"}", "{\"title\": \"Reply to Reviewer 3 [2/2]\", \"comment\": \"- the intuition of layer-wise optimality: on page 4, the paper states that \\\"the global min of R_l wrt S_{l-1} can be explicitly identified prior to any training\\\" but intuitively this must condition on some known function/function class F^(l). Could you please enlighten me on this?\\n\\n[This is a great question and the reviewer is indeed correct. This \\\"conditioning\\\" takes the form of the assumption in Lemma 4.3 (and Lemma 4.5) that the actual F^(l) considered is the one returned by a learning paradigm minimizing the loss. Putting this in a more concise form, the optimal representation S^\\\\star_{l-1} is defined as S^\\\\star_{l-1} := argmin_{S_{l-1}} min_{F^(l)} R_l(F^(l), S_{l-1}), where the minimum over F^(l) is understood as the minimum for each fixed S_{l-1}.]\\n\\n---------------------------------------------\\n\\n- the experiments are of small-scale and, as the paper pointed out, only demonstrating the concepts. What are the main practical difficulties preventing this from being applied to bigger networks/bigger datasets?\\n\\n[Edit (Nov. 26): We have added new results on the standard MNIST and Fashion-MNIST datasets. 
Please refer to our summary of all changes made to the manuscript for details.]\\n\\n---------------------------------------------\\n\\n- vanishing gradients: I'm not clear how layer-wise training can avoid this issue - could you please explain this?\\n\\n[As the reviewer knows, vanishing gradient occurs when one has to compute gradients of the form df_l \\\\circ ...\\\\circ f_1(w)/dw, where f_l, ..., f_1 are layers of a deep architecture containing some nonlinear functions. Typically, the magnitude of the resulting gradient at each layer can be small due to some properties of the nonlinearities such as their ranges being between +-1 and having plateaus everywhere but in a small region around the origin, etc. And the larger the l, the smaller the magnitude of the gradient at a layer closer to the input since it would be a product of the gradients from upper layers, all of which being some numbers with small absolute value (likely less than 1).\\n\\nAlthough layer-wise training does not eliminate the computation for gradients or change these troublesome properties of the nonlinearities mentioned above, it only requires computing gradients of the form df_i(w)/dw. And since the composition of layers does not occur, the issue mentioned above would be mitigated to some extent.]\\n\\n---------------------------------------------\\n\\n- some typos: p1 emplying -> employing, p4 supress -> suppress, p5 represnetation -> representation\\n\\n[We have corrected the typos in the newest manuscript. And we are truly thankful that Reviewer 3 brought them into our attention.]\\n\\n\\n\\n[1] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning representations by back-propagating errors. nature, 323(6088), 533.\"}", "{\"title\": \"Reply to Reviewer 3 [1/2]\", \"comment\": \"We would like to first thank Reviewer 3 for the helpful comments and questions. Our detailed reply is provided below. And we hope that it answers the questions from Reviewer 3.\\n\\nComments from the reviewer are listed first with each preceded by a dash. Our replies are put in brackets.\\n\\n---------------------------------------------\\n\\n- the theoretical contributions seem interesting but its interpretation and practicality are somewhat non-intuitive and philosophically troubling\\n\\n[In terms of the interpretation of the theoretical results, our work showed that, thanks to the use of kernel functions as nonlinearities in a connectionist architecture, one can concretely answer the question: What is the best representation for each hidden layer? \\n\\nThis is the fundamental question behind training deep connectionist models [1]. Before, the most widely-accepted answer to this problem was an indirect one: Use BP. This is indirect since, despite that BP does the job of training the model well, it does not provide any interpretable or generalizable knowledge as to what defines a good hidden representation. In contrast, our Lemma 4.3 and Lemma 4.5 provided explicit and general conditions characterizing such optimal hidden representations.\\n\\nIn terms of practicality, these theoretical results removed the need for BP and directly made possible a layer-wise learning algorithm with optimality guarantee equivalent to that offered by BP. 
Among works that try to replace BP, to the best of our knowledge, ours is the first to provide such an optimality guarantee, thanks to the theoretical results in this paper.]\\n\\n---------------------------------------------\\n\\n- interpretability: it's not clear to me if this training scheme is any more interpretable than backprop training (not to mention it's not clear to me how to define interpretability for neural networks).\\n\\n[We are thankful that the reviewer brought up this important issue. Please see our reply to all reviewers for our response.]\\n\\n- Whether BP or any layer-wise training schemes is used, isn't the goal is to get S_{l-1} to the state where S_{l-1}s for examples of different classes are far away from each other as this is easier for the classifier?\\n\\n[For the hidden layers in NN, it is difficult to even talk about the notion of examples being \\\"far away from each other\\\" since there is no natural metric space in which the training can be geometrically interpreted or discussed. Of course, one may use the Euclidean space in which the hidden activation vectors live, but it is not entirely trivial (at least to us) how to prove that what backpropagation does is to push examples from distinct classes as far apart as possible in the metric of that Euclidean space.\\n\\nFor KN trained with the proposed layer-wise algorithm, on the other hand, such an interpretation can be readily applied, as we have shown in Sections 4.2.2 and 4.2.3. And this makes its learning dynamics more transparent and straightforward than NN.]\\n\\n---------------------------------------------\\n\\n- function representation: in section 2, f_j^i(x) is parameterized as a sum of kernel values evaluated at x and the training points. It's unclear to me what is x here -- input to the network or output of the previous layer?\\n\\n[Edit (Nov. 26): We have updated this section to clarify. Please refer to the newest manuscript for details.]\\n\\n- This also has a sum over all training points, so is training kMLPs in a layer-wise fashion more efficient than traditional kernel methods? \\n\\n[In terms of computational complexity, a kMLP is more demanding than a traditional kernel machine since the latter corresponds to a single node in the former. Section 4.4 provided a natural accelerating approach to mitigate this issue in practice.\\n\\nNevertheless, our results in Appendix B.5.1 suggest that kMLP performs much better than traditional kernel machine and kernel machine enhanced by state-of-the-art multiple kernel learning algorithms.]\\n\\n---------------------------------------------\\n\\n- training scheme: what is the order of layers being trained? input to output or output to input? I'm slightly hazy on how to obtain F^(l-1)(S) to compute G_{l-1}. \\n\\n[The training proceeds from input to output. Each layer is trained and frozen afterward. For example, one first train F^(1) to minimize some dissimilarity measure between the ideal kernel matrix G^\\\\star and the actual kernel matrix G_1 defined as (G_1)_{mn} = k^(2)(F^(1)(x_m), F^(1)(x_n)). After the training of F^(1), freeze it and call the frozen state F^{(1)*}. Now start training F^(2) to minimize some dissimilarity measure between G^\\\\star and kernel matrix G_2 defined as (G_2)_{mn} = k^(3)(F^(2) \\\\circ F^{(1)*}(x_m), F^(2) \\\\circ F^{(1)*}(x_n)). 
And so on.]\\n\\n---------------------------------------------\"}", "{\"title\": \"Reply to Reviewer 2 [2/2]\", \"comment\": \"-----------------\", \"questions\": \"-----------------\\n\\n- 1) Since individual node is simply a hyperplane in the induced kernel space, why not just specify the cost function as the risk + \\\\tau norm(weights) ? What is the benefit of explicitly talking about gaussian complexities and delineating Theorem 4.2 when the same can be achieved by writing a much simpler form?\\n\\n[The fact that minimizing empirical loss + \\\\tau norm (weights) effectively minimizes the true (expected) risk is ultimately justified for NN by bounds analogous to that in Theorem 4.2 [6][7][8]. And since KN is structurally different from NN, we wanted to be more rigorous by deriving similar bounds for our novel architecture and justifying the effectiveness of the layer-wise training algorithm from first principles. In other words, we wanted to prove that the learning algorithm we designed for KN would actually minimize the true risk and hence guarantee generalization to unseen data.\\n\\nDespite that, as the reviewer has pointed out, each node in a KN is a hyperplane in an RKHS, it is not entirely clear to us how the above result should follow easily as, after all, the hyperplane is in an RKHS instead of the input space.]\\n\\n- Lemmas 4.4 and 4.5 should be straightforward extensions too if just used in this form since Lemma C.1 follows easily, and again could be simplified a lot by just using the regularized cost function. Am I missing something here?\\n\\n[The difficulty in going from Theorem 4.2 + Lemma 4.3 to Lemma 4.4 + Lemma 4.5 is that the losses are different and so are the dimensions of the layers. These two factors eventually required a different bound (Lemma 4.4) and also a new proof of optimality for the so-defined optimal representation in Lemma 4.3.]\\n\\n---------------------------------------------\\n\\n- 2) Lemma 4.3 assumes separability (since c should be > a for \\\\tau to be positive) of classes, and also balanced classes (since number of positives = number of negatives). Why are these assumptions reasonable ? I understand that the empirical evaluation presented do justify the methodology, but I am wondering if based on these assumptions the theoretical results are of any use in the way they are currently presented. \\n\\n[About the first assumption, recall from Section 3 that a := min_{x, y} k(x, y) and c := max_{x, y} k(x, y). Hence, unless the kernel chosen is a constant (which is of course not of interest in practice), a < c by construction. So we are really making no assumption here. Moreover, just to clarify, neither Lemma 4.3 nor Lemma 4.5 requires a separability assumption.\\n\\nIn terms of the assumption that the classes are balanced, we have been working on it since the initial submission and it turns out that this assumption is not needed at all. We have updated the proof. Please see the new proof on page 16 for details. We thank the reviewer for bringing this up.]\\n\\n----------\", \"minor\": \"----------\\n\\n- Below Def 4.1 \\\"to a standard normal distribution\\\" should be \\\"according to P\\\".\\n\\n[Here, we refer to the sequence g_1, ..., g_N as the sequence that is being ``fitted''. And the g_n's are i.i.d. standard normal random variables. This interpretation was taken directly from [7]. 
We have updated the manuscript to clarify.]\\n\\n---------------------------------------------\\n\\n- Some typos\\n\\n[We have corrected the typos in the newly uploaded version and we thank the reviewer for brining them into our attention.]\\n\\n\\n\\n[1] Park, J., & Sandberg, I. W. (1991). Universal approximation using radial-basis-function networks. Neural computation, 3(2), 246-257.\\n\\n[2] Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of control, signals and systems, 2(4), 303-314.\\n\\n[3] Funahashi, K. I. (1989). On the approximate realization of continuous mappings by neural networks. Neural networks, 2(3), 183-192.\\n\\n[4] Barron, A. R. (1993). Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information theory, 39(3), 930-945.\\n\\n[5] Sun, S., Chen, W., Wang, L., Liu, X., & Liu, T. Y. (2016, February). On the Depth of Deep Neural Networks: A Theoretical View. In AAAI (pp. 2066-2072).\\n\\n[6] Bartlett, P. L. (1997). For valid generalization the size of the weights is more important than the size of the network. In Advances in neural information processing systems (pp. 134-140).\\n\\n[7] Bartlett, P. L., & Mendelson, S. (2002). Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research, 3(Nov), 463-482.\\n\\n[8] Shalev-Shwartz, S., & Ben-David, S. (2014). Understanding machine learning: From theory to algorithms. Cambridge university press.\"}", "{\"title\": \"Reply to Reviewer 2 [1/2]\", \"comment\": \"Firstly, we would like to thank Reviewer 2 for the insightful review. We have found the comments and questions really helpful and we now address them in details. We do hope that Reviewer 2 finds our response satisfying.\\n\\nComments from the reviewer are listed first with each preceded by a dash. Our replies are put in brackets.\\n\\n-----------------\", \"comments\": \"-----------------\\n\\n- I think the interpretability claims have some merits but are over-stated.\\n\\n[We are thankful that the reviewer brought up this important issue. Please see our reply to all reviewers for our response.]\\n\\n---------------------------------------------\\n\\n- Furthermore, the expressive power of universal approximation through kernels holds only asymptotically. So I am not sure if the authors can claim equivalence in expressive powers to more traditional NNs theoretically.\\n\\n[Generally, we think that the issue of expressive power is a highly abstract one and it is usually difficult to argue which model possesses stronger expressive power in very concrete terms. Although, we do agree with the reviewer that the universal approximation property of kernel machines holds only when we do not limit the number and the positions of its centroids [1]. Nevertheless, to the best of our knowledge, all classical universal approximation results for NNs are also asymptotical results [2][3][4].\\n\\nMoreover, intuitively, a single node in a KN (a kernel machine) is already (asymptotically) a universal function approximator. In contrast, it takes at least two layers of NN to be (asymptotically) a universal function approximator. \\n\\nFurther, in terms of some complexity measures such as Gaussian complexity, Lemma B.2 in our paper seems to show that the model complexity of kMLP is comparable to that of MLP [5]. 
And they also scale in a similar way in the depth and width of the network.\\n\\nCombining the arguments above, we expect KN to be at least comparable to NN in terms of expressive power, which is corroborated by our experimental results in the paper.]\"}", "{\"title\": \"Note on interpretability\", \"comment\": \"\", \"concern_on_interpretability\": \"All three reviewers pointed out that it is somewhat imprecise to claim that KN or the layerwise learning paradigm is more interpretable than NN trained with BP. We appreciate the reviewers for bringing up this important issue and we will try our best to clarify our claim.\\n\\nFirst of all, we think that KN is more interpretable in the sense that thanks to the use of kernel function, the model can be embedded in an inner product space in which it is linear. The inner product space provides constructions to interpret learning geometrically and the model being linear makes it easy to visualize and to work with. This enables us to reduce rather abstract problems such as what is the best hidden representation at a given layer to geometric ones. Our derivation of the optimal hidden representations essentially utilized this nice property of KN. Also, this provided a straightforward geometric interpretation of the learning dynamics in greedily-trained KN, as we have discussed in page 5 and page 6.\\n\\nFor NN, the nonlinearities lack such a natural mathematical construction in which we can easily talk about geometric concepts such as distance, angle, etc. Note that although we can embed the hidden activations of an NN in some proper Euclidean space, the model is still not linear and hence is difficult to deal with in that space, defeating the purpose of such an embedding. Moreover, in contrast to KN learned layer-by-layer, the interpretation of the learning dynamics of NN learned with BP remains a challenging theoretical problem. Interestingly, the most notable work along this line of research showed (empirically) that the learning dynamics in NN seem to agree with our theory for KN [1].\\n\\nMoreover, the design process of a KN is now more transparent and intuitive with layer-wise training. This is because by construction of the layer-wise learning algorithm, the quality of hidden representations can be evaluated explicitly at each layer, which provides more information to the user and makes it possible to trace the bad performance of the overall model to a certain layer and debug the layers individually. \\n\\nAlthough, we are not claiming that KN is as transparent as a simple linear model in which the contribution of each input feature to the output can be directly identified. We agree that in that sense, KN and NN are both difficult, if not impossible, to interpret. We agree that this is perhaps the more commonly used definition for model interpretability in machine learning. And we have changed the title of the paper as well as the parts in which interpretability of KN is discussed to avoid any confusion. The changes are reflected in the newest manuscript.\\n\\n\\n\\n[1] Raghu, M., Gilmer, J., Yosinski, J., & Sohl-Dickstein, J. (2017). Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems (pp. 6076-6085).\"}", "{\"title\": \"Good paper, some things are oversold\", \"review\": \"This paper attempts to learn layers of NNs greedily one at a time by using kernel machines as nodes instead of standard nonlinearities. 
The paper is well-written and was an interesting read, despite being notation heavy.\\n\\nI think the interpretability claims have some merits but are over-stated. Furthermore, the expressive power of universal approximation through kernels holds only asymptotically. So I am not sure if the authors can claim equivalence in expressive powers to more traditional NNs theoretically. I have some additional questions about the paper, and I am reserving my recommendation on this paper till the authors answer them. \\n\\n1) Since individual node is simply a hyperplane in the induced kernel space, why not just specify the cost function as the risk + \\\\tau * norm(weights) ? What is the benefit of explicitly talking about gaussian complexities and delineating Theorem 4.2 when the same can be achieved by writing a much simpler form? Lemmas 4.4 and 4.5 should be straightforward extensions too if just used in this form since Lemma C.1 follows easily, and again could be simplified a lot by just using the regularized cost function. Am I missing something here?\\n\\n\\n2) Lemma 4.3 assumes separability (since c should be > a for \\\\tau to be positive) of classes, and also balanced classes (since number of positives = number of negatives). Why are these assumptions reasonable ? I understand that the empirical evaluation presented do justify the methodology, but I am wondering if based on these assumptions the theoretical results are of any use in the way they are currently presented.\", \"minor\": \"Below Def 4.1 \\\"to a standard normal distribution \\\" should be \\\"according to P\\\".\\nSome typos, please proof read e.g. spelling error \\\"represnetation \\\".\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting theoretical work which could be improved by further experimental work\", \"review\": \"****Reply to authors' rebuttal****\\n\\nDear Authors,\\n\\nThank you very much for all the effort you have put into the rebuttal. Based on the improved theoretical and experimental results, I have decided to increase my score from 5 to 6.\\n\\nBest wishes,\\nRev 1\\n\\n\\n****Original review****\\n\\n\\nThis paper explores integration of kernel machines with neural networks based on replacing the non-linear function represented by each neuron with a function living in some pre-defined RKHS. From the theoretical standpoint, this work is a clear improvement upon the work of Zhang et al. (2017). Authors further propose a layer-wise training algorithm based on optimisation of a particular similarity measure between embeddings based on their class assignments at each layer, which eliminates necessity of gradient-based training. However, the experimental performance of the proposed algorithm is somewhat lacking in comparison, perhaps because the authors focus on kernelised equivalents of MLPs instead of CNNs as Zhang et al.\\n\\nMy rating of the paper is mainly due to the lack of experimental evidence for usefulness of the layer-wise training, and absence of experimental comparison with several baselines (see details below). It is also unclear whether the structure of KNs is significantly better than that of NNs in terms of interpretability. 
Apart from the comments below, I would like to ask the authors to discuss relation to the following related papers:\\n\\n\\t1) Kulkarni & Karande, 2017: \\\"Layer-wise training of deep networks using kernel similarity\\\" https://arxiv.org/pdf/1703.07115.pdf\\n\\n\\t2) Scardapanea et al., 2017: \\\"Kafnets: kernel-based non-parametric activation functions for neural networks\\\" https://arxiv.org/pdf/1707.04035.pdf\", \"detailed_comments\": [\"Theory\", \"(Sec 4.1) Backpropagation (BP) is being criticised: BP is only a particular implementation of gradient calculation. It seems to me that your criticisms are thus more related to use of iterative gradient-based optimisation algorithms, rather than to obtaining gradients through BP?! Regarding the criticism that BP forces intermediate layers to correct for \\\"mistakes\\\" made by layers higher up: it seems your layer-wise algorithm attempts to learn the best possible representation in first layer, and then progresses to the next layer where it tries to correct for the potential error of the first layer and so on. In other words it seems that the errors of layers are propagated from first to last, instead of last to first as in BP, but are still being propagated in a sense. I do not immediately see why propagation forward should be preferable. Can you please further explain this point?\", \"It is proven in the appendix (Lemma B.3) that under certain conditions stacking additional layers never leads to degradation of training loss. Can you please clarify whether additional layers can be helpful even in the case where previous layers already succeeded in learning the optimal representation?\", \"(Sec 4.1) Layer-wise vs. network-wise optimality: I find the claim that BP-based learner is not aware of the network-wise optimality confusing. BP explicitly optimises for network-wise optimality and the relative contribution to the network-wise error of each weight is propagated accordingly. I suppose my confusion stems from lack of a clear description of what defines a learner \\\"aware\\\" or \\\"blind\\\" to network-wise optimality. In general, I am not convinced layer-wise optimality is a useful criterion when what we want to achieve is network-wise optimality. As you show in the appendix, if layer-wise optimality is achieved then it implies network-wise optimality; however, layer-wise optimality is only a sufficient condition and likely not a necessary one (except for the simplified scenario studied in B.3). It is thus not clear to me why layer-wise training would always be preferable to network-wise training (e.g. using BP) especially because its greedy nature might intuitively prevent learning of hierarchical representations which are commonly claimed to be key to the success of neural networks. Can you please clarify?\", \"(Sec 4.2) I think it would be beneficial to state in the introduction that the \\\"risk\\\" is with respect to the hinge loss which is common in the SVM/kernel literature but much less in the deep learning literature and thus could surprise a few people when they reach this point.\"], \"futher_questions\": [\"From Lemma 4.3, it seems that the derived representation is only optimal with respect to the **upper bound** on the empirical risk (which for \\\\tau >= 2 will be an upper bound on the population risk). I got slightly confused at this point as my interpretation of the previous text was that the representation is optimal with respect to the population risk itself. Does the upper bound have the same set of optima? 
Please clarify.\", \"(p.5) There are two assumptions that I find somewhat restrictive. Just before Lemma 4.3 you assume that the number of points in each class must be the same. Can you comment on whether you expect the same representation to be optimal for classification problems with significantly imbalanced number of samples per class? The second assumption is after Lemma 4.4 where you state that the stationary kernel k^{l-1} should attain its infinum for all x, y s.t. || x - y || greater than some threshold. This does not hold for many of the popular kernels like RBF, Matern, or inverse multiquadric. Do you think this assumption can be relaxed?\", \"(p.5) Choice of the dissimilarity measure for G: Can you provide more intuition about why you selected L^1 distance and whether you would expect different results with L^2 or other common metrics?\", \"(Sec 4.3) Can you please provide more detaild about the relation of the proposed objective (\\\\hat(R)(F) + \\\\tau max_j ||f_j||_H) to Lemmas 4.3 and 4.5 where the optimal representation was derived for functions that optimise an upper bound in terms of Gaussian complexity (e.g. is the representation that minimises risk w.r.t. the Gaussian bound also optimal with respect to functions that optimise this objective)?\", \"Experiments\", \"I would appreciate addition of some standard baselines, like MLP combined with dropout or batch normalisation, and optimised with RMSProp (or similar). These would greatly help with assessing competitiveness with current SOTA results.\", \"It would be nice to see the relative contribution of the two main components of the paper. Specifically, an experiment which would evaluate empirical performance of KNs optimised by some form of gradient descent vs. by your layer-wise training rule would be very insightful.\", \"Other\", \"(p.2, 1st par in Sec 2) [minor] You state \\\"a kernel machine is a universal function approximator\\\". I suppose that might be true for a certain class of kernels but not in general?! Please clarify.\", \"(p.2, 3rd par in Sec 2) [minor] Are you using a particular version of the representer theorem in the representation of f_j^{(i)} as linear combination of feature maps? Please clarify.\", \"(p.2, end of 1st par in Sec 3) L^{(i)} is defined as sup over X_i. It is not clear to me that this constant is necessarily finite and I suspect it will not be in general (it will for the RBF kernel (and most stationary kernels) used in experiments though). Finiteness of L^{(i)} is necessary for the bound in Eq. (2) to be non-vacuous. Please clarify.\", \"(p.3, after 1st display in Sec 4.2.1) [minor] Missing dot after \\\"that we wish to minimise\\\". Next sentence states \\\"**the** optimal F\\\" (emphasis mine) -- I am sorry if I overlooked it, but I did not notice a proof that a solution exists and is unique, and am not familiar enough with the literature to immediately see the answer. Perhaps a footnote clarifying the statement would help.\", \"(p.4, 1st par in Sec 4) You say \\\"A generalisation to regression is reserved for future work\\\". I did not expect that based on the first few pages. On high-level, it seems that generalisation to regression need not be trivial as, for example, the optimal representation derived in Lemma 4.3 and Lemma 4.5 explicitly relies on the classification nature of the problem. Can you comment on expected difficulty of extension to regression? 
Possibly state in the introduction that only classification is considered in this paper.\", \"(p.7, 1st par in Sec 6) [related] \\\"However they did not extend the idea to any **arbitrary** NN\\\" (emphasis mine). Can you please be more specific here?\", \"(p.5-6) [minor] Last sentence in Lemmas 4.3 and 4.5 is slightly confusing. Can you rephrase please?\", \"(p.6) [minor] You say \\\"the learned decision boundary would generalise better to unseen data\\\". Can you please clarify the last sentence (e.g. being more precise about the meaning of the word \\\"simple\\\" in the same sentence) and provide reference for why this is necessarily the case?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"interesting theoretical analysis of layer-wise training of kernel-based neural networks, concerns about practicality\", \"review\": \"Summary: The paper considers so-called kernel neural networks where the non-linear activation function at each neuron is replaced by a kernelized linear operation, and analyses a layer-wise training scheme to train such networks. The theoretical claims are that (i) the optimal representation at each hidden layer can be determined by getting the similarity between two kernel matrices and (ii) this procedure gives a more interpretable training procedure and can avoid the vanishing gradient problems. Some small-scale experiments are provided.\", \"evaluation\": [\"I have a mixed feeling about this paper: the theoretical contributions seem interesting but its interpretation and practicality are somewhat non-intuitive and philosophically troubling, in my opinion. I did not check the proofs in the appendix so I might have missed some critical info or have not fully understood the experimental set-up.\", \"interpretability: it's not clear to me if this training scheme is any more interpretable than backprop training (not to mention it's not clear to me how to define interpretability for neural networks). Whether BP or any layer-wise training schemes is used, isn't the goal is to get S_{l-1} to the state where S_{l-1}s for examples of different classes are far away from each other as this is easier for the classifier?\", \"function representation: in section 2, fj^i(x) is parameterized as a sum of kernel values evaluated at x and the training points. It's unclear to me what is x here -- input to the network or output of the previous layer? This also has a sum over all training points, so is training kMLPs in a layer-wise fashion more efficient than traditional kernel methods?\", \"training scheme: what is the order of layers being trained? input to output or output to input? I'm slightly hazy on how to obtain F^{(l-1)}(S) to compute G_{l-1}.\", \"the intuition of layer-wise optimality: on page 4, the paper states that \\\"the global min of R_l wrt S_{l-1} can be explicitly identified prior to any training\\\" but intuitively this must condition on some known function/function class F^(l). Could you please enlighten me on this?\", \"the experiments are of small-scale and, as the paper pointed out, only demonstrating the concepts. 
What are the main practical difficulties preventing this from being applied to bigger networks/bigger datasets?\", \"vanishing gradients: I'm not clear how layer-wise training can avoid this issue - could you please explain this?\", \"some typos: p1 emplying -> employing, p4 supress -> suppress, p5 represnetation -> representation\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
B1GIQhCcYm
Unsupervised one-to-many image translation
[ "Samuel Lavoie-Marchildon", "Sebastien Lachapelle", "Mikołaj Bińkowski", "Aaron Courville", "Yoshua Bengio", "R Devon Hjelm" ]
We perform completely unsupervised one-sided image to image translation between a source domain $X$ and a target domain $Y$ such that we preserve relevant underlying shared semantics (e.g., class, size, shape, etc.). In particular, we are interested in a more difficult case than those typically addressed in the literature, where the source and target are ``far'' enough that reconstruction-style or pixel-wise approaches fail. We argue that transferring (i.e., \emph{translating}) said relevant information should involve both discarding source domain-specific information and incorporating target domain-specific information, the latter of which we model with a noisy prior distribution. In order to avoid the degenerate case where the generated samples are only explained by the prior distribution, we propose to minimize an estimate of the mutual information between the generated sample and the sample from the prior distribution. We discover that the architectural choices are an important factor to consider in order to preserve the shared semantics between $X$ and $Y$. We show state-of-the-art results on the MNIST to SVHN task for unsupervised image to image translation.
[ "Image-to-image", "Translation", "Unsupervised", "Generation", "Adversarial", "Learning" ]
https://openreview.net/pdf?id=B1GIQhCcYm
https://openreview.net/forum?id=B1GIQhCcYm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1xLpi20J4", "rkxgF6RRC7", "SJeMJqfPR7", "ryeutYzPRQ", "SyeAA_Mw07", "r1lSJuMvR7", "HkewPqKK2X", "H1gp9H-xhX", "r1xDSgvJ3Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544633277984, 1543593336280, 1543084506058, 1543084416376, 1543084245618, 1543083996819, 1541147231041, 1540523413319, 1540481086564 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1363/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1363/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1363/Authors" ], [ "ICLR.cc/2019/Conference/Paper1363/Authors" ], [ "ICLR.cc/2019/Conference/Paper1363/Authors" ], [ "ICLR.cc/2019/Conference/Paper1363/Authors" ], [ "ICLR.cc/2019/Conference/Paper1363/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1363/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1363/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper formulates the problem of unsupervised one-to-many image translation and addresses the problem by minimizing the mutual information.\\n\\nThe reviewers and AC note the critical limitation of novelty and comparison of this paper to meet the high standard of ICLR. \\n\\nAC decided that the authors need more works to publish.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Lack of novelty\"}", "{\"title\": \"Rating unchanged\", \"comment\": \"Thanks for your rebuttal. Some issues are fixed but the comparisons with some other works, e.g. perceptual losses, latent code regression constraint and Augmented CycleGAN, are not mentioned. I still think the novelty and comparisons are limited. So I keep the rating.\"}", "{\"title\": \"Thank you for detailed review and pointing out important flaws.\", \"comment\": \"Thank you AnonReviewer1 for the review and bringing some important points. Lack of comparison to existing models is answered in a general comment.\\n\\n> It is not clear how to use mutual information (MINE) for learning. There is no explicit definition of loss function considering MINE term. \\nThe total generator loss combines GAN loss and MI; the latter is estimated between noise prior and generated sample. MINE is optimized concurrently to the GAN discriminator. We agree that explicitly stating TI-GAN objective is important; we will add it to the next revision of our paper.\\n\\n> It is difficult to read due to inconsistent usage of terms (e.g., Figure 3 and 4 (c)s)\\nWe will fix the inconsistency on Figures 3 and 4 that you pointed out\\n\\n> For better understanding, it requires to compare the patterns of MINE loss and adversarial loss.\\nThis is a good point. We will add a more thorough analysis of both MINE losses, which includes ablations studies and plots that evaluate the losses of each MINE estimator.\\n\\n> What is the main difference in the results between DCGAN-based and UNet-based models?\\nUNet models achieved relatively good sample quality and disentanglement between semantics and SVHN-specific features. However, in comparison to DCGAN, the transfer was often incorrect. We will include qualitative results comparing the two architectures in the next version of our paper.\\n\\n> Minor comments\\nWe will add all your recommendation in the next version of our paper.\"}", "{\"title\": \"Thank you for a detailed review.\", \"comment\": \"Thank you AnonReviewer2 for the review. 
We refer to the lack of comparison in a general comment.\\n\\n> The visualization results of TI-GAN, TI-GAN+minI, CycleGAN should be listed with the same source input for fair and easy comparison. For example the failure case of figure 8 mentioned in Section 5.2 only appears in Figure 5 (1) not in Figure 5 (2). \\n\\nGood point, we will add that. However, we think that the results would reflect the same conclusion, that is Ti-GAN using a U-net architecture fails without the MI penalty.\\n\\n> What does the full name of \\u201cTI-GAN\\u201d ? \\n\\nTwo-Input GAN. We will make it more explicit in the paper.\\n\\n> Figure 6 is not mentioned in the experiments.\\n\\nIt should be mentioned, but due to a typo in the latex, we referenced figure 8 instead of figure 6. It will be fixed.\\n\\n> What does the \\u201cFigure A\\u201d mean in Section 4.2 ?\\n\\nWe meant to reference the figures in the appendix A. We will make it more explicit by referencing the figures directly.\"}", "{\"title\": \"Reply to Reviewer 3. Our motivation is different from InfoGAN.\", \"comment\": \"Thank you AnonReviewer3 for the review. Lack of comparison is answered in a general comment.\\n\\n> The proposed method is a simple extension of InfoGAN, applied to image-to-image translation and replacing the mutual information part with MINE.\\nThe purpose of using the mutual information in our paper is different from the one presented in InfoGAN. In our paper, we use the mutual information as a mean to penalize the model for uniquely using the information coming from the noise distribution and disregarding the source. \\nAll I2I modes that aim to produce multimodal (many-to-many) transfer use some sort of prior noise to account for domain-specific features of the target domain (i.e. features not present in the source domain). This, however may lead to a failure mode, where learnt transfer function is agnostic to the source, as shown in Figure 7 (1).\"}", "{\"title\": \"General comment\", \"comment\": \"We would like to thank all reviewers for their effort and pointing out important flaws of the paper. We agree that more comparisons with more recent I2I translation technique are needed. In particular, more in-depth study of the previous work would make it clearer that certain I2I tasks, especially ones that involve more geometric changes (such as MNIST to SVHN), are not yet solved, and that the proposed model addresses certain problems involved in such tasks in a novel way.\"}", "{\"title\": \"Good formulation, but not novel and short comparison\", \"review\": [\"==== After rebuttal ===\", \"I thank the authors for responses. I carefully read the response. But it is difficult to find a reason to increase the score. So, I keep my score.\", \"====================\", \"Unsupervised image-to-image (I2I) translation is an important issue due to various applications and it is still challenging when applied to diverse image data and data where domain gap is large. This paper employs a neural mutual information estimator (MINE) to deal with I2I translation between two domains where there is a large gap. However, this paper contains several issues.\", \"1. Pros. and Cons.\", \"(+) Mathematical definition of I2I translation\", \"(+) Application of mutual information for conserving content.\", \"(-) Lack of comparison with recent I2I models\", \"(-) Lack of experimental results and ablation studies\", \"(-) Unclear novelty\", \"2. Major comments\", \"The novelty of this paper is not clear. 
Excluding the mathematical definition, it seems that the proposed TI simply combines DCGAN and MINE-based statistical networks. For clarifying the novelty, the detailed architecture and final objective functions can be helpful.\", \"Recent works on unsupervised I2I translation are omitted including UNIT [1], MUNIT [2], and DRIP [3]. Also, the authors need to clarify the main difference of TI-GAN from comparing models.\", \"It is not clear to relate the mathematical definition of domain transfer to one-to-many translation within large domain gap.\", \"It is not clear how to use mutual information (MINE) for learning. There is no explicit definition of loss function considering MINE term.\", \"It is short of comparing other state-of-the art models such as UNIT, MUNIT, DRIP, and AugCycleGAN. They compared their results with CycleGAN only.\", \"Experiments are not enough to support the authors\\u2019 insist. There is not any quantitative metric or qualitative result on generating edge-to-shoes.\", \"It is difficult to read due to inconsistent usage of terms (e.g., Figure 3 and 4 (c)s)\", \"For better understanding, it requires to compare the patterns of MINE loss and adversarial loss.\", \"Experiments on more datasets such as animal, season, faces or USPS datasets.\", \"What is the main difference in the results between DCGAN-based and UNet-based models?\", \"Minor\", \"cicle_times symbol looks the product between distribution. But it should be defined before being used.\", \"A reference of CycleGAN is incorrectly cited.\", \"There are some typos in the paper.\", \"page 1: dependent \\u2192 depend\", \"page 3: by separate \\u2192 by separating\", \"page 6: S a I \\u2192 S and I\", \"1. Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks, CoRR, abs/1703.00848, 2017\", \"2. Xun Huang, Ming-Yu Liu, Serge Belongie, Jan Kautz, Multimodal Unsupervised Image-to-Image Translation, CoRR. abs/1804.04732\", \"3. Hsin-Ying Lee, Hung-Yu Tseng, Jia-Bin Huang, Maneesh Kumar Singh, Ming-Hsuan Yang, Diverse Image-to-Image Translation via Disentangled Representations, ECCV 2018.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good problem formulation, Not Novel method.\", \"review\": \"This paper formulated the problem of unsupervised one-to-many image translation and addressed the problem by minimizing the mutual information. A principle formulation of such problem is quite interesting. However, the novelty of this paper is limited. The proposed the method is a simple extension of InfoGAN, applied to image-to-image translation and replacing the mutual information part with MINE.\\n\\nThe experiments, which only include edge to shoes and MNIST to SVHN, are also not comprehensive and convincing. 
This paper also lacks discussion of several quite important related references for one-to-many image translation.\", \"xogan\": \"One-to-Many Unsupervised Image-to-Image Translation\\nToward Multimodal Image-to-Image Translation\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Nice problem formulation but limited model novelty and comparisons.\", \"review\": \"This paper formalizes the problem of unsupervised translation and proposes an augmented GAN framework which uses the mutual information to avoid the degenerate case.\", \"pros\": [\"The formulation for the problem of unsupervised translation is insightful.\", \"The paper is well written and easy to follow.\"], \"cons\": [\"The contribution to the GAN model of this paper is to add the mutual information penalty (MINE, Belghazi et al., 2018) to the GAN loss, which seems incremental. I also wonder if some perceptual losses or latent code regression constraint used in previous works [1,2] can also achieve the same goal.\", \"Comparison to \\u201cAugmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data\\u201d should be done, since it\\u2019s a close related work for unsupervised many-to-many image translation.\", \"The visualization results of TI-GAN, TI-GAN+minI, CycleGAN should be listed with the same source input for fair and easy comparison. For example the failure case of figure 8 mentioned in Section 5.2 only appears in Figure 5 (1) not in Figure 5 (2).\", \"Minor issues: 1) What does the full name of \\u201cTI-GAN\\u201d ? 2) Figure 6 is not mentioned in the experiments. 3) What does the \\u201cFigure A\\u201d mean in Section 4.2 ?\", \"[1] Multimodal Unsupervised Image-to-Image Translation, ECCV\\u201918\", \"[2] Diverse Image-to-Image Translation via Disentangled Representations, ECCV\\u201918\", \"Overall, this paper proposes a nice formulation for the problem of unsupervised translation. But the contribution to the GAN model seems incremental and comparisons to other methods are not enough. My initial rating is rejection.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
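Several of the questions in the reviews above (notably the request for an explicit loss that includes the MINE term) are answered only verbally in the rebuttal: the generator loss is said to combine the usual GAN objective with a mutual-information penalty, estimated by MINE, between the noise prior and the generated sample. The sketch below is an editorial illustration of that idea under assumed toy dimensions, an assumed non-saturating GAN term, and an assumed penalty weight `lambda_mi`; it is not the authors' implementation.

```python
# Illustrative sketch only (assumptions: toy dimensions, non-saturating GAN term,
# penalty weight lambda_mi). Not the authors' released code.
import torch
import torch.nn as nn

x_dim, z_dim, y_dim = 16, 8, 16  # source, noise-prior, and output dimensions (toy)
G = nn.Sequential(nn.Linear(x_dim + z_dim, 32), nn.ReLU(), nn.Linear(32, y_dim))  # translator
D = nn.Sequential(nn.Linear(y_dim, 32), nn.ReLU(), nn.Linear(32, 1))              # GAN discriminator
T = nn.Sequential(nn.Linear(y_dim + z_dim, 32), nn.ReLU(), nn.Linear(32, 1))      # MINE statistics network

def generator_loss(x, z, lambda_mi=0.1):
    y = G(torch.cat([x, z], dim=1))                            # generated / translated sample
    gan_term = -torch.log(torch.sigmoid(D(y)) + 1e-8).mean()   # non-saturating generator loss
    # MINE lower bound on I(z; y): E_joint[T] - log E_marginals[exp(T)],
    # where the product of marginals is approximated by shuffling z within the batch.
    joint = T(torch.cat([y, z], dim=1)).mean()
    z_shuffled = z[torch.randperm(z.size(0))]
    marginal = torch.logsumexp(T(torch.cat([y, z_shuffled], dim=1)).squeeze(), dim=0) \
               - torch.log(torch.tensor(float(z.size(0))))
    mi_estimate = joint - marginal
    # Penalising this estimate discourages the degenerate case the abstract describes,
    # where the output is explained by the prior z alone and the source x is ignored.
    return gan_term + lambda_mi * mi_estimate

x, z = torch.randn(64, x_dim), torch.randn(64, z_dim)
generator_loss(x, z).backward()
```

In full training, the statistics network T would be updated concurrently to maximise the same bound (the rebuttal notes MINE is optimised alongside the GAN discriminator); that inner loop is omitted here for brevity.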
HkxLXnAcFQ
A Closer Look at Few-shot Classification
[ "Wei-Yu Chen", "Yen-Cheng Liu", "Zsolt Kira", "Yu-Chiang Frank Wang", "Jia-Bin Huang" ]
Few-shot classification aims to learn a classifier to recognize unseen classes during training with limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the gap across methods including the baseline, 2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability for few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
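The "slightly modified baseline" referred to in this abstract is described later in the thread (in the responses to AnonReviewer3) as the standard transfer-learning baseline with its final softmax-on-linear layer replaced by a distance-based classifier. The sketch below is a hypothetical illustration of such a cosine-similarity head on top of a generic feature backbone; the dimensions and scale factor are assumptions, and the code is not taken from the authors' repository.

```python
# Hypothetical sketch of a distance-based (cosine-similarity) classifier head of the
# kind described in the rebuttal; dimensions and scale are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim, n_classes, scale=10.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_classes, feat_dim))  # one weight vector per class
        self.scale = scale  # temperature on the cosine scores (assumed value)

    def forward(self, features):
        f = F.normalize(features, dim=1)      # unit-norm features
        w = F.normalize(self.weight, dim=1)   # unit-norm class weights
        return self.scale * f @ w.t()         # cosine-similarity logits

# The logits feed an ordinary cross-entropy loss, both when pre-training on the base
# classes and when fitting a new head on the few labelled novel-class examples.
features = torch.randn(5, 512)                # e.g. ResNet-18 features for a toy batch
head = CosineClassifier(feat_dim=512, n_classes=5)
loss = F.cross_entropy(head(features), torch.tensor([0, 1, 2, 3, 4]))
loss.backward()
```

Normalising both features and class weights is what the abstract and rebuttal credit with reducing intra-class variation relative to a plain linear-plus-softmax head.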
[ "few shot classification", "meta-learning" ]
https://openreview.net/pdf?id=HkxLXnAcFQ
https://openreview.net/forum?id=HkxLXnAcFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryx5H2VCfH", "Bklu8z2wGB", "ByeHZRrvfr", "BJxhOmgVVE", "Bye5ng5xEV", "HkloHv_hxV", "SJx2ptEhgV", "HklXEE2FeE", "B1gwuwZLeV", "S1lH_j94gN", "S1eOn4vQeV", "B1lauBoZlE", "SylLlFSke4", "Byl1iLIhRX", "HylrSpd9RQ", "rklgcRlSC7", "BJlgV0er07", "B1l0Vper0X", "H1xDM6erA7", "S1ehsqeSAX", "HyeAj7bFnX", "HJlAtk3vhm", "r1xNrc0Ts7" ], "note_type": [ "comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1564523586449, 1564095056337, 1564069373361, 1549169524487, 1548947633960, 1545533250881, 1545517508278, 1545352234738, 1545111407099, 1545018221474, 1544938672111, 1544824181177, 1544669421983, 1543427734810, 1543306556714, 1542946440019, 1542946344144, 1542946102222, 1542946063031, 1542945443567, 1541112742427, 1541025669617, 1540381243560 ], "note_signatures": [ [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "ICLR.cc/2019/Conference/Paper1361/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1361/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "ICLR.cc/2019/Conference/Paper1361/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "ICLR.cc/2019/Conference/Paper1361/Authors" ], [ "ICLR.cc/2019/Conference/Paper1361/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1361/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1361/AnonReviewer2" ] ], "structured_content_str": [ "{\"comment\": \"Thanks for the response. Now it makes sense to me. Although meta-learning might not see every data point through random sampling, but it should see most of them.\", \"title\": \"Response\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for your comment. However, both meta-learning methods and baseline/baseline++ methods see all available data points in training. Despite meta-learning methods only use n shots to train in each episode, the n shot batches they use are different in each episode.\"}", "{\"comment\": \"The baseline and baseline++ model use all available examples in each class to train instead of n shots, while the meta learning methods only use n shots to train. In other words, the baseline/baseline++ methods see more data points in training. I wonder such comparison is fair.\", \"title\": \"Fair Comparison?\"}", "{\"title\": \"Code publicly available\", \"comment\": \"Thank you for your interests. We have made our code publicly available: https://github.com/wyharveychen/CloserLookFewShot\\n\\nPlease free to drop us an email at [email protected] if you have any questions.\"}", "{\"comment\": \"Hi, thanks for this interesting work. 
Do you have public available codes to reproduce the results?\", \"title\": \"Avaliable Code\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for your questions.\\n\\nFor hyper-parameters, we use the standard parameters of the ResNet architecture. We use the standard image size of 224 x 224 as input to the ResNet. We believe that the major difference between our re-implementation and the publicly available code for meta-learning methods (including ProtoNet, MatchingNet, MAML, and relation networks) lies in the data augmentation. \\n\\nWe expect to release our code by mid-Jan. Before that, we would be happy to chat more if you have specific questions. Feel free to drop us an email at [email protected]\"}", "{\"comment\": \"Hi authors,\\n\\nI want to reproduce the results on various methods (MatchingNets, ProtoNets, etc.) with ResNet backbones here.\\nBut I could not get the performance like in the paper.\\nWould you mind to give hyper parameters and image size detail? or tricks that you used?\", \"title\": \"Reproduce\"}", "{\"comment\": \"Thank you for the answers.\\nI do appreciate this work. It provides rigorous experiments.\", \"title\": \"Thank you\"}", "{\"title\": \"Response to the three questions\", \"comment\": \"Hi, thanks for your questions! We reply to the three questions below.\\n\\n1. Did the authors run your learning for matching networks, prototypical networks, maml, and relation networks with episodic training (sampled from N-classes and K-Shot every episode) from plain networks(conv and resnet)? Or did you train from baseline networks(pre-trained)? \\n\\nYes, we train all the networks (including matching networks, prototypical networks, MAML, and relation networks) with episodic training from the plain networks. All the networks were randomly initialized with He initialization, the standard initialization used in ResNet.\\n\\n2. What is the number of iteration here? is it the number of episode? or the number you learn the feature extractor (baseline)?\\n \\nYes, the number of iterations refers to the number of the episode. Thanks for pointing this out. We will use the number of episode in the revised manuscript for clarity.\\n\\n3. In MAML paper, they stated that using 64 filters may cause overfitting, do the authors suffer the same thing as you change the backbone of MAML?\\n\\nWe do not see the overfitting effect from observing the validation loss. We believe that it is due to the data augmentation used in all our experiments.\"}", "{\"comment\": \"Hi,\\n\\nThis is a good insight for different backbones impacting the performance in few-shot classification.\\n\\nI want to verify several things here.\\n\\n1. Did the authors run your learning for matching networks, prototypical networks, maml, and relation networks with episodic training (sampled from N-classes and K-Shot every episode) from plain networks(conv and resnet)? Or did you train from baseline networks(pre-trained)? \\n2. What is the number of iteration here? is it the number of episode? or the number you learn the feature extractor (baseline)?\\n3. In MAML paper, they stated that using 64 filters may cause overfitting, do the authors suffer the same thing as you change the backbone of MAML?\\n\\nThanks in advance.\", \"title\": \"Experiments\"}", "{\"title\": \"Thank you, we will\", \"comment\": \"Thank you, we will include them in the appendix of the revised manuscript.\"}", "{\"title\": \"Probably include the discussion in the paper\", \"comment\": \"Thank the authors for the further response. 
The matrix of different settings seems informative. The authors are encouraged to include it in the paper.\"}", "{\"metareview\": \"This paper provides a number of interesting experiments for few-shot learning using the CUB and miniImagenet datasets. One of the especially intriguing experiments is the analysis of backbone depth in the architecture, as it relates to few-shot performance. The strong performance of the baseline and baseline++ are quite surprising. Overall the reviewers agree that this paper raises a number of questions about current few-shot learning approaches, especially how they relate to architecture and dataset characteristics.\", \"a_few_minor_comments\": [\"In table 1, matching nets are mistakenly attributed to Ravi and Larochelle. Should be Vinyals et al.\", \"The notation for cosine similarity in section 3.2 is odd. It looks like you\\u2019re computing some cosine function of two vectors which doesn\\u2019t make sense. Please clarify this.\", \"There are a few results that were promised after the revision deadline, please be sure to include these in the final draft.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"An intriguing experimental paper on the current state of few-shot learning.\"}", "{\"title\": \"Reply about the third setting\", \"comment\": \"Thanks R1 for the reply. Our goal of showing the cross-domain adaptation is to highlight the limitations of existing few-shot classification algorithms problem in handling domain shift. Our results in this setting show that 1) the baseline algorithm surprisingly outperforms all other few-shot classification methods and 2) the performance of few-shot classification algorithms can greatly benefit from further adaptation to the target domain even with a limited amount of data. We believe that our unified experimental setup will facilitate future efforts along this direction.\\n\\nIn the following, we also provide a taxonomy of existing work in related topics based on the availability of labeled/unlabeled data in the target domain, we would add the table to the appendix of the camera ready version to provide a more complete picture for the readers.\\n\\nDomain adaptation (DA): Evaluated on the *same* classes\\t\\t\\t\\t\\t\\n\\t\\t\\t\\t\\t \\t Source domain\\t\\t Target domain\\t\\n\\t\\t\\t\\t Domain shift\\tLabeled \\tUnlabeled\\tLabeled (few) Unlabeled\\nSupervised DA\\t\\t\\n[Saenko et al., ECCV 2010]\\t V\\t\\t V\\t -\\t\\t V \\t\\t -\\n[Motiian et al., NIPS 2017]\\t\\t\\t\\t\\t\\n\\n\\nSemi-supervised DA\\t\\t V\\t\\t V\\t -\\t\\t V\\t\\t V\\n\\n\\nUnsupervised DA \\t\\t V\\t\\t V\\t -\\t\\t -\\t\\t V\", \"few_shot_classification\": \"Evaluated on the *novel* classes\\n\\t\\t\\t\\t\\t\\t Base class\\t\\t Novel class\\t\\n\\t\\t\\t\\t Domain shift\\tLabeled \\t Unlabeled\\tLabeled (few) Unlabeled\\nFew-shot\\t\\t\\t -\\t\\t V\\t -\\t\\t V\\t\\t -\\n\\n\\nCross-domain few-shot\\t\\t\\n[Ours (third setting);\\t\\t V\\t\\t V\\t -\\t\\t V\\t\\t -\\nDong et al. ECML-KDD 2018]\\t\\t\\t\\t\\t\\n\\n\\nSemi-supervised few-shot \\t\\t\\t\\t\\n[Ren et al. ICLR 18]\\t\\t -\\t\\t V\\t V\\t\\t V\\t\\t V\"}", "{\"title\": \"Regarding the third setting / Q4\", \"comment\": \"I appreciate the authors' efforts in improving the experiments. Regarding the third setting (cross-domain adaptation), I still think it is not necessary to introduce it to few-shot learning, at least not now. 
Instead, it is probably better to focus on and try to advance the conventional problem setup for now. Moreover, as the authors point out, the third setting is related to several previously studied directions. I would recommend the authors to discuss those in the paper --- it is probably not a good idea to simply remove Motiian et al. (2017) in the revised PDF.\"}", "{\"title\": \"Responses to AnonReviewer2 -- part 1\", \"comment\": \"Thanks for your opinions! Our responses are as follow:\", \"q1\": \"Is there an overlap between CUB and mini-ImageNet? If so, then domain shift experiments might be too optimistic or even then it is not a big deal?\", \"a1\": \"There are only 3 out of 64 base classes that are *birds* in the mini-ImageNet dataset. Furthermore, these three categories (house_finch, robin, toucan) are different from the 200 bird categories in CUB. Thus, a large domain shift still exists between the mini-ImageNet and the CUB dataset.\", \"q2\": \"The paper includes much redundant information which could go to the appendix in order to not weary the reader. For instance, everything related to Table 1. There is also some overlap between Section 2 and 3.3, while MAML, for instance, is still not well explained. Also, tables with too many numbers are difficult to read, e.g. Table 4.\", \"a2\": \"Thanks for the comments.\\nFirst, our purpose for showing Table 1 is two-fold: 1) it validates our reimplementation by comparing results from the reported numbers and 2) it shows that the implementations of the Baseline method in prior works are underestimated.\\n\\nSecond, we have included a more detailed description of MAML in the revised paper. \\n\\nThird, thanks for the suggestion. To improve the readability, we have modified Table 4 in the original paper to a figure (see Figure 5 in the revised paper). We include the detailed numbers in the appendix for reference.\", \"q3\": \"Many of the few-shot learning papers use Omniglot, so I think it would be a valuable addition to the appendix. Moreover, there exists a cross-domain scenario with Omniglot-> MNIST which I would also like to see in the appendix.\", \"a3\": \"Thanks for the suggestions. We did not include Omniglot because its performance has been saturated in most of the recent work (~99%). We will add the results to the appendix in the camera-ready version for completeness. We agree that the Omniglot-> MNIST experiment will be a good addition to the paper. We will also add the results to the appendix in the camera-ready version.\"}", "{\"title\": \"Responses to AnonReviewer2 -- part 2\", \"comment\": \"Q4: In the Matching Nets paper, there is a good baseline classifier based on k-NNs. Do you know how does that one compares to Baseline and Baseline++ models if used with the same architecture for the feature extractor?\", \"a4\": \"Here we show our 1-shot and 5-shot accuracy of Baseline and Baseline++ with the softmax and 1-NN classifier on the mini-ImageNet dataset with a Conv4 backbone. 
We only include the result of k = 1 with cosine distance to match the setting of Matching Nets paper.\\n\\n1-shot\\n \\t\\t softmax\\t\\t 1-NN (cosine distance)\\nBaseline\\t 42.11% +- 0.71%\\t44.18% +- 0.69%\\nBaseline++ 48.24% +- 0.75%\\t49.57% +- 0.73%\\n\\n5-shot\\n \\t\\t softmax\\t\\t 1-NN (cosine distance)\\nBaseline\\t 62.53% +- 0.69%\\t56.68% +- 0.67%\\nBaseline++ 66.43% +- 0.63%\\t61.93% +- 0.65%\\n\\nAs shown above, using 1-NN classifier has better performance than that of using the softmax classifier in 1-shot setting, but softmax classifier is better in 5-shot setting instead. We note that that the number presented here are not directly comparable to the results reported in the Matching Nets paper because we use a different \\u201cmini-ImageNet\\u201d separation. In this paper, we follow the data split provided by [Ravi et al. ICLR 2017], which is used in most few-shot papers. We have included the result in the appendix of the revised paper.\", \"q5\": \"The conclusion from the network depth experiments is that \\u201cgaps among different methods diminish as the backbone gets deeper\\u201d. However, in a 5-shot mini-ImageNet case, this is not what the plot shows. Quite the opposite: the gap increased. Did I misunderstand something? Could you please comment on that?\", \"a5\": \"Sorry for the confusion. As addressed in 4.3, gaps among different methods diminish as the backbone gets deeper *in the CUB dataset*. In the mini-ImageNet dataset, the results are more complicated due to the domain difference. We further discuss this phenomenon in Section 4.4 and 4.5. We have clarified related texts in the revised paper.\"}", "{\"title\": \"Responses to AnonReviewer1 -- part 1\", \"comment\": \"Thanks for your comments! Our responses are as follow:\", \"q1\": \"\\u201cUsing validation set to determine the free parameters...\\u201d\", \"a1\": \"Thank you for the comment. In our paper, we did use the validation set to select the best number of training iterations for meta-learning methods. Specifically, the exact iterations for experiments on the mini-ImageNet in the 5-shot setting with a four-layer ConvNet are:\\n\\n- ProtoNet: 24,600 iterations\\n- MatchingNet: 35,300 iterations\\n- RelationNet: 37,100 iterations\\n- MAML: 36,700 iterations\\n\\nWe have clarified this in the revised paper. \\n\\nOn the other hand, we were not able to use the validation set for the Baseline and Baseline++. Note that validation set for few-shot problem splits by class, and does not split data in one class. With these validation classes in meta-training stage, one can validate how well the model can predict novel classes in meta-testing stage. However, the Baseline and Baseline++ methods cannot predict validation classes, as they has a fixed softmax layer to predict base classes. On the other hand, for meta-learning methods, the class to predict is conditioned on the class in the support set. Thus, with the support set in validation class, meta-learning methods can predict the validation class. As an alternative for Baseline and Baseline++, we directly train 400 epoches. We observe convergence from the training curve in both the Baseline and Baseline++ methods.\\n\\nFor the learning rate and optimizer, we use Adam with an initial learning rate 0.001 for all of the methods because the ProtoNet, RelationNet, and MAML methods all use the same setting as described in the respective papers. However, we cannot find information about the learning rate for MatchingNet. 
The learning rate of 0.001 is also given as a default hyper-parameter for Tensorflow and PyTorch. The results in Table 1 of our paper ensure that the results reproduce the performance presented in the original papers.\\n\\nFor other hyper-parameters such as the network depth in the backbone architecture, we have a detailed comparison as shown in Section 4.3 of the paper.\", \"q2\": \"The results of RelationNet are missing in Table 4.\", \"a2\": \"Adapting RelationNet using training data in the support set (from novel classes) at the meta-testing stage is non-trivial. As the relation module in RelationNet takes convolution maps as input, we are not able to not replace the relation module with a softmax layer as we do for the ProtoNet and MatchingNet.\\n\\nAs an alternative, at the meta-testing stage, we split the training data in the novel class into support and query data and use them to update the relation module. Specifically, we take the RelationNet with a ResNet-18 feature backbone. We randomly split the few training data in novel class into 3 support and 2 query data to finetune the relation module for 100 epochs. The results on CUB, mini-ImageNet and mini-ImageNet ->CUB are shown below.\\n\\n\\t\\t CUB\\t\\t\\tmini-ImageNet\\tmini-ImageNet -> CUB\\noriginal\\t 82.75% +- 0.58%\\t69.83% +- 0.68%\\t57.71% +- 0.73%\\nadapted\\t 83.17% +- 0.57%\\t70.49% +- 0.68%\\t58.54% +- 0.72%\\n\\nIn all three cases, adapting the relation module using the support data in the meta-testing stage improves the results. However, the improvement is somewhat marginal. We have included the additional results in the revised paper.\"}", "{\"title\": \"Responses to AnonReviewer1 -- part 2\", \"comment\": \"Q3: Another concern is that the same number of novel classes is used in the training and the testing stage. A more practical application of the learned meta model is to use it to handle different testing scenarios.\", \"a3\": \"Thanks for pointing this out. As suggested, we conduct the experiments of 5-way meta-training and N-way meta-testing (where we vary the number of N to be 5, 10, and 20) to examine the effect of handling testing scenarios that are different from training. We compare the methods Baseline, Baseline++, MatchingNet, ProtoNet, and RelationNet. Note that we are unable to apply the MAML method as MAML learns the initialization for the classifier and can thus only be updated to classify the same number of classes.\\n\\nWe show the experimental results on mini-ImageNet with 5-shot meta-training as follows.\", \"backbone\": \"ResNet18\\n\\t 5-way test\\t 10-way test\\t 20-way test\\nBaseline\\t 74.27% +- 0.63%\\t 55.00% +- 0.46%\\t 42.03% +- 0.25%\\nBaseline++\\t *75.68% +- 0.63%*\\t*63.40% +- 0.44%*\\t *50.85% +- 0.25%*\\nMatchingNet\\t 68.88% +- 0.69%\\t 52.27% +- 0.46%\\t 36.78% +- 0.25%\\nProtoNet\\t 73.68% +- 0.65%\\t 59.22% +- 0.44%\\t 44.96% +- 0.26%\\nRelationNet\\t 69.83% +- 0.68%\\t 53.88% +- 0.48%\\t 39.17% +- 0.25%\\n\\nOur results show that for classification with a larger-way (e.g., 10 or 20-way) in the meta-testing stage, the proposed Baseline++ compares favorably against other methods in both shallow or deeper backbone settings.\\n\\nWe attribute the results to two reasons. \\n1) To perform well in a larger N-way classification setting, one needs to further reduce the intra-class variation to avoid misclassification. 
Thus, in both shallow and deeper backbone settings, Baseline++ has better performance than Baseline.\\n\\n2) As meta-learning algorithms were trained to perform 5-way classification in the meta-training stage, the performance of these algorithms may drop significantly when increasing the N-way in the meta-testing stage because the tasks of 10-way or 20-way classification are harder than that of 5-way classification. \\n\\nOne may address this issue by performing a larger N-way classification in the meta-training stage (as suggested in [Snell et al. NIPS 2017]). However, this may encounter the issue of memory constraint. For example, to perform a 20-way classification with 5 support images and 15 query images in each class, we need to fit a batch size of 400 (20 x (5 + 15)) that must fit in the GPUs. Without special hardware parallelization, the large batch size may prevent us from training models with deeper backbones such as ResNet. We have include the result in the appendix of the revised paper.\", \"q4\": \"It is misleading by the following: \\u201cVery recently, Motiian et al. (2017) addresses the few-shot domain adaptation problem.\\u201d...\", \"a4\": \"Thanks for the correction. Indeed, both Saenko et al. Gong et al. address the supervised domain adaptation problem with only a few labeled instances prior to [Motiian et al., NIPS 2017].\\n\\nOn the other hand, we would like to point out another research direction. Very recently, the method in [Dong et al. ECML-PKDD 2018] addresses the few-shot problem where both the domain *and* the categories change. This work is more related to our setting, as we also consider novel category accuracy in few-shot classification under domain differences. We have corrected the statement in the revised paper.\"}", "{\"title\": \"Responses to AnonReviewer3\", \"comment\": \"Thanks for your comments! Our responses are as follow:\", \"q1\": \"If a relatively simple modification could improve the baselines, are there simple modifications available to other meta-learning algorithms being investigated?\", \"a1\": \"The simple modification we made for the baseline approach is to replace the softmax layer with a distance-based classifier. However, among other meta-learning algorithms, only the MAML method is applicable to this modification. Both ProtoNet and MatchingNet already use distance-based classifier in their algorithm. RelationNet has its own relation module so is not applicable for this modification. While MAML could adopt this strategy, we did not include it into our experiment since our primary goal is not to improve one specific method.\", \"q2\": \"If the other algorithms are not as good as they claimed, can you give any insights on why and what to improve?\", \"a2\": \"\", \"meta_learning_for_few_shot_classification_algorithms_are_not_as_good_as_they_claimed_because_of_the_following_two_aspects\": \"First, in the CUB setting, the gap among each algorithm diminished when using a deeper backbone. That is, with a deeper feature backbone, the improvement from different meta-learning algorithm become less significant. Our results suggest that both deeper backbones and meta-learning algorithms both aim to reduce intra-class variation for improving few-classification accuracy. 
Consequently, when intra-class variation has been dramatically reduced using a deeper backbone, the contribution from meta-learning becomes less significant.\\n\\nSecond, in the CUB -> mini-ImageNet setting where a larger domain shift exists, the Baseline method outperforms all meta-learning algorithms. That is, existing meta-learning algorithms are not robust to larger domain shift. As discussed in section 4.4, while meta-learning methods learn to learn from the support set during the meta-training stage, all of the base support sets are still within the same dataset. Thus, these algorithms did not learn how to learn from a support set with large domain shift.\\n\\nWith our results, we encourage the community to tackle the challenge of potential domain shifts in the context of few-shot learning. We will release the source code and evaluation setting that will facilitate future research directions.\"}", "{\"title\": \"Conclusion is a bit confusing\", \"review\": \"The paper tried to propose a systematic/consistent way for evaluating meta-learning algorithms. I believe this is a great direction of research as the meta-learning community is growing quickly. However, my question is if a relatively simple modification could improve the baselines, are there simple modifications available to other meta-learning algorithms being investigated? If the other algorithms are not as good as they claimed, can you give any insights on why and what to improve?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"A nice experimental survey; experiment design could be improved\", \"review\": \"This paper gives a nice overview of existing works on few-shot learning. It groups them into some intuitive categories and meanwhile distills a common framework (Figure 2) employed by the methods. Moreover, the authors selected four of them, along with two baselines, to experimentally compare their performances under a cleaned experiment protocol.\\n\\nThe experiments cover three few-shot learning scenarios respectively for generic object recognition, fine-grained classification, and cross-domain adaptation. While I do *not* think the third scenario is \\u201cmore practical\\u201d, it is certainly nice to have it included in the experiments. \\n\\nThe experiment setup is unfortunately questionable. Since there is a validation set, one should use it to determine the free parameters (e.g., the number of epochs, learning rates, etc.). However, it seems like the same set of free parameters are used for different methods, making the comparison unfair because this set may favor some methods and yet hurt the others. \\n\\nThe results of RelationNet are missing in Table 4.\\n\\nAnother concern is that the same number of novel classes is used in the training and the testing stage. A more practical application of the learned meta model is to use it to handle different testing scenarios. There could be five novel classes in one scenario, 10 novel classes in another, and 100 in the third, etc. The number of labeled examples per class may also vary from one testing scenario to anther.\", \"it_is_misleading_by_the_following\": \"\\u201cVery recently, Motiian et al. (2017) addresses the few-shot domain adaptation problem.\\u201d There are a few variations in domain adaptation (DA). 
The learner has access to the fully labeled source domain and a small set of labeled target examples in supervised DA; to the source domain, a couple of labeled target examples, and many unlabeled target examples in semi-supervised DA; and to the source domain and many unlabeled target data points in unsupervised DA. These have been studied long before Motiian et al. (2017), for instance in the works of Saenko et al. (2010) and Gong et al. (2013).\\n\\n[ref] Saenko K, Kulis B, Fritz M, Darrell T. Adapting visual category models to new domains. In European Conference on Computer Vision, 2010, pp. 213-226. Springer, Berlin, Heidelberg.\\n\\n[ref] Gong B, Grauman K, Sha F. Connecting the dots with landmarks: Discriminatively learning domain-invariant features for unsupervised domain adaptation. In International Conference on Machine Learning, 2013, pp. 222-230.\\n\\nOverall, the paper is well written and may serve as a nice survey of existing works on few-shot learning. The unified experiment setup can facilitate future research with fair comparisons, along with the three testing scenarios. However, I have some concerns, as noted above, about the experiment setup and hence also the conclusions.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"thought-provoking observations and nice comparative experiments\", \"review\": \"There are a few things I like about the paper.\\n\\nFirstly, it makes interesting observations about the evaluation of the few-shot learning approaches, e.g. the underestimated baselines, and compares multiple methods in the same conditions. In fact, one of the reasons for accepting this paper would be to get a unified and, hopefully, well-written implementation of those methods. \\n\\nSecondly, I like the domain shift experiments, but I have the following question. The description of the CUB says that there is an overlap between CUB and ImageNet. Is there an overlap between CUB and mini-ImageNet? If so, are the domain shift experiments too optimistic, or is it not a big deal even then?\\n\\nOne thing I don\\u2019t like is that, in my opinion, the paper includes much redundant information which could go to the appendix in order to not weary the reader. For instance, everything related to Table 1. There is also some overlap between Sections 2 and 3.3, while MAML, for instance, is still not well explained. Also, tables with too many numbers are difficult to read, e.g. Table 4. \\n\\n---- Other notes -----\\n\\nMany of the few-shot learning papers use Omniglot, so I think it would be a valuable addition to the appendix. Moreover, there exists a cross-domain scenario with Omniglot -> MNIST which I would also like to see in the appendix. \\n\\nIn the Matching Nets paper, there is a good baseline classifier based on k-NNs. Do you know how that one compares to the Baseline and Baseline++ models if used with the same architecture for the feature extractor?\\n\\nThe conclusion from the network depth experiments is that \\u201cgaps among different methods diminish as the backbone gets deeper\\u201d. However, in a 5-shot mini-ImageNet case, this is not what the plot shows. Quite the opposite: the gap increased. Did I misunderstand something? 
Could you please comment on that?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
Bkl87h09FX
Looking for ELMo's friends: Sentence-Level Pretraining Beyond Language Modeling
[ "Samuel R. Bowman", "Ellie Pavlick", "Edouard Grave", "Benjamin Van Durme", "Alex Wang", "Jan Hula", "Patrick Xia", "Raghavendra Pappagari", "R. Thomas McCoy", "Roma Patel", "Najoung Kim", "Ian Tenney", "Yinghui Huang", "Katherin Yu", "Shuning Jin", "Berlin Chen" ]
Work on the problem of contextualized word representation—the development of reusable neural network components for sentence understanding—has recently seen a surge of progress centered on the unsupervised pretraining task of language modeling with methods like ELMo (Peters et al., 2018). This paper contributes the first large-scale systematic study comparing different pretraining tasks in this context, both as complements to language modeling and as potential alternatives. The primary results of the study support the use of language modeling as a pretraining task and set a new state of the art among comparable models using multitask learning with language models. However, a closer look at these results reveals worryingly strong baselines and strikingly varied results across target tasks, suggesting that the widely-used paradigm of pretraining and freezing sentence encoders may not be an ideal platform for further work.
[ "natural language processing", "transfer learning", "multitask learning" ]
https://openreview.net/pdf?id=Bkl87h09FX
https://openreview.net/forum?id=Bkl87h09FX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SkgxrvzlP4", "rJerSIq5gE", "rJgTYBCxlN", "r1xYl8rzaQ", "BJebTrSGT7", "S1lCFrBzpm", "r1eDs0UkaQ", "SJeRLFcth7", "HJlVdg5FhX" ], "note_type": [ "comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1552062264353, 1545410109005, 1544770948644, 1541719537029, 1541719480589, 1541719429632, 1541529247297, 1541151061758, 1541148779581 ], "note_signatures": [ [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1360/Authors" ], [ "ICLR.cc/2019/Conference/Paper1360/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1360/Authors" ], [ "ICLR.cc/2019/Conference/Paper1360/Authors" ], [ "ICLR.cc/2019/Conference/Paper1360/Authors" ], [ "ICLR.cc/2019/Conference/Paper1360/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1360/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1360/AnonReviewer1" ] ], "structured_content_str": [ "{\"comment\": \"Additional note on the final decision is shocking!\", \"title\": \"Subjective decision of the area chairs????\"}", "{\"title\": \"Grumbly reply\", \"comment\": \"I know that this is a moot point, but I do want to leave a brief response for the record.\\n\\nI sincerely hope that the stated reasons for rejection are not the real reasons. I won't claim that this paper deserved to get in, but I will claim that the stated reasons\\u2014that the paper presents new analysis and system-comparison results (rather than new methods) based on a several-months-old baseline\\u2014set very a worrying precedent.\\n\\nThis would rule out _any_ time-consuming analysis on exactly those topics where analysis would be most useful.\\n\\n\\u2013 SB\"}", "{\"title\": \"perhaps not strong novelty but interesting insights based on extensive experiments on ELMO\", \"metareview\": \"This paper presents an extensive empirical study to sentence-level pre-training. The paper compares pre-trained language models to other potential alternative pre-training options, and concludes that while pre-trained language models are generally stronger than other alternatives, the robustness and generality of the currently available method is less than ideal, at least with respect to ELMO-based pretraining.\", \"pros\": \"The paper presents an extensive empirical study that offers new insights on pre-trained language models with respect to a variety of sentence-level tasks.\", \"cons\": \"The primarily contributions of this paper is empirical and technical novelty is relatively weak. Also, the insights are based just on ELMO, which may have a relatively weak empirical impact. The reviews were generally positive but marginally positive, which reflect that insights are interesting but not overwhelmingly interesting. None of these is a deal-breaker per say, but the paper does not provide sufficiently strong novelty, whether based on insights or otherwise, relative to other papers being considered for acceptance.\", \"verdict\": \"Leaning toward reject due to relatively weak novelty and empirical impact.\", \"additional_note_on_the_final_decision\": \"The insights provided by the paper are valuable, thus the paper was originally recommended for an accept. However, during the calibration process across all areas, it became evident that we cannot accept all valuable papers, each presenting different types of hard work and novel contributions. 
Consequently, some papers with mostly positive (but marginally positive) reviews could not be included in the final cut, despite their unique values, hard work, and novel contributions.\", \"recommendation\": \"Reject\", \"confidence\": \"3: The area chair is somewhat confident\"}", "{\"title\": \"Author response\", \"comment\": \"Thanks! We agree with your overall assessment. (I guess that\\u2019s not surprising...)\\n\\nThe single-task baselines are a bit confusing, and we\\u2019ll clarify that point in an update shortly. \\n\\nAs you describe, we pretrain a model on the same single task that we later evaluate it on. The tricky point here is that we follow the exact same training procedure here as in the other rows of that table, to make sure we\\u2019re isolating the effect of the data and not the training procedure. In that procedure, we pretrain an encoder, freeze it, then initialize and train a new task-specific model, so we wind up training two different task-specific models: one (always without attention) that\\u2019s used only during encoder pretraining, and another (always with attention) that\\u2019s trained on the frozen encoder and then evaluated.\\n\\nThis is fairly complex, but it allows us to make the comparisons we want to make. The single-task baselines in the original GLUE paper are trained in a single pass with a single target-task model, so those runs aren\\u2019t precisely comparable to ours.\"}", "{\"title\": \"Author response\", \"comment\": \"Thanks!\\n\\nAs you note, tuning is a real limitation that we acknowledge in the paper, and it represents an unavoidable trade-off. By training a lightly-regularized parameter-rich model under training conditions that are generally known to work well, we intend to make a rough but informative comparison possible between data-rich pretraining objectives. \\n\\nOur lack of substantial hyperparameter tuning means that small differences across pretraining settings are unlikely to be meaningful, and likely means that our performance numbers for our lowest-data pretraining tasks are weaker than they would otherwise be, but we still believe that our overall conclusions are both correct and informative as-is.\"}", "{\"title\": \"Author response\", \"comment\": \"Thanks for your review! We agree that we have presented informative results on a few questions surrounding pretraining. We agree with some of your additional points, but we strongly disagree with your implication they make this paper not worth publishing.\\n\\nWe don\\u2019t \\u201cbelieve that ELMO is the best contextual language model\\u201d, as you suggest. _Best_ is subjective, but we note on page 5 that Radford et al.\\u2019s Transformer model outperforms ELMo on GLUE. \\n\\nWe focus on ELMo because it represented the state of the art as of spring, when we started building out the infrastructure for our experiments. Given the scale of this experiment, we would not have had time to conduct a similar analysis that follows the model and methods of Radford et al. (or BERT, which was released **after the ICLR deadline**), but we agree that a similar such analysis would be worthwhile.\\n\\nWe also think that research on ELMo-style models would be worthwhile even if these fine-tuning based systems had come out earlier. 
ELMo\\u2019s pretrain-and-freeze approach to sentence representations is already being widely deployed (see Kitaev and Klein on parsing, or Gehrmann on summarization, for example), and it is likely the most practical approach if one wants to deploy a full NLP toolkit in a setting like a mobile phone where one cannot store a full separate fine-tuned 100MB-1GB encoder file for each task model that the toolkit provides. \\n\\nFurther, many of our experiments on supervised pretraining do not involve ELMo, and we argue that the paper still offers more than enough worthwhile results even if you do not accept that ELMo is in any way worth studying.\\n\\nZooming out a bit, though, we *strongly* disagree with the premise that it\\u2019s not worth publishing a paper if the baseline system does not represent the state of the art as of *publication* time, or that such papers cannot have a substantial impact. That would rule out most experimental papers, and it would rule out _any_ time-consuming analysis on exactly those topics where analysis would be most useful.\"}", "{\"title\": \"Many experiments on a fast-moving field, without clear conclusion\", \"review\": \"The work presented in this paper relates to the impact of the dataset on the performance of contextual embeddings (namely ELMO in this paper) on many downstream tasks, including GLUE tasks, but also alternative NLP tasks.\\n\\nThe work is focused on experiments, and draws several conclusions that are interesting, mostly around the amount of gain one can expect and the fact that the choice of the dataset is task-dependent. \\n\\nOne of the issues is that the authors seem to believe that ELMO is the best contextual language model. The field is moving so quickly that the experiments might become invalid pretty soon (e.g. see the BERT model referenced below).\\n\\nFinally, the analysis is mostly descriptive and there is little insight from the authors about what the future work should be, apart from \\\"we need a better understanding\\\".\", \"minor_details\": \"\", \"page_1\": \"\\\"can yield very strong performance on NLP tasks\\\" is a very busy way to express the fact that Sentence Encoders work well in practice.\\n\\nThe field evolves quickly and ELMO now has a competitive model called BERT (arXiv:1810.04805). I understand that the results of the current paper would have to be re-run on all these tasks, but I'm afraid the current paper will have a limited impact if it does not use the most effective method at the date of publication...\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"This paper presents an extremely comprehensive comparison of sentence representation methods.\", \"review\": \"Only a handful of NLP tasks have an ample amount of labeled data to get state-of-the-art results without using any form of transfer learning. Training sentence representations in an unsupervised manner is hence crucial for real-world NLP applications.\\nContextualized word representations have gained a lot of interest in recent years and the NLP and ML community could benefit from such a detailed comparison of these methods.\\n\\nThis paper's biggest strength is the experimental setting. 
The authors cover a lot of ground in comparing a lot of the recent work, both qualitatively and quantitatively -- there are a lot of experiments.\\nI do understand the computational limitations of the authors (as they mention under HYPERPARAMETER TUNING) and I do agree with their statement \\u201cThe choice not to tune limits our ability to diagnose the causes of poor performance when it occurs\\u201d.\\nExtensive hyper-parameter tuning can make a substantial difference when dealing with NN models; maybe the authors should have considered dropping some of the tasks (the article has more than enough IMHO) and focusing on a smaller subset of tasks with proper hyper-parameter tuning.\\nTable 2 is very interesting; the results suggest that we are indeed very far from a fully robust sentence representation method.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Valuable systematic study of pre-training tasks' influence on downstream task performance\", \"review\": \"This paper presents a thorough and systematic study of the effect of pre-training over various NLP tasks on the GLUE multi-task learning evaluation suite, including an examination of the effect of language model-based pre-training using ELMo. The main conclusion is that both single-task and LM-based pre-training help in most situations, but the gain is often not large, and not consistent across all GLUE tasks.\\n\\nThis paper represents an impressive amount of experimentation. The study and the experimental results will be useful and interesting to the community. The result that some tasks' performance are negatively correlated with each other is surprising. The paper is clearly written. \\n\\nOne clarification question I have is about what the \\\"Single-task\\\" pre-training means. The paper seems to suggest that it consists of pre-training a model on the same task on which it is later evaluated. I'm confused by what this means, and how this is different from just training on that task.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
ByeLmn0qtX
Variational Domain Adaptation
[ "Hirono Okamoto", "Shohei Ohsawa", "Itto Higuchi", "Haruka Murakami", "Mizuki Sango", "Zhenghang Cui", "Masahiro Suzuki", "Hiroshi Kajino", "Yutaka Matsuo" ]
This paper proposes variational domain adaptation, a unified, scalable, simple framework for learning multiple distributions through variational inference. Unlike existing methods for domain transfer through deep generative models, such as StarGAN (Choi et al., 2017) and UFDN (Liu et al., 2018), variational domain adaptation has three advantages. Firstly, samples from the target are not required. Instead, the framework requires one known source as a prior $p(x)$ and binary discriminators, $p(\mathcal{D}_i|x)$, discriminating the target domain $\mathcal{D}_i$ from others. Consequently, the framework regards a target as a posterior that can be explicitly formulated through Bayesian inference, $p(x|\mathcal{D}_i) \propto p(\mathcal{D}_i|x)p(x)$, as exhibited by a further proposed model, the dual variational autoencoder (DualVAE). Secondly, the framework is scalable to large-scale domains. Just as a VAE encodes a sample $x$ as a mode on a latent space, $\mu(x) \in \mathcal{Z}$, DualVAE encodes a domain $\mathcal{D}_i$ as a mode on the dual latent space, $\mu^*(\mathcal{D}_i) \in \mathcal{Z}^*$, named the domain embedding. It reformulates the posterior with a natural pairing $\langle \cdot, \cdot \rangle: \mathcal{Z} \times \mathcal{Z}^* \rightarrow \mathbb{R}$, which can be extended to uncountably infinite domains, such as continuous domains, as well as to interpolation. Thirdly, DualVAE converges quickly without the sophisticated automatic/manual hyperparameter search required by GANs, as it requires only one additional parameter over VAE. Through numerical experiments, we demonstrate these three benefits on a multi-domain image generation task on CelebA with up to 60 domains, and show that DualVAE achieves state-of-the-art performance, outperforming StarGAN and UFDN.
[ "domain adaptation", "variational inference", "multi-domain" ]
https://openreview.net/pdf?id=ByeLmn0qtX
https://openreview.net/forum?id=ByeLmn0qtX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rygZ4pNCIB", "rklI0PATJE", "rJxkNsrnTm", "HJgiq9rh67", "ryl2qtS2p7", "HJeoKdH267", "BJxbaNB2aX", "rJlblAwah7", "Byedwza5nQ", "HJlwXuL527", "r1lGHlGd2m" ], "note_type": [ "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1568718120557, 1544574925953, 1542376230992, 1542376083293, 1542375828439, 1542375555209, 1542374585350, 1541402089199, 1541227104497, 1541199903219, 1541050425572 ], "note_signatures": [ [ "~Yifei_Wang1" ], [ "ICLR.cc/2019/Conference/Paper1359/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1359/Authors" ], [ "ICLR.cc/2019/Conference/Paper1359/Authors" ], [ "ICLR.cc/2019/Conference/Paper1359/Authors" ], [ "ICLR.cc/2019/Conference/Paper1359/Authors" ], [ "ICLR.cc/2019/Conference/Paper1359/Authors" ], [ "~Christian_B_Goldberg1" ], [ "ICLR.cc/2019/Conference/Paper1359/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1359/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1359/AnonReviewer2" ] ], "structured_content_str": [ "{\"comment\": \"Check: let u_i=u_phi=0, sigma^2=2, density ratio = int{1/sqrt(8pi) dz}, which is not integerable. For general cases, even assumed integrability, it can also be shown the equality does not hold.\\n\\nTo get your result, you should assume p(z) to be N(0, sigma^2) rather than N(0, I). If insisting your assumptions, you would end up with a quadrature rather than only linear product involved.\\n\\nFurthermore, you assume p(z)=N(0,I). This is not consistent with p(z|Di)=N(ui, sigma2), which makes p(z) a mixture of Gaussian distribution if D is categorical.\", \"title\": \"Your inner product trick (Eq. 6) seems to be wrong\"}", "{\"metareview\": \"This paper proposes using conditional VAEs for multi-domain transfer and presents results on CelebA and SCUT. As mentioned by reviewers, the presentation and clarity of the work could be improved. It is quite difficult to determine the new/proposed aspects of the work from a first read through. Though we recognize and appreciate that the authors updated their manuscript to improve its clarity, another edit pass with particular focus on clarifying prior work on conditional VAEs and their proposed new application to domain transfer would be beneficial.\\n\\nIn addition, as DIS is the main metric for comparison to prior work and for evaluation of the final approach, the conclusions about the effectiveness of this method would be easier to see if a more detailed description of the metric and analysis of the results were provided. \\n\\nGiven the limited technical novelty and discussion amongst reviewers of the desire for more experimental evidence, this work is not quite ready for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Additional discussion and experiments appreciated, more improvements needed\"}", "{\"title\": \"Response to reviewer2\", \"comment\": \"Thanks for your feedback.\\n\\n> (2) In the abstract and introduction, you state that a source domain is regarded as a prior, and the target domain is regarded as a posterior. From the Method section, I am not sure whether this is a valid statement. In my understanding, equation (1) is the KL summation of all the domains. 
The following derivation assumes that the data of all the domains draw a distribution p(x) (which is the prior), and the data of each domain has a specific distribution p^(i)(x) (which is the posterior). Do you assume that all the domains from D_i to D_n are target domains? Then, what are the source domains?\\n \\nIn fact, the image set of the target domain p(x|D_i) was contained in the source domain p(x).\\nSpecifically, p(x) is a whole face image set, and p(x|D_i) is a face image set that people (i) like.\\nTo consider domain transfer when the image set of the source domain and the image set of the target domain are independent, we prepared two target domain sets p(x|D_1) and p(x|D_2).\\nYou can see that p(x|D_1) should be a source domain set and p(x|D_2) a target domain set.\\n\\n> (3) From eq.(2) to eq.(3), why p(D_i) = \\\\lamda_i is assumed? Is p(D_i) related to the number of the instance in D_i?\\n \\nYes, \\\\lambda_i is the percentage of D_i.\\n \\n> (4) In the prior part of eq.(3), it should have a p(D_i|x) before log p_\\\\theta(x), right?\\n \\nYes. Thank you for the observation.\\n \\n> Where is f(\\\\hat_{D}|x), in the first line of page 4, used?\\n \\nWe did not use it.\\n \\n> What are the optimizers: g and g_e?\\n \\nThe optimizer g is for both the VAE encoder and decoder; g_e is the optimizer for the VAE encoder. Both the optimizers are Adam.\\n \\n> Regarding the experimental studies, what do you want to conclude from the visualization of the domain embeddings? It would be better to give more discussion, analyses or observation for the visualization.\\n \\nVisualizing the domain embeddings, we showed that the original image set p(x) can be transformed into the image set p(x|D_i) of multiple domains.\\nHowever, we think that there were some unclear parts; therefore, we changed the image to a clearer image with a graph of quantitative comparison with other models. Please see the image on p. 8.\\n \\n> For the comparison result with StarGAN, could you elaborate the experimental settings for each method? Could you give more explanation on why MD-VAE outperforms StarGAN.\\n \\nIn the comparison experiment with the existing method, the test images of the CelebA domain transferred by the methods were compared using DIS and changing the parameter five times.\\nSince the CelebA dataset had 40 kinds of attributes, we changed the number of attributes, such as 5, 10, 20, 40, and performed domain transformation. Please see the results on p. 7.\\n \\n> Furthermore, are there any other state-of-the-art baselines that can be compared?\\n \\nWe added experiments of UFDN (NIPS, 2018) to the experimental results (p. 7) of the body.\\nThe reason for choosing UFDN is that SOTA of the domain transfer is StarGAN in the method based on GAN, but it is UFDN in the method based on VAE.\"}", "{\"title\": \"Response to reviewer3\", \"comment\": \"Thanks for your comments.\\n\\n> They assumed a specific setup where one can access domain classifiers P(Di|x), but not the samples from P(x|Di). It is a bit odd: actually they worked mostly on a special (relatively new) dataset named \\\"SCUT-FBP-5500,\\\" which seemed to contain the labeled samples (x,D1,...,Dn). Then, obviously we could access x|Di as well as Di|x. Of course, this type of fully labeled dataset is small in size.\\n \\nSince the samples from p(x|D_i) were not large to clearly generate images, we needed to obtain a sample from p(x) instead of p(x|D_i).\\n \\n> One issue lies in the latent prior learning (ie, optimization of (3)). 
Since they need to evaluate P(Di|x), x is limited to the labeled samples, namely those from the (small-sized) SCUT-FBP-5500 dataset only. So although they wrote expectation wrt p(x) in (3), the p(x) cannot be a large dataset like the CelebA dataset as they intended, but p(x) is limited to a small dataset like SCUT-FBP. The large samples from p(x) are only exploited in the VAE learning part.\\n \\nPlease explain this again because we are unable to understand the meaning of \\u201cthe p(x) cannot be a large dataset like the CelebA dataset.\\u201d\\nWe used the CelebA dataset as a prior distribution of facial images because clear images could not be generated using only the SCUT dataset.\\n\\n> The experimental evaluation was weak; it was evaluated on only one dataset, as compared with the standard VAE and StarGAN, which were not aimed for the particular problem setup that the authors were considering.\\n\\nIn response to your suggestions, we performed the experiments again with two additional datasets:\\n 1. CelebA (40 domains)\\n 2. MNIST (10 domains)\\nThe experiments showed good result and the details of the results have been added in the appendix.\\n \\nAs written in the revised paper, in the experiments using the SCUT-FBP-5500 dataset, we regarded the preference of one person as one domain.\\nIn additional experiments using CelebA and MNIST, we performed domain transfer of facial image attributes and numeric labels, respectively.\\nWe also conducted additional comparison experiments using CelebA with UFDN (NIPS, 2018), CVAE.\\nUFDN was chosen because the SOTA of the domain transfer was StarGAN in the method based on GAN but it was UFDN in the method based on VAE.\\nMoreover, since the proposed method is a model that extends CVAE, we also compared it with the original CVAE.\\nPlease see the updated results on page 7.\\n\\n> At least, they may be able to compare it with a baseline approach, e.g., using the samples from p(x|D_i) available from the SCUT dataset (small though), one can learn encoder/decoder models for each D_i.\\n\\nYes, we experimentally confirmed that the DualVAE is more accurate than the single domain VAEs (SD-VAEs) learned independently in each domain by an experiment using SCUT-FBP-5500 (60 domains).\\nThis is stated in the original paper in the SDVAE vs MDVAE paragraph (p.7).\\n\\n> There appears to be identity changes in many of the face image preference examples. This is unexpected. I would be more inclined to believe that personal preferences are about appearance (style) features rather than identify; yet, most examples in Fig. 6 indicate the opposite.\\n\\nSince the image in Fig. 6 was not explained, the explanation was added in section E (p.14).\\nThis figure shows that by averaging the embedding of domains, DualVAE generates images that are preferred for multiple people.\\nHowever, the more preferred the image, the more the identity will change as you have pointed out.\\nTherefore, it is important to continuously adjust the parameters to the extent that the identity does not change.\"}", "{\"title\": \"Response to reviewer1\", \"comment\": \"Thanks for your feedback.\\n\\n> The proposed method is similar to or equal to the conditional VAE. 
The only difference is the way the condition information is involved during training.\\n\\nWhile the key idea of the proposed method has been used by CVAE, there are no studies that have argued for the relation of the CVAE to domain adaptation.\\nTherefore, our main contribution is bridging CVAE and domain adaptation using DualVAE.\\n \\n> From a similar perspective, we can see that the results change according to the variation of the value $\\\\sigma$.\\n \\nBelow are the results of one of the additional experiments, which indicates that the performance is robust to $\\\\sigma$.\\n\\nMethod | DIS\\n-------------------------------------------\\nStarGAN | 0.087\\nUFDN | 0.002\\nDualVAE (\\u03b1=1) | 0.115\\nDualVAE (\\u03b1=10) | 0.143\\nDualVAE (\\u03b1=10^2) | 0.112\\nDualVAE (\\u03b1=10^3) | 0.109\\nDualVAE (\\u03b1=10^4) | 0.146 \\n\\n\\u03b1=\\\\sigma^{- 2}; the number of domains: 40\\nSince variance of the prior p(x) is 1, we set \\u03b1 >= 1.\\nAdditional details are listed in Fig. 16, Appendix G (p. 19)\\n\\n> It is not intuitive how significant the improvement of 5% in PIS. It would be good to provide the intuitive understanding of the improvement.\\n\\nPIS = reconstruction score + domain transfer score.\\nPlease see Figure 4 (p. 8) and Appendix B to intuitively understand PIS.\\n(In the paper, we changed the name of PIS to DIS).\\nWe compared DualVAE with StarGAN, UFDN, and CVAE, and showed the relation between DIS and the result of domain transfer using each method.\"}", "{\"title\": \"Response to Christian\", \"comment\": \"Thank you for your comments. We intend to make the code public.\"}", "{\"title\": \"We have rewritten the body to reflect the reviewers' suggestions\", \"comment\": \"Thank you for your review comments. We have rewritten the body to reflect your suggestions.\\nWe have added a few more experiments in the appendix and have increased the number of pages from 12 to 25 (including the appendix).\\nPlease note that we have renamed several terms in the body.\\n1. Multi-domain VAE (MD-VAE) --> DualVAE\\n2. Preferential Inception Score (PIS) --> Domain Inception Score (DIS)\"}", "{\"comment\": \"The paper proposes a quite clever trick with inner product and variational autoencoder to address the important research issue of convergence in domain adaptation. It shows an experimental result for transfer across quite large (-60) domains.\\n\\nI'd like to use the technique in my work because several solutions such as StarGAN did not converge although I've tried.\\nIf you have any source code, I'm happy if you can share us it publicly/privately.\", \"title\": \"Very clever idea. Is the source code publicly available?\"}", "{\"title\": \"Unclear contribution\", \"review\": [\"To this reviewer\\u2019s understanding, the proposed method is very similar or equal to the conditional VAE. The only difference comes from the way of involving the condition information during training.\\u0000 This should be clarified and further, it is necessary to compare with the conditional VAE in the experiments, rather than the vanilla VAE.\", \"The proposed method uses a predefined and fixed value of the variance $\\\\sigma^{2}$, which is very informative and should be estimated from data in inference. Basically, there is no specification on this value in their experiments.\", \"In a similar perspective, how the results changes according to the variation of the value $\\\\sigma$.\", \"It is not intuitive how significant the improvement of 5% in PIS. 
It would be good to provide the intuitive understanding of the improvement.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A preference learning generative model (in deep setting), with somewhat unintuitive setting and weak experimental evaluation\", \"review\": \"1) Summary of the paper:\\n\\nThe paper brings up a relatively new problem of learning a generative model for multiple domains. The domains, D1,...,Dn, may refer to person-specific preferred images, for instance, and they focus on how to build generative models P(x|Di), which represents a set of images preferred by subject i.\\n\\nThey assumed a specific setup where one can access domain classifiers P(Di|x), but not the samples from P(x|Di). It is a bit odd: actually they worked mostly on a special (relatively new) dataset named \\\"SCUT-FBP-5500\\\", which seems to contain labeled samples, (x,D1,...,Dn) -- then, obviously we can access x|Di as well as Di|x. Of course, this type of fully labeled dataset is small-sized.\\n\\nTheir approach is basically to partition the latent space by the domains D1,...,Dn. They utilize the standard VAE model which is shared across the domains, and introduce domain-specific latent priors P(z|D_i) which are Gaussians. The learning is essentially a combination of the VAE learning and the latent prior learning, where the latter is done by enforcing the generated samples x from each Di to be consistent with the domain classifier P(Di|x). This strategy sounds reasonable enough.\\n\\nOne issue lies in the latent prior learning (ie, optimization of (3)). Since they need to evaluate P(Di|x), x is limited to the labeled samples, namely those from the (small-sized) SCUT-FBP-5500 dataset only. So although they wrote expectation wrt p(x) in (3), the p(x) cannot be a large dataset like the CelebA dataset as they intended, but p(x) is limited to a small dataset like SCUT-FBP. The large samples from p(x) are only exploited in the VAE learning part.\", \"the_experimental_evaluation_is_weak\": \"evaluated on only one dataset, compared with just standard VAE and StarGAN which are not aimed for the particular problem setup the authors are considering.\\n\\nAt least, they may be able to compare it with a baseline approach, e.g., using the samples from p(x|D_i) available from the SCUT dataset (small though), one can learn encoder/decoder models for each D_i.\\n\\n2) Strengths:\\n\\nRelatively unique problem (but unusual and unintuitive setup) and a reasonable approach.\\n\\n3) Weak points:\\n\\n-The writing is sloppy. It doesn't read very well, and difficult to follow. Contains many typos.\\n\\n-Weak in experimental evaluation and comparison with other (baseline) approaches.\\n\\n-There appears to exist identity change in many of face image preference examples. This is unexpected. I would be more inclined to believe that personal preferences are about appearance (style) features rather than identify. 
Yet most examples in Fig. 6 indicate the opposite.\\n\\n- Writing would benefit from laying out the intuition behind both the model and the experimental results.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review\", \"review\": \"In this paper, the authors propose a variational domain adaptation framework for learning multiple distributions through variational inference. The proposed framework assumes a prior, and models each domain as a posterior. A multi-domain variational auto-encoder is then proposed to implement the concept of multi-domain semi-supervision. Experimental studies are done to show the effectiveness of the proposed framework.\\n\\nThis paper does not deal with the conventional domain adaptation problem as many existing domain adaptation works do. It focuses on the adaptation task of data generation. Here are some comments:\\n(1)\\tIt would be better to clarify the adaptation task by giving a concrete real-world example in the introduction. Specifically, you may want to specify what the source and target tasks are, and what the assumption you have made on the source and target tasks is.\\n(2)\\tIn the abstract and introduction, you state that a source domain is regarded as a prior, and the target domain is regarded as a posterior. From the Method section, I am not sure whether this is a valid statement. In my understanding, equation (1) is the KL summation of all the domains. The following derivation assumes that the data of all the domains are drawn from a distribution p(x) (which is the prior), and the data of each domain has a specific distribution p^(i)(x) (which is the posterior). Do you assume that all the domains from D_i to D_n are target domains? Then, what are the source domains?\\n(3)\\tFrom eq.(2) to eq.(3), why is p(D_i) = \\\\lambda_i assumed? Is p(D_i) related to the number of instances in D_i?\\n(4)\\tIn the prior part of eq.(3), it should have a p(D_i|x) before log p_\\\\theta(x), right? Where is f(\\\\hat_{D}|x), in the first line of page 4, used? What are the optimizers: g and g_e?\\n(5)\\tRegarding the experimental studies, what do you want to conclude from the visualization of the domain embeddings? It would be better to give more discussion, analyses, or observations for the visualization. For the comparison result with StarGAN, could you elaborate on the experimental settings for each method? Could you give more explanation on why MD-VAE outperforms StarGAN? Furthermore, are there any other state-of-the-art baselines that can be compared? \\n\\nOverall, I think this is an interesting paper. However, there are some unclear parts that need to be further clarified. The experimental studies are a little weak in the sense that (1) they need more discussion and analyses of the results; and (2) more baselines need to be compared.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
HygS7n0cFQ
Fast Exploration with Simplified Models and Approximately Optimistic Planning in Model Based Reinforcement Learning
[ "Ramtin Keramati", "Jay Whang", "Patrick Cho", "Emma Brunskill" ]
Humans learn to play video games significantly faster than the state-of-the-art reinforcement learning (RL) algorithms. People seem to build simple models that are easy to learn to support planning and strategic exploration. Inspired by this, we investigate two issues in leveraging model-based RL for sample efficiency. First we investigate how to perform strategic exploration when exact planning is not feasible and empirically show that optimistic Monte Carlo Tree Search outperforms posterior sampling methods. Second we show how to learn simple deterministic models to support fast learning using object representation. We illustrate the benefit of these ideas by introducing a novel algorithm, Strategic Object Oriented Reinforcement Learning (SOORL), that outperforms state-of-the-art algorithms in the game of Pitfall! in less than 50 episodes.
[ "Reinforcement Learning", "Strategic Exploration", "Model Based Reinforcement Learning" ]
https://openreview.net/pdf?id=HygS7n0cFQ
https://openreview.net/forum?id=HygS7n0cFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Ske5y_n-eE", "Syeo0AeX6Q", "H1liHsyThQ" ], "note_type": [ "meta_review", "official_review", "official_review" ], "note_created": [ 1544828897787, 1541766866696, 1541368643163 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1357/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1357/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1357/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": [\"Pros:\", \"rather novel approach to using optimistic MCTS for exploration with deterministic models\", \"positive rewards on Pitfall\"], \"cons\": [\"lost of domain-specific knowledge\", \"deteministic models\", \"lacking clarity\", \"lacking ablations\", \"no rebuttal\", \"I agree with both reviewers that the paper is not good enough to be accepted.\"], \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}", "{\"title\": \"Good sample-efficient performance on Pitfall using planning and imperfect models, but with limited impact due to simplifications that are hard to remove/circumvent.\", \"review\": \"-- Summary --\\n\\nThe paper proposes to learn (transition) models (for MDPs) in terms of objects and their interactions. These models are effectively deterministic and are compatible with algorithms for planning with count-based exploration. The paper demonstrates the performance of one such planning method in toy tasks and in Pitfall, as well as a comparison with other planning methods in the toy tasks. The proposed model-based method, called SOORL, yields agents that perform better on Pitfall with a small amount of data.\\n\\n-- Assessment --\\n\\nAs a positive, the results of the paper are favorable compared to previous work, with good sample efficiency, and they demonstrate the viability of the proposed approach. The most negative point is that SOORL relies on limiting domain-specific biases that are hard to remove or circumvent.\\n\\n-- Clarity --\\n\\nThe paper is somewhat clear. There are many typos and mistakes in writing, and at parts (for example, the second paragraph of Section 4.2) the explanations are not clear.\\n\\n-- Originality --\\n\\nI believe the work is original. The paper explores a natural idea and the claims/results are not surprising, but as far as I am aware it has not been tried before.\\n\\n-- Support --\\n\\nThe paper provides support for some of the claims made. The comparison to related work contains unsupported claims (\\\"we studied how imperfect planning can affect exploration\\\") and could be more upfront about the weaknesses of the proposed method. The claims in the introduction are sufficiently supported.\\n\\n-- Significance --\\n\\nIt would be hard to scale SOORL to other tasks, so it is unlikely to be adopted where end-to-end learning is wanted. Therefore I believe the impact of the paper to be limited.\\n\\nThere is also the question of whether the paper will attract interest and people will work on addressing the limitations of SOORL. I would like to hear more from the authors on this point.\\n\\n-- For the rebuttal --\\n\\nMy greatest doubt is whether the paper will attract enough interest if published, and it would be helpful to hear from the authors on why they think future work will build on the paper. Why is the proposed approach a step in the right direction?\\n\\n-- Comments --\", \"sample_efficiency\": \"The paper should be more clear about this point. 
It seems that 50 episodes were used for getting the positive reward in Pitfall, which is great.\", \"object_detection\": \"I am happy with the motivation about how we can remove the hand-made object detection. It is important the other strong assumptions (object interaction matrix, for example) can be removed as well. My opinion on simplifications is this: They are ok if they are being used to make experiments viable and they can be removed when scaling up; but they are not ok if there is no clear way to remove them.\", \"known_interaction_matrix\": \"It may be possible to remove this requirement using the tools in [1]\", \"deterministic_model\": \"The use of no-ops to make the model deterministic seems right if the ultimate goal is to make the model deterministic, but it seems unsuited if the model is to be used for control. Maybe the model needs to be temporally extended as I thought the paper was proposing in Section 4.2 but Section 4.3 suggests that this temporal extension was not a good idea. Is my understanding correct?\", \"exploration\": \"I was a bit confused about how the text discusses exploration. UCT uses OFU, but the text suggests that it does not. What are the components for exploration? Both a bonus on unseen transitions and the confidence interval bonus? Also, the paper would have to provide support for the claim that \\\"with limited number of rollouts, the agent might not observe the optimistic part of the model, in contrast to optimistic MCTS where optimism is build into every node of the tree\\\". However, it is fair to say that in the to domains MCTS seemed has performed better, and for that reason it has been chosen instead of Thompson Sampling for the later experiments.\", \"writing\": [\"The paper has a number of typos and mistakes that need to be fixed. To point out a few:\", \"I would suggest more careful use of \\\"much\\\" and \\\"very\\\"\", \"For citations, \\\"Diuk et al. (2008) also proposed...\\\" and \\\"(UCT, Kocsis & Szepesvari, 2006)\\\"\"], \"claims\": \"I think the claims made in the introduction could be stated more clearly in the conclusion. (Intro) \\\"We show how to do approximate planning\\\" --> (Conclusion) \\\"Our model learning produces effectively deterministic models that can then be used by usual planning algorithms\\\".\\n\\n-- References --\\n\\n[1] Santoro et al., 2017. \\\"A simple neural network module for relational reasoning\\\"\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A heavily engineered approach which achieves good performance in limited settings\", \"review\": \"This paper proposes a model-based object-oriented algorithm, SOORL.\\nIt assumes access to an object detector which returns a list of objects with their attributes, an interaction function which detects interactions between objects, and a set of high-level macro actions. Using a simplified state representation obtained through the object detector, it performs optimistic MCTS while simultaneously learning transition and reward models. The method is evaluated on two toy domains, PongPrime and miniPitfall, as well as the Atari game Pitfall. It achieves positive rewards on Pitfall, which previous methods have not been able to do. 
\\n\\nDespite good experimental results on a notoriously hard Atari game, I believe this work has limited significance due to the high amount of prior knowledge/engineering it requires (the authors note that this is why they only evaluate on one Atari game). I think this would make a good workshop paper, but it's not clear that the contributions are fundamental or generally applicable to other domains. Also, the paper is difficult to follow (see below).\", \"pros\": [\"good performance on a difficult Atari game requiring exploration\", \"sample efficient method\"], \"cons\": [\"paper is hard to follow\", \"approach is evaluated on few environments\", \"heavily engineered approach\", \"unclear whether gains are due to algorithm or prior knowledge\"], \"specific_comments\": \"- Section 3 is hard to follow. The authors say that they are proposing a new optimistic MCTS algorithm to support deep exploration guided by models, but this algorithm is not described or written down explicitly anywhere. Is this the same as Algorithm 3 from Section 5? They say that at each step and optimistic reward bonus is given, but it's unclear which bonus this is (they mention several possibilities) or how it relates to standard MCTS.\\nIn Section 3.1, it is unclear what the representation of the environment is. I'm guessing it is not pixels, but it is discrete states? A set of features? \\nThe authors say \\\"we provided the right model class for both experiments\\\" - what is this model class? \\n\\n- Concerning the general organization of the paper, it would be clearer to first present the algorithm (i.e. Section 5), go over the different components (model learning, learning macro actions, and planning), and then group all the experiments together in the same section. \\nThe first set of experiments in Sections 3.1 and 3.2 can be presented within the experiments section as ablations. \\n\\n- Although the performance on Pitfall is good, it's unclear how much gains are due to the algorithm and how much are due to the extra prior knowledge. It would be helpful to include comparisons with other methods which have access to the same prior knowledge, for example with DQN/A3C and pseudo-count exploration bonuses using the same feature set and macro actions as SOORL uses.\", \"minor\": [\"Page 2: \\\"Since the model...the new model estimates\\\": should this be part of the previous sentence?\", \"Page 5: \\\"There are reasonable evidence\\\" -> \\\"There is reasonable evidence\\\"\", \"Page 5: \\\". we define a set of...\\\" -> \\\". We define a set of...\\\"\", \"Page 8: \\\"any function approximation methods\\\" -> \\\"method\\\"\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HkzSQhCcK7
STCN: Stochastic Temporal Convolutional Networks
[ "Emre Aksan", "Otmar Hilliges" ]
Convolutional architectures have recently been shown to be competitive on many sequence modelling tasks when compared to the de-facto standard of recurrent neural networks (RNNs) while providing computational and modelling advantages due to inherent parallelism. However, currently, there remains a performance gap to more expressive stochastic RNN variants, especially those with several layers of dependent random variables. In this work, we propose stochastic temporal convolutional networks (STCNs), a novel architecture that combines the computational advantages of temporal convolutional networks (TCN) with the representational power and robustness of stochastic latent spaces. In particular, we propose a hierarchy of stochastic latent variables that captures temporal dependencies at different time-scales. The architecture is modular and flexible due to the decoupling of the deterministic and stochastic layers. We show that the proposed architecture achieves state of the art log-likelihoods across several tasks. Finally, the model is capable of predicting high-quality synthetic samples over a long-range temporal horizon in modelling of handwritten text.
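The deterministic backbone this abstract refers to is a stack of causal, dilated 1-D convolutions, the standard TCN building block. As a rough illustration of that block only — a minimal sketch assuming PyTorch, with class names invented here rather than taken from the paper or its code — it might look like:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1-D convolution that only sees past time steps (causal), with a
    dilation factor so deeper layers cover exponentially longer histories."""
    def __init__(self, channels, kernel_size=2, dilation=1):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                      # x: (batch, channels, time)
        x = F.pad(x, (self.left_pad, 0))       # pad on the left only -> no future leakage
        return self.conv(x)

class TCNStack(nn.Module):
    """Stack of causal blocks with dilation 1, 2, 4, ...; returns one
    deterministic state sequence per layer."""
    def __init__(self, channels, num_layers):
        super().__init__()
        self.layers = nn.ModuleList(
            [CausalConv1d(channels, dilation=2 ** l) for l in range(num_layers)]
        )

    def forward(self, x):
        states = []
        for layer in self.layers:
            x = torch.relu(layer(x))
            states.append(x)                   # layer l summarizes roughly 2**l past steps
        return states
```

Because the dilation doubles with depth, layer l summarizes on the order of 2^l past steps, which is what lets latent variables attached to higher layers capture longer time-scales.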
[ "latent variables", "variational inference", "temporal convolutional networks", "sequence modeling", "auto-regressive modeling" ]
https://openreview.net/pdf?id=HkzSQhCcK7
https://openreview.net/forum?id=HkzSQhCcK7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJx6AfAZgV", "SkgDcXX-x4", "HyxGRY_Mk4", "ByxYKYdz1E", "Skl3BDFoA7", "SJlBz456pX", "BJeA9X5TTQ", "r1lmRf5p6Q", "S1xUAx5667", "BkeTXrG6nm", "Syg22hUs2Q", "SJghxFJ5hm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544835797467, 1544790927276, 1543829962045, 1543829889361, 1543374660136, 1542460429170, 1542460309943, 1542460107454, 1542459597591, 1541379365136, 1541266611654, 1541171444052 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1356/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1356/Authors" ], [ "ICLR.cc/2019/Conference/Paper1356/Authors" ], [ "ICLR.cc/2019/Conference/Paper1356/Authors" ], [ "ICLR.cc/2019/Conference/Paper1356/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1356/Authors" ], [ "ICLR.cc/2019/Conference/Paper1356/Authors" ], [ "ICLR.cc/2019/Conference/Paper1356/Authors" ], [ "ICLR.cc/2019/Conference/Paper1356/Authors" ], [ "ICLR.cc/2019/Conference/Paper1356/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1356/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1356/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents a generative model of sequences based on the VAE framework, where the generative model is given by CNN with causal and dilated connections.\\n\\nNovelty of the method is limited; it mainly consists of bringing together the idea of causal and dilated convolutions and the VAE framework. However, knowing how well this performs is valuable the community.\\n\\nThe proposed method appears to have significant benefits, as shown in experiments. The result on MNIST is, however, so strong that it seems incorrect; more digging into this result, or sourcecode, would have been better.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-Review\"}", "{\"title\": \"Effectiveness of TCN and densely connected latent variables\", \"comment\": \"To better understand if the experimental improvements shown in our paper only stem from the hierarchical latent space or whether the synergy between the dilated CNNs and latent variable hierarchy is important, we ran additional experiments (as suggested by R1). We replaced the deterministic TCN blocks with LSTM cells and kept the latent structure intact, dubbed RNNLadder. We used TIMIT and IAM-OnDB for speech and handwriting datasets. 
The log-likelihood performance measured by ELBO is provided below:\\n\\n=======================================================\\n TIMIT IAM-OnDB \\n=======================================================\\n 25x256-LadderRNN (Normal) 28207 1305 \\n 25x256-LadderRNN-dense (Normal) 27413 1278 \\n=======================================================\\n 25x256-LadderRNN (GMM) 24839 1381 \\n 25x256-LadderRNN-dense (GMM) 26240 1377 \\n=======================================================\\n 5x512-LadderRNN (Normal) 49770 1299 \\n 5x512-LadderRNN-dense (Normal) 48612 1374 \\n=======================================================\\n 5x512-LadderRNN (GMM) 47179 1359 \\n 5x512-LadderRNN-dense (GMM) 50113 1581 \\n=======================================================\\n 25x256-STCN (Normal) 64913 1327 \\n 25x256-STCN-dense (Normal) 70294 1729 \\n=======================================================\\n 25x256-STCN (GMM) 69195 1339 \\n 25x256-STCN-dense (GMM) 71386 1796 \\n=======================================================\\n\\nModels in the table have a similar number of trainable parameters. The most direct translation of the STCN architecture into an RNN counterpart has 25 stacked LSTM cells with 256 units each. Similar to STCN, we use 5 stochastic layers. Please note that stacking this many LSTM cells is unusual and resulted in instabilities during training. The performance is similar to vanilla RNNs. Hence, we didn\\u2019t observe a pattern of improvement with densely connected latent variables. The second RNNLadder configuration uses 5 stacked LSTM cells with 512 units and a one-to-one mapping with the stochastic layers. \\n\\nThese experiments show that the modular structure of our latent variable design does allow for the usage of different building blocks. Even when attached to LSTM cells, it boosts the log-likelihood performance (see 5x512-LadderRNN), in particular when used with dense connections. However, the empirical results suggest that the densely connected latent hierarchy interacts particularly well with dilated CNNs. We believe this is due to the hierarchical nature on both sides of the architecture. On both datasets STCN models achieved the best performance and presented significant improvements with the dense connections. This supports our contribution of a latent variable hierarchy, which models different aspects of information from the input time-series.\"}", "{\"title\": \"Response to the update\", \"comment\": \"If it is advised by the reviewer, we would be glad to improve Figure 2. We aimed to visualize dense connections and highlight the difference between STCN and STCN-dense models in Figure 2 as a graphical model. Figure 5 (in the appendix) could be used as a replacement for Figure 2.\\n\\n\\u201c... decision to omit dependencies from the distributions p and q at the top of page 5...\\u201d This is because we don\\u2019t follow the standard conditioning procedure. In other words, the top-most layer is only conditioned on d_t^L while the lower layers (l+1) depend on d_t^l and z_t^l.\\n\\nWe will update Table 3 to the same convention used in other tables, i.e., NLL measured by ELBO.\"}", "{\"title\": \"Removing the MNIST experiment\", \"comment\": \"We are glad that the reviewer finds the paper much improved. Furthermore, we agree that the MNIST experiment is not important to convey the contribution of our work and hence we are happy to remove it since it does not add much in this context. 
Since discarding only the STCN-dense result would result in an incomplete experiment, we suggest removing the whole MNIST experiment - guidance welcome. We also appreciate the debug suggestions. We will follow up on these.\"}", "{\"title\": \"Updates look pretty good overall, but one huge issue remains.\", \"comment\": \"The new updates are much improved, and the direct discussion of closely related work greatly relieves my concern in this area. Thank you for the updates and improvements.\\n\\nHowever, I cannot accept the MNIST STCN-dense number without extraordinary evidence (the level of which is frankly impossible to give in a double blind conference review). It would be a serious issue for any follow-on work, and without extremely strong (to the level of replication / rerunning the code and at least some days of digging) evidence, I cannot update my score due to this point alone.\\n\\nI *strongly* urge the authors to avoid this particular number (even leaving the pure STCN without dense connections seems fine), as the rest of the results seem quite solid and the contribution of the paper is meaningful - there is no need to have this controversy when the focus of the paper is not really MNIST modeling. Other papers with similarly radical improvements (~62 to far lower) have had to be withdrawn or reworked due to methodology concerns, and I would really not like to see the same thing here, when it isn't necessary for the message or concept of the paper.\\n\\nAs far as debug strategies go, if you really, really want to be confident in the result, you can multiply every contribution in the dense connections which is connected to the original input by 0 (this may be tricky); the number should fall back to something reasonable. If it breaks entirely, or if the number stays really low, these are both serious causes for concern. Adding huge amounts of noise on these connections should also force the model to fall back to alternate connections, and shouldn't break things utterly if it is a real scenario - it should fall back to something roughly like the standard STCN.\\n\\nWithout that particular number as an issue, I would definitely raise my score - the updates address most of my other concerns.\"}", "{\"title\": \"Comments and Clarifications\", \"comment\": \"***Missing citations and novelty claim\\nWe thank the reviewer for useful pointers to additional related papers. In the revised version, we added a more complete related work section. In particular, we discuss the most closely related Stochastic Wavenet paper in detail. While SWaveNet and ours combine TCNs with stochastic variables, there are important differences in how this is achieved. Furthermore, we show that these design choices have implications in terms of modelling power and our architecture outperforms SWaveNet despite not having access to future information. Furthermore, log-likelihood results from Variational Bi-LSTM and Stochastic Wavenet are now inserted into the result table. In order to provide more evidence, we also include experiments on the Blizzard dataset. \\n\\nWe would like to emphasize that the main difference between our model and the models with autoregressive decoders (i.e., PixelVAE, Improved Variational Autoencoders for Text Modeling using Dilated Convolutions) is the sequential structure of our latent space. For every timestep x_t we have a corresponding latent variable z_t, similar to stochastic RNNs, which helps model the uncertainty in sequence data. 
We aim to combine TCNs with a powerful latent variable structure to better model sequence data rather than learning disentangled or interpretable representations. The updated results show that our design successfully preserves the modeling capacity of TCNs and the representational power of latent variables.\\n\\n*** Handwriting sample figure.\\nIn order to make a direct comparison, we include a new figure (similar to VRNN) comparing generated handwriting samples of VRNN, Stochastic Wavenet and STCN-dense. The original figure referred to by the reviewer is now in the Appendix.\\n\\n*** MNIST results\\n(Also see the answer to R1) We include a new figure comparing the performance of STCN, STCN-dense and VRNN on single test samples from seq-MNIST. We find that STCN-dense makes very precise probability predictions for the pixel values as opposed to other models; this explains the drastic increase in likelihood performance. \\nWe include a table providing KL loss per latent variable across the whole dataset. We also provide a comparison between SKIP-VAE (Avoiding Latent Variable Collapse with Generative Skip Models) and our model. It shows that STCN-dense effectively uses the latent space capacity (indicated by high KL values) and encodes the required information to reconstruct the input sequence. We also provide generated MNIST samples in order to show that the discrepancy between the prior and approximate posterior does not degrade generative modeling capacity.\\nFinally, in our MNIST experiments, we followed the Z-forcing paper\\u2019s instructions. See reply to R1 for details of the experimental protocol.\"}", "{\"title\": \"Comments and Clarifications\", \"comment\": \"*** Clarifications for figures and equations\\nWe apologize for the confusion. As the reviewer mentions, the dilated convolutional stacks d_t^l have dependencies reaching further and further back in time. \\nIn the original Fig. 2 we aimed to simplify the model details and show only a graphical model representation. The caption provides an explanation of the (updated) figure in the revised version. Moreover, the \\u201cConv\\u201d equation (Eq. 2 in the revised version) is now corrected to be a function of multiple time-steps, explicitly showing the hierarchy across time.\\n\\n***Details of the inference and generative networks\\nThe difference between the prior and the approximate posterior, i.e., the inference network, lies in their respective input time-steps. The prior at time-step t is conditioned on all the input sequence until t-1, i.e., x_{1:t-1}. The inference network, on the other hand, is conditioned on the input until step t, i.e., x_{1:t}. \\nAt sampling time, we only use the prior. In other words, the prior sample z_t (conditioned on x_{1:t-1}) is used to predict x_t. Here we follow the dynamic prior concept of Chung et al. (2015). During training of the model, the KL term in the objective encourages the prior to be predictive of the next step. \\n\\n*** f^{(l)} and f^{(o)} functions.\\nf^{(l)} stands for neural network layers consisting of 1d convolution operations with filter size 1: Conv -> ReLU -> Conv -> ReLU, which are then used to calculate the mu and sigma of a Normal distribution.\\n\\nf^{(o)} corresponds to the output layer of the model. Depending on the task we either use 1d Conv or Wavenet blocks. Network details are provided in the appendix of the revised paper.\\n\\n*** Clarification on MNIST results.\\nThis was indeed a typo. We report negative log-likelihood performance, measured by ELBO. 
We correct this in the revised version.\\nIn Fig. 4 (in the submitted version) we wanted to emphasize that STCN-dense can reconstruct the low-level details such as noisy pixels, which results in large improvement in the likelihood. We agree the STCN and VRNN provide smoothed and perceptually beautiful results. However, such enhancements lower the likelihood performance. Since the figure did not convey this clearly, we updated the figure in the revised version.\\n\\n\\n***References\\nJunyoung Chung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C Courville, and Yoshua Bengio. A recurrent latent variable model for sequential data. In Advances in neural information processing systems, pp. 2980\\u20132988, 2015.\"}", "{\"title\": \"Comments and Clarifications\", \"comment\": \"***\\\"significant challenges that the authors overcame in reaching the proposed method.\\\"\\nThe goal of our work was to design a modular extension to the vanilla TCN, while improving the modelling capacity via the introduction of hierarchical stochastic variables. In particular, we did not want to modify deterministic TCN layers (as is the case for Stochastic WaveNet, Lai et al., 2016) since this may limit scalability, flexibility and may limit the maximum receptive field size.\", \"these_goals_are_motivated_by_findings_from_the_initial_phases_of_the_project\": \"1) Initial attempts involved standard hierarchical latent variable models, none outperformed the VRNN baseline. \\n2) The precision-weighted update of approximate posterior, akin to LadderVAEs, significantly improved experimental results. \\n3) As can be seen from our empirical results, the increasing receptive field of TCNs provides different information context to different latent variables. This enables our architectures to more efficiently leverage the latent space and partially prevents latent space collapse issues highlighted in the literature (Dieng et al., 2018, Zhao et al., 2017). The introduction of skip connections from every latent variable to the output layer directly in the STCN-dense variant seems to afford the network the most flexibility in terms of modelling different datasets (see p.8 & Tbl. 3 in the revised paper).\\n\\n*** Effectiveness of TCN and densely connected latent variables\\nThanks for the interesting question. We agree that using multiple levels of the latent variables directly to make predictions is very effective. As we explain in the revised version of our submission, in STCN and STCN-dense models, the latent variables are provided with a different level of expressiveness. Hence, depending on the task and dataset, the model can focus on intermediate variables which have a different context. We think that this is an important aspect of our work, which can only be achieved by using the dilated CNNs. One can stack RNN cells similar to TCN blocks and use our densely connected latent space concept. In this scenario, the hierarchy would only be implicitly defined by the network architecture. However, since the receptive field size does not change throughout the hierarchy it is unclear whether the same effectiveness would be attained. Moreover, we note that combining our hierarchical stochastic variables with stacked LSTMs would inverse the effect on computational efficiency that we gain from the TCNs. \\n\\n***\\u201cMNIST performance\\nYes, binarization of the MNIST is fixed in advance. We followed the procedure detailed in the Z-forcing paper closely. 
Naturally, we will release code and pre-processing scripts so that the results can be verified. Here is our experimental protocol:\\n1) We used the binarized MNIST dataset of Larochelle and Murray (2011). It was downloaded from http://www.cs.toronto.edu/~larocheh/public/datasets/binarized_mnist/binarized_mnist_train.amat\\n2) We trained all models without any further preprocessing or normalization. The first term of the ELBO, i.e., the reconstruction loss, is measured via binary cross-entropy. \\nWe provide an in-depth analysis in the revised version, showing that the STCN-dense architecture makes very precise probability predictions, also for pixel values close to character discontinuities. This provides very accurate modeling of edges and in consequence, gives very good likelihood performance. See (new) Figure 4 in the revised version.\\n\\n*** Clarifications\\nWe updated and clarified the Figure in the revised version. The generative model only relies on the prior. At sampling time, samples from the prior latent variables are used both in prediction of the observation and computation of the next layer\\u2019s latent variable. Therefore the generative model takes the input sequence until t-1, i.e., x_{1:t-1} in order to predict x_t.\\n\\u201cThe term \\\"kla\\\" appears in table 1, but it seems that it is otherwise not defined. I think this is the same term and meaning that appears in Goyal et al. (2017), but it should obviously be defined here.\\u201d\\nYes. It stands for annealing of the weight of KL loss term. We now clarified the language in tables and captions. \\n\\n***References\\nLai, G., Li, B., Zheng, G., & Yang, Y. (2018). Stochastic WaveNet: A Generative Latent Variable Model for Sequential Data. arXiv preprint arXiv:1806.06116.\\nAdji B Dieng, Yoon Kim, Alexander M Rush, and David M Blei. Avoiding latent variable collapse with generative skip models. arXiv preprint arXiv:1807.04863, 2018.\\nShengjia Zhao, Jiaming Song, and Stefano Ermon. Learning hierarchical features from generative models. arXiv preprint arXiv:1702.08396, 2017.\\nLarochelle, Hugo, and Iain Murray. The neural autoregressive distribution estimator. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. 2011.\"}", "{\"title\": \"Updates in the revised paper\", \"comment\": \"We thank all reviewers for their constructive comments. Our work combines the computational advantages of temporal convolutional networks (TCN) with the representational power and robustness of stochastic latent spaces. Based on the reviewer\\u2019s feedback we have prepared an updated revision of the paper. Furthermore, we will respond to each review in a detailed manner below.\", \"the_most_important_changes_in_the_revised_version_can_be_summarized_as_follows\": [\"We cleaned up the description of the background, method and improved the figures describing our model.\", \"We include an extensive discussion of related work as suggested by R3 and include direct comparisons to the state-of-the-art, where possible.\", \"During our experiments, we found that using separate \\\\theta and \\\\phi parameters for f^{l} is much more efficient, than to share the parameters of f^{l} (i.e., layers calculating mean and sigma of Normal distributions of the latent variables) for the prior and approximate posterior as suggested by S\\u00f8nderby et al. (2016) and as was the case at submission time.\", \"With this change implemented, we re-ran experiments and updated the tables in the paper. 
On IAM-OnDB, Deepwriting, TIMIT and MNIST we now report state-of-the-art log-likelihood results (even compared to additional models listed by R3). We also evaluate our model on the Blizzard dataset, where only the Variational Bi-LSTM architecture is marginally better than STCN-dense (i.e., 17319 against 17128) but has access to future information.\", \"We include additional results on MNIST and provide insights into why STCN-dense gives a large improvement in terms of reconstruction.\", \"We updated figures and equations throughout to improve clarity of presentation.\", \"-----\", \"Casper Kaae S\\u00f8nderby, Tapani Raiko, Lars Maal\\u00f8e, S\\u00f8ren Kaae S\\u00f8nderby, and Ole Winther. Ladder variational autoencoders. In Advances in neural information processing systems, pp. 3738\\u20133746, 2016.\"]}", "{\"title\": \"Ok paper with a reasonable -- though somewhat obvious -- approach to generative modeling of sequence data\", \"review\": \"This paper presents a generative sequence model based on the dilated CNN\\npopularized in models such as WaveNet. Inference is done via a hierarchical\\nvariational approach based on the Variational Autoencoder (VAE). While the VAE\\napproach has previously been applied to sequence modeling (I believe the\\nearliest being the VRNN of Chung et al (2015)), the innovation here is the\\nintegration of a causal, dilated CNN in place of the more typical recurrent\\nneural network. \\n\\nThe potential advantages of the use of the CNN in place of\\nthe RNN are (1) faster training (through exploitation of parallel computing across\\ntime-steps), and (2) potentially (arguably) better model performance. This\\nsecond point is argued from the empirical results shown in the\\nliterature. The disadvantage of the CNN approach presented here is that\\nthese models still need to generate one sample at a time and since they are\\ntypically much deeper than the RNNs, sample generation can be quite a bit\\nslower.\\n\\nNovelty / Impact: This paper takes an existing model architecture (the\\ncausal, dilated CNN) and applies it in the context of a variational\\napproach to sequence modeling. It's not clear to me that there are any\\nsignificant challenges that the authors overcame in reaching the proposed\\nmethod. That said, it is certainly useful for the community to know how the\\nmodel performs.\", \"writing\": \"Overall the writing is fairly good though I felt that the model\\ndescription could be made more clear by some streamlining -- with a single\\npass through the generative model, inference model and learning.\", \"experiments\": \"The experiments demonstrate some evidence of the superiority\\nof this model structure over existing causal, RNN-based models. One point\\nthat can be drawn from the results is that a dense architecture that uses multiple levels of the\\nlatent variable hierarchy directly to compute the data likelihood is\\nquite effective. This observation doesn't really bear on the central message\\nof the paper regarding the use of causal, dilated CNNs. \\n\\nThe evidence lower-bound of the STCN-dense model on MNIST is so good (low)\\nthat it is rather suspicious. There are many ways to get a deceptively good\\nresult in this task, and I wonder if all due care was taken. In\\nparticular, was the binarization of the MNIST training samples fixed in\\nadvance (as is standard) or were they re-binarized throughout training?\", \"detailed_comments\": \"- The authors state \\\"In contrast to related architectures (e.g. (Gulrajani et\\nal, 2016; Sonderby et al. 
2016)), the latent variables at the upper layers\\ncapture information at long-range time scales\\\" I believe that this is\\nincorrect in that the model proposed in at least Gulrajani et al also \\n\\n- It also seems that there is an error in Figure 1 (left). I don't think\\nthere should be an arrow between z^{2}_{t,q} and z^{1}_{t,p}. The presence\\nof this link implies that the prior at time t would depend -- through\\nhigher layers -- on the observation at t. This would no longer be a prior\\nat that point. By extension you would also have a chain of dependencies\\nfrom future observations to past observations. It seems like this issue is\\nisolated to this figure as the equations and the model descriptions are\\nconsistent with an interpretation of the model without this arrow (and\\nincluding an arrow between z^{2}_{t,p} and z^{1}_{t,p}.\\n\\n- The term \\\"kla\\\" appears in table 1, but it seems that it is otherwise not\\ndefined. I think this is the same term and meaning that appears in Goyal et\\nal. (2017), but it should obviously be defined here.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting new architecture, but some clarity issues\", \"review\": \"This paper introduces a new stochastic neural network architecture for sequence modeling. The model as depicted in figure 2 has a ladder-like sequence of deterministic convolutions bottom-up and stochastic Gaussian units top-down.\\n\\nI'm afraid I have a handful of questions about aspects of the architecture that I found confusing. I have a difficult time relating my understanding of the architecture described in figure 2 with the architecture shown in figure 1 and the description of the wavenet building blocks. My understanding of wavenet matches what is shown in the left of figure 1: the convolution layers d_t^l depend on the convolutional layers lower-down in the model, thus with each unit d^l having dependence which reaches further and further back in time as l increases. I don't understand how to reconcile this with the computation graph in figure 2, which proposes a model which is Markov! In figure 2, each d_{t-1}^l depends only on on the other d_{t-1} units and the value of x_{t-1}, which then (in the left diagram of figure 2) generate the following x_t, via the z_t^l. Where did the dilated convolutions go\\u2026? I thought at first this was just a simplification for the figure, but then in equation (4), there is d_t^l = Conv^{(l)}(d_t^{l-1}). Shouldn't this also depend on d_{t-1}^{l-1}\\u2026? or, where does the temporal information otherwise enter at all? The only indication I could find is in equation (13), which has a hidden unit defined as d_t^1 = Conv^{(1)}(x_{1:t}).\\n\\nAdding to my confusion, perhaps, is the way that the \\\"inference network\\\" and \\\"prior\\\" are described as separate models, but sharing parameters. It seems that, aside from the initial timesteps, there doesn't need to be any particular prior or inference network at all: there is simply a transition model from x_{t-1} to x_{t}, which would correspond to the Markov operator shown in the left and middle sections of figure 2. Why would you ever need the right third of figure 2? This is a model that estimates z_t given x_t. 
But, aside from at time 0, we already have a value x_{t-1}, and a model which we can use to estimate z_t given x_{t-1}\\u2026!\\n\\nWhat are the top-to-bottom functions f^{(l)} and f^{(o)}? Are these MLPs?\\n\\nI also was confused in the experiments by the >= and <= on the reported numbers. For example, in table 2, the text describes the values displayed as log-likelihoods, in which case the ELBO represents a lower bound. However, in that case, why is the bolded value the *lowest* log-likelihood? That would be the worst model, not the best \\u2014 does table 2 actually show negative log-likelihoods, then? In which case, though, the numbers from the ELBO should be upper bounds, and the >= should be <=. Looking at figure 4, it seems like visually the STCN and VRNN have very good reconstructions, but the STCN-dense has visual artifacts; this would correspond with the numbers in table 2 being log-likelihoods (not negative), in which case I am confused only by the choice of which model to bold.\", \"update\": \"Thanks for the clarifications and edits. FWIW I still find the depiction of the architecture in Figure 2 to be incredibly misleading, as well as the decision to omit dependencies from the distributions p and q at the top of page 5, as well as the use in table 3 of \\\"ELBO\\\" to refer to a *negative* log likelihood.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Clearly written, but lacking comparisons\", \"review\": \"The focus on novelty (mentioned in both the abstract, and conclusion as a direct claim) in the presentation hurts the paper overall. Without stronger comparison to other closely related work, and lack of citation to several closely related models, the claim of novelty isn't defined well enough to be useful. Describing what parts of this model are novel compared to e.g. Stochastic WaveNet or the conditional dilated convolutional decoder of \\\"Improved VAE for Text ...\\\" (linked below, among many others) would help strengthen the novelty claim, if the claim of novelty is needed or useful at all. Stochastic WaveNet in particular seems very closely related to this work, as does PixelVAE. In addition, use of autoregressive models conditioned on (non-variational, in some sense) latents have been shown in both VQ-VAE and ADA among others, so a discussion would help clarify the novelty claim.\\n\\nEmpirical results are strong, though (related to the novelty issue) there should be greater comparison both quantitatively and qualitatively to further work. In particular, many of the papers linked below show better empirical results on the same datasets. Though the results are not always directly comparable, a discussion of *why* would be useful - similar to how Z-forcing was included.\\n\\nIn the qualitative analysis, it would be good to see a more zoomed out view of the text (as in VRNN), since one of the implicit claims of the improvement from dense STCN is improved global coherence by direct connection to the \\\"global latents\\\". As it stands now the text samples are a bit too local to really tell. In addition, the VRNN samples look quite a bit different than what the authors present in their work - what implementation was used for the VRNN samples (they don't appear to be clips from the original paper)? 
\\n\\nOn the MNIST setting, there are many missing numbers in the table from related references (some included below), and the >= 60.25 number seems so surprising as to be (possibly) incorrect - more in-depth analysis of this particular result is needed. Overall the MNIST result needs more description and relation to other work, for both sequential and non-sequential models.\\n\\nThe writing is well-done overall, and the presented method and diagrams are clear. My primary concern is in relation to related work, clarification of the novelty claim, and more comparison to existing methods in the results tables.\", \"variational_bi_lstm_https\": \"//arxiv.org/abs/1711.05717\", \"stochastic_wavenet_https\": \"//arxiv.org/abs/1806.06116\", \"pixelvae_https\": \"//arxiv.org/abs/1611.05013\", \"filtering_variational_objectives_https\": \"//github.com/tensorflow/models/tree/master/research/fivo\", \"improved_variational_autoencoders_for_text_modeling_using_dilated_convolutions_https\": \"//arxiv.org/abs/1702.08139\", \"temporal_sigmoid_belief_networks_for_sequential_modeling_http\": \"//papers.nips.cc/paper/5655-deep-temporal-sigmoid-belief-networks-for-sequence-modeling\\n\\nNeural Discrete Representation Learning (VQ-VAE) https://arxiv.org/abs/1711.00937\", \"the_challenge_of_realistic_music_generation\": \"modelling raw audio at scale (ADA) https://arxiv.org/abs/1806.10474\", \"learning_hierarchical_features_from_generative_models_https\": \"//arxiv.org/abs/1702.08396\", \"avoiding_latent_variable_collapse_with_generative_skip_models_https\": \"//arxiv.org/abs/1807.04863\", \"edit\": \"Updated score after second revisions and author responses\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
ryzHXnR5Y7
Select Via Proxy: Efficient Data Selection For Training Deep Networks
[ "Cody Coleman", "Stephen Mussmann", "Baharan Mirzasoleiman", "Peter Bailis", "Percy Liang", "Jure Leskovec", "Matei Zaharia" ]
At internet scale, applications collect a tremendous amount of data by logging user events, analyzing text, and collecting images. This data powers a variety of machine learning models for tasks such as image classification, language modeling, content recommendation, and advertising. However, training large models over all available data can be computationally expensive, creating a bottleneck in the development of new machine learning models. In this work, we develop a novel approach to efficiently select a subset of training data to achieve faster training with no loss in model predictive performance. In our approach, we first train a small proxy model quickly, which we then use to estimate the utility of individual training data points, and then select the most informative ones for training the large target model. Extensive experiments show that our approach leads to a 1.6x and 1.8x speed-up on CIFAR10 and SVHN by selecting 60% and 50% subsets of the data, while maintaining the predictive performance of the model trained on the entire dataset.
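The selection loop summarized above — quickly train a small proxy, score every training point, keep the most informative ones, then train the large target model on that subset — can be outlined as follows. This is an illustrative sketch, not the authors' implementation: `train` and `predict_proba` are hypothetical helper callables, and entropy is used here as one possible informativeness score.

```python
import numpy as np

def select_via_proxy(proxy_model, target_model, X, y, keep_fraction=0.6,
                     train=None, predict_proba=None):
    """Illustrative outline of the selection loop: score every example with a
    cheap proxy and give only the most uncertain ones to the expensive target.
    `train` and `predict_proba` are placeholder callables supplied by the user."""
    train(proxy_model, X, y)                         # cheap: small model and/or few epochs
    probs = predict_proba(proxy_model, X)            # (n_examples, n_classes) softmax outputs
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    keep = np.argsort(-entropy)[: int(keep_fraction * len(X))]
    train(target_model, X[keep], y[keep])            # target model never sees the easy points
    return keep
```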
[ "data selection", "deep learning", "uncertainty sampling" ]
https://openreview.net/pdf?id=ryzHXnR5Y7
https://openreview.net/forum?id=ryzHXnR5Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SkxZRW51x4", "HygQI0gny4", "r1gDA2CN14", "rkxIQEdqAm", "HylwSMOcCX", "Sylti-Oc0Q", "SkxFGD7c2X", "HkxQhmG5nQ", "rygxBMfw2m", "HylHT7Y0cX", "r1liOizs5m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1544688072787, 1544453707340, 1543986383032, 1543304221721, 1543303742999, 1543303585400, 1541187345062, 1541182379505, 1540985400062, 1539376061377, 1539152755179 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1355/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1355/Authors" ], [ "ICLR.cc/2019/Conference/Paper1355/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1355/Authors" ], [ "ICLR.cc/2019/Conference/Paper1355/Authors" ], [ "ICLR.cc/2019/Conference/Paper1355/Authors" ], [ "ICLR.cc/2019/Conference/Paper1355/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1355/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1355/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1355/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"There reviewers unanimously recommend rejecting this paper and, although I believe the submission is close to something that should be accepted, I concur with their recommendation.\\n\\nThis paper should be improved and published elsewhere, but the improvements needed are too extensive to justify accepting it in this conference. I believe the authors are studying a very promising algorithm and it is irrelevant that the algorithm is a relatively obvious one. Ideally, the contribution would be a clear experimental investigation of the utility of this algorithm in realistic conditions. Unfortunately, the existing experiments are not quite there.\\n\\nI agree with reviewer 2 that the method is not particularly novel. However, I disagree that this is a problem, so it was not a factor in my decision. Novelty can be overrated and it would be fine if the experiments were sufficiently insightful and comprehensive.\\n\\nI believe experiments that train for a single epoch on the reduced dataset are absolutely essential in order to understand the potential usefulness of the algorithm. Although it would of course be better, I do not think it is necessary to find datasets traditionally trained in a single pass. You can do single epoch training on other datasets even though it will likely degrade the final validation error reached. This is the type of small scale experiment the paper should include, additional apples-to-apples baselines just need to be added. Also, there are many large language modeling datasets where it is reasonable to make only a single pass over the training set. The goal should be to simulate, as closely as is possible, the sort of conditions that would actually justify using the algorithm in practice.\\n\\nAnother issue with the experimental protocol is that, when claiming a potential speedup, one must tune the baseline to get a particular result in the fewest steps. Most baselines get tuned to produce the best final validation error given a fixed number of steps. But when studying training speed, we should fix a competitive goal error rate and then tune for speed. 
Careful attention to these experimental protocol issues would be important.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting task, but clear consensus among reviewers to reject this paper based on limited experiments\"}", "{\"title\": \"Popular benchmarks make multiple passes for higher accuracy and SVP with shorter schedules\", \"comment\": \"Great question! Our method is generic and could be applied to training procedures that only make a single pass over the data. Unfortunately, we did not have access to a large task that is solved in a single pass over the dataset. To the best of our knowledge, all of the popular benchmarked datasets make multiple passes (epochs) over the dataset.\\n\\nHowever, we did consider a similar issue about whether more epochs over a small dataset might be the same as fewer epochs over a large dataset. Based on experimental results, condensing the learning rate schedule (i.e., fewer epochs) and using the full dataset hurts accuracy and can't achieve the same speed-up on CIFAR10. For SVHN, we can substantially condense the schedule from Huang et al. (2016), but SVP stacks with the condensed schedule, giving an additional speed-up (i.e. ~1.8x). We can include these additional results in the appendix of the final draft of the paper.\\n\\nAlso, there is initial theoretical work that shows that one pass over the data isn't enough for hard problems:\", \"https\": \"//papers.nips.cc/paper/8034-statistical-optimality-of-stochastic-gradient-descent-on-hard-learning-problems-through-multiple-passes\"}", "{\"title\": \"why ever train for more than one epoch on the reduced dataset?\", \"comment\": \"I don't see why it would be useful to make multiple passes over the data used for training the target model (the selected subset of the training set). If you want to get a speedup in training, it should be strictly better to use fresh data and never look at any data point more than once. Wouldn't it be better to conduct experiments with one pass over the subsampled dataset to see how useful the algorithm is in this regime?\"}", "{\"title\": \"Experiments in progress and clarifications about figures\", \"comment\": \"Thank you for your thoughtful review and suggestions. Here are our responses:\\n\\n# large-scale experiments\\nWe are currently running experiments on ImageNet, but the results will not be ready before the response deadline.\\n\\n# Comparison against baselines in section 2\\nWe agree that more comparison against existing methods such as importance sampling would be valuable. we aimed to compare against \\u201cNot All Samples Are Created Equal: Deep Learning with Importance Sampling\\u201d from Katharopoulos & Fleuret (2018) as it represents the most recent published work in the area. Unfortunately, we were unable to complete the experiments before the response deadline.\\n\\n# Learning to teach (L2T)\\nWe agree learning to teach is relevant and included it in the related work section.\\n\\n# Final accuracy of different models in a table\\nWe believe Table 1 should address this concern, and we changed the structure to make it more clear. The most important data from Figure 4 and 5 is captured in the table. We could add the additional metrics from Figure 5, but the main point of that figure is to show that all of the metrics perform about the same, which would just add redundant rows to the table. \\n\\n# Smaller number of epochs\\nGreat question. 
Yes, \\\"epoch\\\" in Table 1 means how many epochs the proxy model is trained. We have preliminary results that suggest the diversity of the subset is an important factor in maintaining quality. Looking at the CDF of entropy on CIFAR10, for example, shows that only around 20% of points have relatively high entropy and that entropy quickly decays after the first 20%. However, the target model is only able to match the same level of accuracy with a larger subset as shown in Table 1. This suggests that the subset needs to be sufficiently representative in addition to containing the most difficult. We hypothesize that the higher error of smaller architectures and partial training might result in increased randomness, which could improve the representativeness of the resulting subsets.\"}", "{\"title\": \"Clarifications about novelty and impact\", \"comment\": \"Thank you for your thoughtful review and suggestions. Here are our responses:\\n\\n# SVP and importance sampling\\nWe agree that more comparison against existing methods such as importance sampling would be valuable. We aimed to compare against \\u201cNot All Samples Are Created Equal: Deep Learning with Importance Sampling\\u201d from Katharopoulos & Fleuret (2018) as it represents the most recent published work in the area. Unfortunately, we were unable to complete the experiments before the response deadline.\\n\\n# Active learning, originality, and significance \\nWe agree that we leverage uncertainty sampling (Lewis & Gale, 1994) from active learning to select the points with highest informativeness. However, in active learning a model is generally trained to select the next point (Settles, 2012) or batch (Sener & Savarese, 2018), which is efficient in terms of labels, but often computationally expensive. While this can be effective when deciding which data to acquire labels for from an expensive labeler (e.g. a human), the computational cost is too high to accelerate training over an existing large labeled dataset. Using a proxy reduces the cost of selection by up to a 100x for Amazon Review Polarity or 30x for CIFAR10. This is such a substantial improvement that uncertainty sampling can now be extended to reduce computational costs of training in addition to labeling costs. To clarify this point, we added more detail in the introduction.\\n\\n# Figure 3: model correlation\\nWe increased the size of the labels and improved the figure caption. The key takeaway is that ensembling multiple small models together through rank combination improves our approximation of the large model\\u2019s uncertainty as shown by the increase in correlation ranking.\\n\\n# Robustness to the choice of proxy model architecture\\nBy \\u201crobust to the choice of proxy model architecture\\u201d we mean that while we discussed different steps in Section 3.1 to create the proxy model, and explored various uncertainty measures in Section 3.2 to select data points via proxy, our approach allows for a wide range of configurations. The proxy is important but it is easy to find one that is good enough in practice and doesn\\u2019t require extensive hyperparameter tuning.\"}", "{\"title\": \"Experiments in progress\", \"comment\": \"Thank you for your thoughtful review and suggestions. Here are our responses:\\n\\n1) We agree that it would be valuable to provide more comparison against related work. 
The work in the \\u201cOptimization and Importance Sampling\\u201d section shares the closest relation to our approach by improving convergence and in some cases training time. In particular, we aimed to compare against \\u201cNot All Samples Are Created Equal: Deep Learning with Importance Sampling\\u201d from Katharopoulos & Fleuret (2018) as it represents the most recent published work in the area. Unfortunately, we were unable to complete the experiments before the response deadline.\\n\\n2) We are currently running experiments on ImageNet, but the results will not be ready before the response deadline. However, for both SVHN and CIFAR10, the larger model improves accuracy significantly over the proxy. For CIFAR10, there is a 2.5% difference in absolute error (47% relative error) between ResNet20 and ResNet164 . With partial training, the difference in absolute error between the ResNet20 and ResNet164 is ~12%. Interestingly, despite the limited capacity of the proxy models, our approach still selects a subset that maintains accuracy and performs much better than random. In fact, for CIFAR10, using ResNet20 as the select proxy performs better than using ResNet164 (the target model) for selection as shown in Table 1. \\n\\n3) This is a great observation. We have preliminary results that suggest the diversity of the subset is an important factor in maintaining quality. Looking at the CDF of entropy on CIFAR10, for example, shows that only around 20% of points have relatively high entropy and that entropy quickly decays after the first 20%. However, the target model is only able to match the same level of accuracy with a larger subset as shown in Table 1. This suggests that the subset needs to be sufficiently representative in addition to containing the most difficult. Fortunately, we hypothesize that the proxy model can also give us an efficient way to calculate the representativeness of each example as well as uncertainty, allowing us to algorithmically construct such as subset. We are already looking into this but couldn\\u2019t complete the necessary experiments while also attempting to address your feedback above. However, we can include CDFs of entropy and some preliminary discussion of this point.\"}", "{\"title\": \"Review\", \"review\": \"General:\\nThe paper proposed an algorithm named Select Via Proxy(SVP), which can be used for data sampling. The idea is simple and straightforward: 1) use a proxy model to get decision boundary 2) train the large target model on the data points close to the decision boundary.\", \"strength\": \"1. Roughly this is a well-written paper. The main idea is quite clear to me.\\n2. Empirical validation of the experiments looks good. The results show that SVP help reduce the training time with ResNet. The author(s) also showed the influence of different quantifying uncertainty methods.\", \"possible_improvements\": \"1. In Related Work, several previous works were mentioned. Although the author(s) claimed that SVP can be combined with them, it's better to show the performance of SVP compared with them. This would show the significance of the work.\\n2. In the experiments, I was hoping to see how well SVP works on ImageNet. The problem is that: For ResNet152 and ResNet164, they are relatively too deep on such small data sets. Since the dimension of the data points(images) is not high, SVP can easily catch a reasonable decision boundary with a smaller model. I am almost sure ResNet20 is good enough to do this. 
I am more concerned about the situation where the capacity of the model is challenged by the size of the dataset. E.g., the data sets for autonomous driving are usually extremely large, and even very deep models cannot be fully trained on them. \\n3. The data points close to the decision boundary can be considered as tough data points, whose features might be hard for the model to capture. If the model is trained only on these data points, it may just memorize the tough data points and not learn the other data points from the data set. One solution is that, while training on tough data points, the model should also be trained on a small portion of well-learned data points. I don't think training only on the points close to the decision boundary is enough and was more expecting to see some discussion about this in the paper.\", \"conclusion\": \"\", \"my_two_biggest_concerns_are\": \"1) The algorithm is not tested on large data sets 2) The algorithm is not tested with models of limited capacity. As a conclusion, I tend to vote for rejection.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting approach to data selection, but needs comparative experiments\", \"review\": \"# Summary\\nThe paper presents a method for identifying and selecting the most informative subset of the training dataset in order to reduce training time while maintaining test accuracy. The method consists of training a proxy model that is smaller and has been trained for fewer epochs, and which can optionally be ensembled. Experiments show promising results, indicating that some datasets can be reduced to half the size without impacting model performance.\\n\\n# Quality\\nThe paper appears sound and of good quality. Background literature is cited and the proposed method is discussed in sufficient detail.\\n\\nI would, however, like to see some additional comparative experiments. All experiments are constructed to show that the method can indeed achieve accuracy comparable to the full model but with a smaller training set. I would like to see how it compares to existing strategies -- is there any reason to pick this method over existing ones?\\nSince the last sentence in section 2 states that the proposed method is orthogonal to previous subsampling techniques, and therefore can be combined with any of them, it would be interesting to see how SVP compares to these and whether a combination of, say, SVP and importance sampling will in fact achieve better performance than the importance sampling on its own.\\nAdditionally, given the model's high resemblance to active learning, it would be interesting to see it compared to some prominent active learning methods.\\n\\n# Clarity\\nThe paper reads quite well. I particularly like the paragraph headlines, which make it easy to get an overview of the paper.\\n\\nThe figures are generally nice and readable, except for figure 3, which I don't understand. Maybe I am missing it, but I can't find an explanation for what the rows and columns indicate, and the labels themselves should also be increased in size.\\n\\n# Originality\\nI do not find the paper particularly novel. To me, the proposed method seems to be a variant of active learning, not orthogonal to this as it is claimed in section 2. 
The choice of surrogate model and uncertainty metric might be new, but the method itself boils down to uncertainty sampling, a well-known strategy in active learning.\\nHowever, I am happy to change my mind if the authors can explain to me exactly how their method differs from active learning.\\n\\n# Significance\\nWhile techniques for speeding up training without sacrificing performance are, of course, always interesting, I find the proposed method to be rather incremental and not significant enough for ICLR. It would be better suited as a workshop paper.\\n\\n# Other notes\\nIn the last paragraph of section 1, you write that \\\"Our proposed framework is robust to the choice of proxy model architecture.\\\" I am not sure what you mean by this. Do you mean that one can choose any model as the proxy (which is clearly correct) or do you mean that the method is \\\"proxy agnostic\\\" in the sense that any proxy model will work better than no proxy? If the latter is the case, I would like some arguments for this. Also, if the method is indeed proxy agnostic, it should be possible to remove the proxy completely and select the data in some other way.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"meaningful work, but lacks more supporting evidences\", \"review\": \"This paper studies a very simple and intuitive method to boost the training speed of deep neural networks. The authors first train some light weighted proxy models, using these models to rank the data according to its uncertainty, and then pick the most uncertain subset to train the final model. Experiments on CIFAR10/SVHN/Amazon Review Polarity demonstrates the effectiveness.\\n\\nIn general, I think the authors did a decent job in showing that such a simple idea could surprisingly work well to boost NN training. I believe it will inspire future works on speeding up NN training. However, to form a solid ICLR publication, plenty of future works need to be done.\\n\\n1)\\tI will not be fully convinced if an idea aiming to speed up, is only verified on small scale dataset (e.g., CIFAR10). It will be much better if there are large scale experiments conducted such as on ImageNet and WMT neural machine translation. \\n\\n2)\\tPlease well position some related works. First, it would be more interesting and informative if some baselines in section 2 (especially those in \\u201cOptimization and Importance Sampling\\u2019), are compared with. Second, there are important related works omitted such as L2T [1], which also talks/shows the possibly of using partial training data to achieve speed up.\\n\\n3)\\tSome writing issues: it would be better to *clearly* demonstrate the final accuracy of different models (i.e. ResNet 164 trained on whole data and selected subset), such as putting them into a table, but not merely showing them vaguely in the curves and text. I\\u2019m also note sure about the meaning of `epoch\\u2019 in Table 1: does it mean how many epochs the proxy model is trained? If so, I can hardly get the intuition of why smaller epochs works better. I noted a conjecture raised by the authors in the last sentence of paragraph \\u201ccomparing different proxies\\u201d. However, I cannot catch the exact meaning. \\n\\n[1] Fan, Y., Tian, F., Qin, T., Li, X. Y., & Liu, T. Y. Learning to Teach. 
ICLR 2018\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"the surprising result is that simple metrics from small proxy models are so effective.\", \"comment\": \"Thank you for your comment. Our key observation is that the behavior of a smaller model is a useful and efficient proxy for training data selection. After evaluating entropy, loss, confidence, and margin as shown in Figure 5, we found limited differences in performance. Of course, the use of uncertainty as a quality metric is just one option among many. We are already looking at additional formulations that use intermediate activations and combine uncertainty sampling with other metrics that focus on diversity. Our preliminary results show improved performance for small fractions of the data. We plan to include these results in the final version of the paper. However, for us, the surprising result is that simple metrics from small proxy models are so effective.\"}", "{\"comment\": \"You do provide an efficient and simply idea (finding the subset of the original training data to achieve comparable performance with less complexity) for many time-consuming training tasks in CV and NLP fields .\\n\\nHowever, your formulas in quantifying the uncertainty merely take the classification probability into consideration, which can not be an general data selection model , have you thought that before? anyway no offense , the novelty of this paper is not enough~ hope you do not mind my one-sided comment : )\", \"title\": \"question about your uncertainty quantifying approach\"}" ] }
Byxr73R5FQ
Successor Options : An Option Discovery Algorithm for Reinforcement Learning
[ "Manan Tomar*", "Rahul Ramesh*", "Balaraman Ravindran" ]
Hierarchical Reinforcement Learning is a popular method to exploit temporal abstractions in order to tackle the curse of dimensionality. The options framework is one such hierarchical framework that models the notion of skills or options. However, learning a collection of task-agnostic transferable skills is a challenging task. Option discovery typically entails using heuristics, the majority of which revolve around discovering bottleneck states. In this work, we adopt a method complementary to the idea of discovering bottlenecks. Instead, we attempt to discover "landmark" sub-goals, which are prototypical states of well-connected regions. These sub-goals are points from which a densely connected set of states is easily accessible. We propose a new model called Successor Options that leverages Successor Representations to achieve this. We also design a novel pseudo-reward for learning the intra-option policies. Additionally, we describe an Incremental Successor Options model that iteratively builds options and explores in environments where exploration through primitive actions is inadequate to form the Successor Representations. Finally, we demonstrate the efficacy of our approach on a collection of grid worlds and on complex high-dimensional environments such as Deepmind-Lab.
[ "Hierarchical Reinforcement Learning" ]
https://openreview.net/pdf?id=Byxr73R5FQ
https://openreview.net/forum?id=Byxr73R5FQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Hyli0PpWx4", "B1et7uIGyV", "BkgI4DIz1E", "SkekOG7cA7", "ryxPblrK0m", "HyxOpOYm07", "Bkeydt9fAm", "HJxsEFcMAQ", "SJlOZYFGA7", "ByxqybYGA7", "SJeyQWBI67", "r1g-1y95nm", "BJlo6uzc3m", "BJlZtQO4sX", "rkgf6rTQjm", "rJg0XY0WoX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "comment", "official_comment", "comment" ], "note_created": [ 1544832979228, 1543821344957, 1543821102079, 1543283302932, 1543225343057, 1542850751668, 1542789479499, 1542789427282, 1542785280225, 1542783201645, 1541980439181, 1541213913065, 1541183682673, 1539765113475, 1539720633653, 1539594534423 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1354/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1354/Authors" ], [ "ICLR.cc/2019/Conference/Paper1354/Authors" ], [ "ICLR.cc/2019/Conference/Paper1354/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1354/Authors" ], [ "ICLR.cc/2019/Conference/Paper1354/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1354/Authors" ], [ "ICLR.cc/2019/Conference/Paper1354/Authors" ], [ "ICLR.cc/2019/Conference/Paper1354/Authors" ], [ "ICLR.cc/2019/Conference/Paper1354/Authors" ], [ "ICLR.cc/2019/Conference/Paper1354/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1354/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1354/AnonReviewer1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1354/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": [\"Pros:\", \"simple, sensible subgoal discovery method\", \"strong inuitions, visualizations\", \"detailed rebuttal, 15 appendix sections\"], \"cons\": [\"moderate novelty\", \"lack of ablations\", \"assessments don't back up all claims\", \"ill-justified/mismatching design decisions\", \"inefficiency due to relying on a random policy in the first phase\", \"There is consensus among the reviewers that the paper is not quite good enough, and should be (borderline) rejected.\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-review\"}", "{\"title\": \"Response (1/2)\", \"comment\": \"Thank you for going through our work and providing valuable feedback. We hopefully address the concerns and questions raised in this response and we would be happy to expand on any point unclear in this response.\\n\\n1. The reward used to train the options:\\n One way to understand the proposed reward function is to look at Figure 22 (Appendix O). This reward function corresponds to one single option and dictates the intra-option policy for the same. The difference between the values printed on the individual states corresponds to the reward for a transition between the two states. Hence the designed reward function ensures that we reach a state with the largest value of $\\\\phi(s) . |Psi(Sub-goal)$\\n\\n2. Off-policy SR:\\nLearning the SR in an off-policy manner is extremely hard and we are unaware of any such formulations. This is primarily because it is very difficult to relate the discounted visitation counts of one policy to that of another.\\n\\n3. The candidate states formula seems very heuristic.\\nWe have a very good reason for the choice of the candidate states formula. The candidate states are those states which have a moderately developed SR. 
The formula uses the L1 norm which encodes the visitation count of a particular state (See [1]), hence encoding the degree to which the SR of a state is developed. The clustering from the SR-options ensures that the chosen sub-goals are sufficiently spread apart. Can you kindly clarify the comment: \\u201cgoing to one next state would give a 1/(1-gamma) SR value\\u201d. The maximum magnitude of the SR is 1/(1-gamma) so the statement isn\\u2019t too clear to us.\\n\\n4. Fig 5:\\nAs stated in the image caption, the images on the left are for the Successor Options and the images on the right are for Eigen-options (the method we compare to). The coloured states (yellow to light blue) are the termination states of all options. This clearly emphasizes the point that the Eigen-options have nearby sub-goals. With regards to the statement (If so many subgoals are close by, why would an agent explore?), this is precisely the problem with the Eigen-options framework, and this is the basis on which we claim that Successor options will exhibit better empirical performance. With regards to the trajectory distribution, Appendix J addresses the same. \\n\\n5. Experimental Evaluation:\\nThe reviewer claims that we use tricks to detract from the clarity of the paper, an assessment we politely disagree with.\\n\\n5a. \\u201cAuxiliary tasks make relative merit unclear:\\nCan the reviewer kindly clarify which auxiliary tasks are being referred to here? The only auxiliary loss we use is the image reconstruction loss (auto-encoder loss) which ensures that the Successor Features do not learn the null vector. This is a very commonly used loss in Successor Representation based papers ( See [2] which justify the usage of the same). This loss is very much part of our described framework and does not in an any form, hide the merit of the reported results\\n\\n5b. Sampling options less would defeat the purpose of constructing them:\\nWe have demonstrated evaluations for Q-learning (only actions), and SR-options with uniform and non-uniform exploration and hence we do not \\u201cdetract the clarity of the results\\u201d using hacks. We have attempted to be as transparent as possible in the reported results and we believe we stick to the method described. There are two points we would like to make 1) The primary focus of this work is on Option-discovery and hence we do not delve deeply into areas related to learning with options. There are a plethora of possibilities worth exploring and this does not detract from the utility of the options themselves. 2) Sampling the option less does not mean that there is no utility in constructing them. By that logic, one should expect a naive Q-learner to have the best performance (which is clearly not the case). So why is this scheme useful? Options are used to navigate to key parts of the state space (that primitive actions are unable to), while actions are used to explore the newly discovered region in state space. This is analogous to using an airplane to travel to a new city (using an option to land in new parts of state space), following which one explores the city on foot (each step is a primitive action). Hence the options are still essential, even if they are sampled infrequently since they perform a sequence of transition that has a negligible probability of happening. Sampling the options frequently would translate to jumping between regions in state space without exploring any one region. See Appendix J for more details\\n\\n[1 ] Machado, Marlos C., Marc G. 
Bellemare, and Michael Bowling. \\\"Count-based exploration with the successor representation.\\\" arXiv preprint arXiv:1807.11622 (2018).\\n\\n[2] Kulkarni, Tejas D., et al. \\\"Deep successor reinforcement learning.\\\" arXiv preprint arXiv:1606.02396 (2016).\"}", "{\"title\": \"Response (2/2)\", \"comment\": \"6a. Experiment description\\nGrid-worlds have always been deterministic and most works build on this assumption. Appendix B furnishes details on the environment. \\n\\n6b. Are parameters optimized for each algorithm?\\nWe used the same set of hyper-parameters for all the 4 grid-worlds and hence we have not excessively tuned each value, indicating the robustness of the approach. Appendix E, Appendix L and Appendix M discuss some of these choices. \\n\\n6c. Deepmind Lab experiments: The network used seems gigantic, was this optimized or was this the first choice that came to mind? Would this not overfit? What is the nonlinearity?\\n\\nSince our work focuses on option-discovery, we attempt to look at learnt options in the Deepmind Lab domain. Hence our only method for comparison is to look at other option discovery frameworks. Our network is in no way gigantic with respect to any Deep reinforcement learning work. DQN (Mnih et al[3]), uses a similar sized network for Atari. Our network has a couple more hidden layers owing to the fact that the RGB input for the Deepmind-lab task is almost 4 times bigger than the input from the Atari-2600 suite of games. Non-linearity is essential in generalizing well across high-dimensional images. \\nWith regards to the overfitting, Deep learning models tend to be heavily over-parameterized and rarely display any signs of overfitting. Furthermore, the concept of overfitting is still relatively undefined in Deep reinforcement learning since we lack clear definitions of train/test error.\", \"small_comments\": \"1. This synergy enables the rapid learning Successor representations by improving sample efficiency. AND Argmax a\\u2019 before eq 1\", \"a\": \"This is not the case in the current setup because the Option-discovery is done only once, following which the learnt options are used to solve tasks with different start and end states. Since there are a large number of such configurations, the computation to learn the options is negligible in comparison. Hence, we do not account for the cost of Option discovery. Furthermore, it is unclear how one would do so if the options address multiple MDPs/tasks.\\n\\n\\n[3] Mnih, Volodymyr, et al. \\\"Human-level control through deep reinforcement learning.\\\" Nature 518.7540 (2015): 529.\"}", "{\"title\": \"This could be an interesting paper but currently requires a lot of experimental and writing improvements\", \"review\": \"The paper proposes to use successor features for the purpose of option discovery. The idea is to start by constructing successor features based on a random policy, cluster them to discover subgoals, learn options that reach these subgoals, then iterate this process. This could be an interesting proposal, but there are several conceptual problems with the paper, and then many more minor issues, which put it below the threshold at the moment.\", \"bigger_comments\": \"1. The reward used to train the options (eq 5) could be either positive or negative. Hence, it is not clear how or why this is related to getting options that go to a goal.\\n2. Computing SRs only for a random policy seems like it will waste potentially a lot of data. 
Why not do off-policy learning of the SR while performing the option?\\n3. The candidate states formula seems very heuristic. It does not favour reaching many places necessarily (eg going to one next state would give a 1/(1-gamma) SR value)\\n4. Fig 5 is very confusing. There are large regions of all subgoals and then subgoals that are very spread out. If so many subgoals are close by, why would an agent explore? It could just jump randomly in that region for a while. It would have been useful to plot the trajectory distribution of the agent when using the learned options to see what exactly the agent is doing\\n5. There are some hacks that detract from the clarity of the results and the merits of the proposed method. For example, options are supposed to be good for exploration, so sampling them less would defeat the purpose of constructing them, but that is exactly what the authors end up doing. This is very strange and seems like a hack. Similarly, the use of auxiliary tasks makes it unclear what is the relative merit of the proposed method. It would have been very useful to avoid using all these bells and whistles and stick as closely as possible to the stated idea.\\n6. The experiments need to be described much better. For example, in the grid worlds are action effects deterministic or stochastic? Are start state and goal state drawn at random but maintained fixed across the learning, or each run has a different pair? Are parameters optimized for each algorithm? In the plots for the DM Lab experiments, what are we looking at? Policies? End states? How do options compare to Q-learning only in this case? Do you still do the non-unit exploration? The network used seems gigantic, was this optimized or was this the first choice that came to mind? Would this not overfit? What is the nonlinearity?\", \"small_comments\": [\"This synergy enables the rapid learning Successor representations by improving sample efficiency.\", \"Argmax a\\u2019 before eq 1\", \"Inconsistent notation for the transition probability p\", \"Eq 3 and 4 are incorrect (you seem to be one-off in the feature vectors used)\", \"Figure 2 is unclear, it requires more explanation\", \"Eq 6 does not correspond to eq 5\", \"In Fig 6 are the 4 panes corresponding top the 4 envs.? Please explain. Also this figure needs error bars\", \"It would be useful to plot not just AUC, but actual learning curves, in order to see their shape (eg rising faster and asymptoting lower may give a better AUC).\", \"Does primitive Q-learning get the same number of time steps as *all* stages of the proposed algorithm? If not, it is not a fair comparison\", \"It would be nice to also have quantitative results corresponding to the experiments in Fig 7.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Response to Comments\", \"comment\": \"Figure 6 and 13:\\nThe 4 plots correspond to the 4 grid-worlds in Figure 3 (More details given in Appendix B). We see that this can be confusing to a reader and will make this fact more explicit in the paper. 
We will also expand on the captions to clarify the same.\\n\\nThe comments definitely point us in directions along which we can improve the paper readability and we thank you for the same and will attempt to incorporate the same.\"}", "{\"title\": \"Additional Comments\", \"comment\": \"Thank you for these clarifications, some of my comments are below - hopefully they can further improve the presentation of the paper. However, I believe the paper still significantly lacks from a strong quantitative demonstration of the usefulness of successor options (or alternatively, more theoretical justification, although I think this is much harder to achieve). While overall this is not a bad paper, in general and as my score indicates, I would lean towards rejecting.\\n\\nFigure 6 & 13\\nThe plots here could use more description in the caption (and main text). What do the four plots correspond to? What exactly is the task/environment?\\n\\n\\\"The plot indicates that the Successor representations are segregated into 3 distinct clusters\\\"\\nI guess this is subjective, but the shown clusters do not seem especially clear; i.e., they seem quite noisy and of varying shapes. I think alternative modes of visualization would help here.\"}", "{\"title\": \"Response (2/2)\", \"comment\": \"3) SR Augmented with previously learnt options:\\nThe incremental SR-options procedure alternates between building the SR and clustering the learnt SR to generate options.\\nWhich option used? : The options generated from SR-options derived from the current SR. The SR is constantly modified and made more accurate and the options change based on the current values of the SR.\\nHow many of them?: This is the hyper-parameter which is decided beforehand like in the SR-options algorithm. This remains fixed throughout training. \\nWhat way are they used?: They are used to learn the SR using a TD-based update rule (Equation 3). They are used in conjunction with the actions as in any other options framework. As mentioned in the paper, the SR is updated only when a primitive action is executed (not when executing options).\\n\\n4) Why is L1 norm a proxy for how developed SR is:\\nMachado et al. [1] demonstrate that the L1 norm of the SR can be used as a proxy for the count in the grid-world and function approximator settings. The intuition for the same is that the L1-norm of the SR (for the grid-world case) is exactly equal to 1/(1-gamma) which is shown in Appendix E. As a result, the L1-norm of the SR starts from zero and increases with every update to this constant. Hence the magnitude of the SR serves as a good proxy for the visitation count of that state. \\n\\n5)Relation to FuN:\\nThis seems like an interesting parallel between the two methods. While FuN has a manager and a sub-manager with the manager directing the sub-manager towards a region in state-space, SR-options also has controllers operating at two levels in a hierarchy with the higher level directing the lower level controller to certain sub-goals but instead using a pseudo reward. We will expand on the similarities in a revised version of the paper.\\n\\n6) Rho in figure 4:\\nRho in Figure-4 is the layer that is reconstructed like an auto-encoder (auxiliary loss), present in the top-most head of the network (as indicated in the top right of Figure 4). 
This is necessary to ensure that the built SR does not learn a trivial solution like the null-vector.\\n\\n7a) Figure 8a t-SNE plot:\\nThe t-SNE plot demonstrates that the SR constructed by the function approximator is able to spatially segregate the states like in Figure 9. Hence the learnt SR is able to infer the graphical structure even for high dimensional spaces.\\n\\n7b) Utility of NU scheme:\\n\\u201cThis seems to be somewhat contradictory to the assumption that primitive actions are not enough to explore effectively this environment\\u201d.\\nWhile primitive actions on their own may not suffice, this does contradict the fact that we may need to use actions more frequently than options. Options lead us to new regions in the state space where primitive actions have a hard time reaching. However, primitive actions are still required to explore the region that the option has navigated to. \\n\\nThe reasoning for the same is that an option terminates at a specific state. Every time that option is called, it terminates at a state \\u2018s\\u2019 with the highest visitation count. Hence, the agent will spend a large fraction of its time around that state. This intuition is justified in Appendix J (Figure 14 and Figure 15) which demonstrates that the non-uniform exploration scheme is more beneficial.\\n\\n8) More details on the incremental approach:\\nWe will definitely improve the exposition and make the final version more clear to the reader. When the new clusters are built, the options policies are learnt from scratch since it is not trivial to map the old set of options to the new ones we intend to learn. The clusters stabilize in the end, because of the fact that we sample a collection of states based on their L1 norm. However, when the SR of all states are reasonably developed, the L1 norm of all states is expected to be identical (See Appendix E) and hence we have a random sample of states in the end. This would be equivalent to performing the original SR-options procedure.\\n\\n9) Suggested baseline approach:\\nActing greedily wrt to the SRs should be sufficient to reach the goal for the grid-worlds we worked with. However, this isn\\u2019t the case universally. We do not add this baseline since this method is not generic enough and will fail when there are two rewards present, for instance. Using the SR will result in following the Q-value function of the uniformly random policy which clearly need not be optimal for a large number of problems.\\n\\nWe hope the above response clarifies most of the concerns and queries raised by the reviewer and we would be happy to clarify more details. We hope the reviewer takes the above rebuttal into account and modifies the score appropriately.\\n\\n[1 ] Machado, Marlos C., Marc G. Bellemare, and Michael Bowling. \\\"Count-based exploration with the successor representation.\\\" arXiv preprint arXiv:1807.11622 (2018).\"}", "{\"title\": \"Response (1/2)\", \"comment\": \"Thank you for reviewing our work and for providing valuable feedback. We attempt to address some of the questions raised below:\\n\\n--Significance:\\nWhat environments does this work on? This should work with any environment with complicated reward structures (conjunction of multiple rewards for example or moving back and forth between rooms), provided that the environment has reversible transitions. The reversibility enforces that the SR-options can be learnt using our proposed intrinsic reward. 
We would also argue that questioning the applicability of SR-options is tantamount to questioning the usability of Eigen-options or bottleneck based options. Another area of applicability is the finite horizon case where the episode ends after 100 steps. Note that techniques such as Eigen-options fail in such a case.\\n\\n-- Properties of the discovered subgoal states:\\nThe aim of SR-options is to navigate to states that are \\u201clandmark states\\u201d or representatives of regions. Formally, this would translate to a state that has the largest value of $\\\\phi(s) \\\\cdot \\\\Psi(cluster-center)$. Figure 5 effectively conveys the qualitative nature of SR-Options. The state coloured in yellow (each state corresponds to the termination state for a different option) indicates that each option leads the agent to a different part of state space.\\n\\n-- Improvement in performance over Eigen-options:\\n\\nWe believe that we vastly improve over eigen-options (magnitude of improvement depends on the evaluation metric, the reward and the environment size). It is hence not trivial to label the improvement as mild/vast. To clarify this point further, we have added Appendix J (See Figure-13) which highlights this fact more clearly and plots the training curves (Figure 6 reports the area under these curves). We report scores at 200 different intervals and a difference of around 30-50 in the AUC metric is in-fact a vast improvement in performance since this implies we learn 50000 steps earlier than all other methods.\\n\\n\\n1) Choice of latent learning\\n\\n1a) Uniform random policy: The choice of the policy is something we did not discuss due to a constraint on space. We have added an Appendix N that delineates some details on the same. The policy used to form an SR is effectively a prior over the state space, determining which parts of the state space are relevant. Hence, a policy that results in the agent spending more time in a certain region in state space, will result in a larger number of sub-goals in that region. This is clearly demonstrated in Figure 21. Hence a uniformly random policy is a suitable choice for our experiments since we do not prefer any part of the state space over the other.\\n\\n1b) Choice of reward and termination condition: The primary goal of each option is to reach the state $s$ where $SR(cluster-center) . \\\\phi(s)$ is maximized. Hence in order to encourage the agent to maximze this values, we define the pseudo reward $r(s_t, s_{t+1} = SR(cluster-center). (\\\\phi(s_{t+1}) - \\\\phi(s_t))$ (No gamma here). In a grid-world $\\\\phi(s)$ is a one-hot vector and hence this would correspond to terminating the option when a state with the highest visitation count is reached (hence our claim of landmark sub-goals). Once we reach the state with maximum value of $SR(cluster-center). \\\\phi(s)$, we want to terminate the options and hence Q(s,a) <=0 (Q-value under proposed intrinsic reward) which is satisfied for this state, is a suitable condition. Moving to any other state from this state will result in a negative reward.\\n\\n1c)To clarify, Equation 4 is a typo and should include a discount factor \\\\gamma. Thank you for pointing out the same.\\n\\n1d) Identical SR value of neighbouring states: For the case of $\\\\gamma=1$, Equation 2 effectively equates the SR, to a vector encoding the visitation counts of different states. So while neighbouring states have similar counts (smoothness expected) it is understandable that they do not have the exact same visitation count. 
Hence \\\\psi_s(s_{t+1}) and \\\\psi_s(s_{t}) are unlikely to have the same value when an expectation over visitation count is applied.\\n\\n2) SR and smoother transition:\\nThe SRs do not have very smooth transitions across boundaries and this is further indicative of the fact that a uniformly random policy has a low probability of transitioning from one well-connected region to another (hence we need options to do the same). However, even if the magnitudes are very small, this does not necessarily mean that the agent lacks sufficient information. The shaping always points in the right direction even if the shaping rewards have small magnitudes. This small magnitude suffices for the agent to determine the suitable action from each state. Figure 22 (Appendix O) visualizes the SR values in a more lucid fashion.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for reviewing our work and for providing valuable feedback. We will attempt to answer all of the queries raised in this response.\\n\\n1)Explaining Figure 5\\n\\nIn eigen-options, the termination condition is deterministic. We consider a sub-goal to be those states, where the option terminates. Each eigen-option is tied to a specific colour, and the colours marked on the grid-world correspond the sub-goals of the respective options (states where eigen-option terminates). Note that each eigen-option can have multiple sub-goals or just a single sub-goal (at least one sub-goal guaranteed since every option terminates). The plot illustrates that eigen-options often have overlapping sub-goals (See Figure 5, two images on the right), which implies that the various eigen-options terminate in nearby regions in state-space. In contrast, each SR-option is useful since it leads the agent to a region in state space, that no other option navigates to (See Figure 5, two images on the right).\\n\\n2)The evaluation method in Figure 6 seems highly non-standard; the acronym AUC is usually not used in this setting. Why not just plot the line plots themselves?\\n\\nWe wanted to use a single metric in order to compare the different methods (based on the speed of learning and quality of final policy, both of which are captured in the defined AUC). Although the AUC is not commonly used, it can be argued to be extremely similar to regret, commonly used in Bandit literature. We have also added the line-plots (See Appendix I, Figure 13) which more clearly highlights the improvement in performance as a result of using SR-options. \\n\\n3) Interpreting Figure 8:\\n\\nFigure 8 plots the successor representation for a sample set of states in Deepmind-Lab. The plot indicates that the Successor representations are segregated into 3 distinct clusters, indicating that the SRs learn to segregate the states into 3 regions. We are looking at other methods to visualize the Deepmind-Lab options since an aerial view isn\\u2019t viable. The first option moves down along a corridor and the second option goes through the door, from 1 room to another. \\n\\n4) Other environments:\\n\\nSince other option discovery methods are also applicable to solving the tasks we consider, we modify them so as to improve the task difficulty further. This is done by restricting each episode to a fixed horizon, thus not allowing random walks to aid in obtaining a well-represented SR and hence good SR-options. 
We argue that this is a reasonable setting as (i) Many complex tasks restrict access to parts of state space inherently when using random actions because of their local dynamics (consider Montezuma\\u2019s revenge where random exploration does not allow visiting all parts of the state space) and (ii) analysing over grid-worlds still allows us to gather useful insights on how better our method performs over prior techniques (this is difficult in the function approximation setting). Such a setting is precisely realized in the incremental SR-options case, where we show that directed exploration is possible even when most parts of the state space is initially inaccessible. The final set of options are better equipped in solving any reward based task as compared to the initial starting set. This is not possible when using other techniques such as eigen-options.\\n \\nThe primary goal of these options is to provide the agents access to different parts of the state-space by performing a sequence of transitions that typically have a low probability. Hence, regardless of the reward structure (say conjunction of multiple rewards), these options are useful on tasks which have reversible transitions (since latent learning is valid in this situation). Tasks which require going back and forth between two regions in state space (say retrieving a key in another room to open door in one room) would also benefit from SR-Options. It is also easy to make the argument that SR-Options benefit cases where Eigen-options and Bottleneck options are useful in nature. \\n\\nWe hope the above response clarifies most of the concerns raised by the reviewer and we would be happy to further clarify any more details. We hope the reviewer takes the above rebuttal into account and modifies the score appropriately.\"}", "{\"title\": \"Response\", \"comment\": \"Firstly, thank you for spending your time, reviewing our work and for the valuable feedback. We hope to answer a few of the concerns and queries raised by the reviewer.\\n\\n1) Incremental comparing to previous methods learning eigen-option and learning bottleneck states based on SRs. \\n\\nWe believe that our work is significantly different from the Eigen-options [1] and Eigen-options based on Successor Representation [2]. Our primary contribution in this work is to provide an option discovery mechanism to landmark sub-goals, a type of sub-goal that hasn\\u2019t been considered in prior works. With regards to the work on Eigen-options with Successor Representations, Machado et al. [2] prove a theorem indicating that under certain assumptions, the options discovered by Eigen-options and Successor Representation based Eigen-options are identical in nature. Hence there isn\\u2019t any similarity to our work, in terms of the nature of options discovered, or the manner in which Successor Representations are used. We believe that the only commonality between the two papers is the problem of option discovery that both works attempt to address. \\n\\n2) How sensitive is the performance to the number of subgoals? How is the number of options determined?\\nWe did not excessively tune the number of sub-goals and chose the number to be close to the number of rooms in that environment. Figure 19 in Appendix \\u201cL\\u201d varies the number of sub-goals for the 3rd environment (3rd Figure from left in Figure 3). 
We don\\u2019t observe a significant variance in performance and believe our method is fairly robust to this hyper-parameter and Figure 18 qualitatively explains the same since the locations of the sub-goals adapt to the hyperparameter, the total number of sub-goals.\\n\\n3) It is unclear to me why the termination set of every option is the set of all states satisfying Q(s, a)\\\\leq 0. Such a definition seems very adhoc.\\n\\nThe aim of each of each option from SR-options is to reach a state $s\\u2019$ such that $s\\u2019 = argmax_{k} \\\\phi(k) \\\\cdot SR(cluster-center)$. The reward function naturally encodes the same, by assigning a reward of r(s_i, s_j) = SR(goal) [ \\\\phi(s_j) - \\\\phi(s_i) ] i.e. we attempt to move to states with higher values of the component of SR. Q(s,a)<=0 for a point where $\\\\phi(s\\u2019) \\\\cdot. SR(cluster-center)$ is the highest possible value and hence the option is set to terminate here. Hence we use this condition. In the tabular case, $\\\\phi(s\\u2019)$ is a one-hot vector. Hence $\\\\phi(s\\u2019) \\\\cdot. SR(cluster-center)$ would correspond the state which has highest visitation count starting from \\u201ccluster-center\\u201d. Hence Q(s,a)<=0 encodes the intuition that we terminate the option once we land in a state with the highest discounted visitation count (dictated by SR) or in other words, a \\u201clandmark state\\u201d. This is precisely the reason that we claim that SR-options naturally direct you towards landmark states. \\n\\n4) Regarding the comparison to eigenoptions, are they learned from SR as well? It seems that the cited paper for eigenoptions is based on the graph laplacian. If that is true, a more recent paper learns eigenoptions based on SR should be discussed and included for the comparison.\\n\\nWe learn the eigenoptions based on the graph laplacian and not based on the SR. However. Machado et al. [2] show that the eigen-vectors of the SR (uniformly random policy) and that of the graph Laplacian are identical. Hence the learnt options are exactly identical in nature (for both methods). Therefore, the more recent paper will result in no differences in terms of qualitative and quantitative evaluation. This also adds to the point that our work is fundamentally different from [2]. Although Machado et al. [2] work with the SR, the discovered options are still the results of using the eigen-vectors of the graph laplacian and don\\u2019t yield different options. \\n\\nWe hope the above response clarifies some of the concerns and queries raised by the reviewer and we would be happy to further clarify any more details. We hope the reviewer takes the above rebuttal into account and modifies the score appropriately.\\n\\n[1] Machado, Marlos C., Marc G. Bellemare, and Michael Bowling. \\\"A laplacian framework for option discovery in reinforcement learning.\\\" arXiv preprint arXiv:1703.00956 (2017).\\n[2] Machado, Marlos C., et al. \\\"Eigenoption Discovery through the Deep Successor Representation.\\\" arXiv preprint arXiv:1710.11089 (2017).\"}", "{\"title\": \"Interesting ideas but experimental evaluation lacking\", \"review\": \"The authors propose a method based on successor representations to discover options in a reward-agnostic fashion. 
The method suggests to accumulate a set of options by (1) collecting experience according to a random policy, (2) approximating successor representation of or states, (3) clustering the successor representations to yield \\u201cproto-typical\\u201d cluster centers, and (4) defining new options which are the result of policies learned to \\u201cclimb\\u201d the successor representation of the proto-typical state. The authors provide a few qualitative and quantitative evaluations of the discovered options.\\n\\nI found the method for discovering options reasonable and interesting. The authors largely motivate the method by appealing to intuition rather than mathematical theory, although I understand that many of the related works on option discovery also largely appeal to intuition. The visualizations in Figure 5 (left) and Figure 7 are quite convincing.\", \"my_concerns_focus_on_the_remaining_evaluations\": \"-- Figure 5 (right) is difficult to understand. How exactly do you convert eigenoptions to sub-goals? Is there a better way to visualize this?\\n\\n-- The evaluation method in Figure 6 seems highly non-standard; the acronym AUC is usually not used in this setting. Why not just plot the line plots themselves?\\n\\n-- Figure 8 is very difficult to interpret. For (a), what exactly is being plotted? SR of all tasks or just cluster centers? What should the reader notice in this visualization? For (b), I don\\u2019t understand the options presented. Is there a better way to visualize the goal or option policy?\\n\\n-- Overall, the evaluation is heavy on qualitative results (many of them on simple gridworld tasks) and light on quantitative results (the only quantitative result being on a simple gridworld which is poorly explained). I would like to see more quantitative results showing the the discovered options actually help solve difficult tasks.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A sound technique, but incremental comparing to previous methods learning eigenoption and learning bottleneck states based on SRs.\", \"review\": \"This paper studies of the problem of HRL. It proposed an option discovery algorithm based on successor representations (SR). It considers both tabular and function approximation settings. The main idea is to first learn SR representation, and then based on the leant SR representation to clusters states using kmeans++ to identify subgoals. It iterate between this two steps to incrementally learn the SR options. The technique is sound, but incremental comparing to previous methods learning eigenoption and learning bottleneck states based on SRs.\", \"here_are_the_comments_for_this_manuscript\": \"How sensitive is the performance to the number of subgoals? How is the number of option determined?\\n\\nIt is unclear to me why the termination set of every option is the set of all states satisfying Q(s, a)\\\\leq 0. Such a definition seems very adhoc.\\n\\nRegarding the comparison to eigenoptions, arethey learned from SR as well? It seems that the cited paper for eigenoptions is based on the graph laplacian. 
If that is true, a more recent paper learns eigenoptions based on SR should be discussed and included for the comparison.\\n\\nMachado et al., Eigenoption Discovery through the Deep Successor Representation, ICLR2018\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting idea and direction, but both the method and the derived insights need more work and understanding.\", \"review\": \"\", \"summary\": \"This paper tries to tackle the option discovery problem, by building on recent work on successor representation and eigenoptions. Although this is an extreme important problem, I feel the paper fails to deliver on its promise. The authors propose a way of clustering states via their SR/SF representation and they argue that this would lead to the discovery of subgoals that are fundamentally different from the popular choices in literature, like bottleneck states. They argue that this discovery procedure would lead to states \\u201cbetter for exploration\\u201d, \\u201cprovide greater accessibility to a larger number of states\\u201d. Both of which sound promising, but I felt the actual evaluation fails to show or even assess either of these rigorously. Overall, after going through the paper, it is not clear what are the properties of these discovered subgoal states and why they would be better for exploration and/or control.\", \"clarity\": \"Can be improved significantly! It requires several reads to get some of the important details. See detailed comments.\", \"originality_and_significance\": \"Very limited, at least in this version. The quantitative, and in some cases qualitative, evaluation lacks considerably. The comparison with the, probably most related, method (eigenoption) yield some slight improvement. But unfortunately, I was not conceived that this grain of empirical evidence would transfer to other scenarios. I can\\u2019t see why that would that be the case, or in which scenarios this might happen. At least those insights seem to be missing from the discussion of the results.\", \"detailed_comments_and_questions\": \"1) Section 3.1: Latent Learning. There are a couple of design choices here that should have been more well explained or motivated:\\ni) The SR were built under the uniformly random policy. This is a design choice that might work well for gridworld/navigation type of domains but there are MDP where the evaluation under this particular policy can yield uninformative evaluations. Nevertheless this is an interesting choice that I think deserved more discussion, especially the connection to previous work on proto-value functions and eigenoptions. For instance, if both of these representations -- eigenoptions and the proposed successor option model -- aim to represent the SR under the uniform policy, why does know do (slightly) better than the other? Moreover, how would these compare under a different policy (non-uniform). \\nii) The choice of reward. The notation is a bit confusing here, as it\\u2019s somewhat inconsistent with the definitions (2-4). Also, more generally, It is not clear throughout if we are using the discounted or undiscounted version of SR/SFs -- (2-3) introduce the discounted version, (4) seems to be undiscounted. Not clear if (5) refers to the discounted or undiscounted version. Nevertheless, I am guessing this was meant as a shaping reward, thus \\\\gamma=1 for (5). 
But if that\\u2019s the case, according to eq. (2), most of the time I would expect \\\\psi_s(s_{t+1}) and \\\\psi_s(s_{t}) to have the same value. Could you explain why that is not true (at least in your examples)?\\niii) Termination set: Q(s,a)<=0. This again seems a bit of an arbitrary choice and it\\u2019s not clear which reward this value function takes into account. \\n\\n2) Figure 2: The first 2 figures representing the SRs for the two room domain: the values for one of the rooms seems to be zero, although one would expect a smoother transition around the \\u2018doorway\\u2019, otherwise the shaping won\\u2019t point in the right direction for progression. Again, this might suggest that more informative(control) policy might give you more signal. \\n\\n3) Section 3.2: \\u2018The policy used for learning the SR is augmented with the previously learnt options\\u2018. Can you be more precise about how this is done? Which options used? How many of them? And in which way are they used? This seems like a very important detail. Also is this augmented policy used only for exploration? \\n\\n4) SRmin < \\\\sum_{s\\u2019} \\u03c8(s, :) < SRmax. Is this meant to be an expectation over all reachable next states or all states in the environment? How is this determined or translated in a non-tabular setting. Not sure why this is a proxy to how \\u2018developed\\u2019 this learning problem or approximation is. Can you please expand on your intuition here?\\n\\n5) Section 3.3. The reward definition seems to represent how much the progress between \\\\phi(s_t+1) - \\\\phi(s) aligns with the direction of the goal. This is very reminest of FuN [2] -- probably a connect worth mentioning and exploring.\\n\\n6) Figure 4: Can you explain what rho is? It seems to be an intermediate representation for shared representation \\\\phi. Where is this used?\\n\\n7) Experiments:\\n\\u201ca uniformly random policy among the options and actions (typically used in exploration) will result in the agent spending a large fraction of it\\u2019s time near these sub-goals\\u201d. Surely this is closely linked to the termination condition of the option and the option policy. How is this assessed?\\n\\n\\u201cin order to effectively explore the environment using the exploration policy, it is important to sample actions and options non-uniformly\\u201d. It would be good to include such a comparison, or give a reason why this is the case. It\\u2019s also not clear how many of the options we are considering in this policy and how extensive their horizons will be. This comes back to the termination condition in Section 3.1 which could use an interpretation. \\n\\n\\u201cIn all our experiments, we fix the ratio of sampling an option to an action as 1:19.\\u201d This seems to be somewhat contradictory to the assumption that primitive actions are not enough to explore effectively this environment. \\n\\nFigure 8. I think this experiment could use some a lot more details. Also it would be good to guide the reader through the t-SNE plot in Figure 8a. What\\u2019s the observed pattern? How does this compare to the eigenoption counterpart.\\n\\n8) General comment on the experiments: There seems to be several stages in learning, with non-trivial dependencies. I think the exposition would improve a lot if you were more explicit about these: for instance, if the representation continually refined throughout the process; when the new cluster centers are inferred are the option policies learnt from scratch? 
Or do they build on the previous ones? Does this procedure converge -- aka do the clusters stabilize?\\n\\n9) Quantitative performance evaluation was done only for the gridworld scenarios and felt somewhat weak. The proposed tasks (navigation to a goal location) is exactly what SFs are trained to approximate. No composition of (sub)tasks, nor tradeoff-s of goals were studied [1,3] -- although they seem natural scenario of option planning and have been studied in previous SFs work. Moreover, if the SFs are built properly, in these gridworlds acting greedily with respect to the SFs (under the uniformly random policy) should be enough to get you to the goal. Also, probably this should be a baseline to begin with.\", \"references\": \"[1] Andre Barreto, Will Dabney, Remi Munos, Jonathan J Hunt, Tom Schaul, Hado P van Hasselt, and \\u00b4 David Silver. Successor features for transfer in reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4055\\u20134065, 2017.\\n\\n[2] Vezhnevets, A.S., Osindero, S., Schaul, T., Heess, N., Jaderberg, M., Silver, D. and Kavukcuoglu, K., 2017, July. FeUdal Networks for Hierarchical Reinforcement Learning. In International Conference on Machine Learning (pp. 3540-3549).\\n\\n[3] Barreto, A., Borsa, D., Quan, J., Schaul, T., Silver, D., Hessel, M., Mankowitz, D., Zidek, A. and Munos, R., 2018, July. Transfer in deep reinforcement learning using successor features and generalised policy improvement. In International Conference on Machine Learning (pp. 510-519).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"comment\": \"Thanks a lot for the reply, really interesting work and a really good read!\", \"title\": \"Thanks for the reply!\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for spending time reading our work.\\nThe incremental successor options algorithm (3.2) addresses the tabular setting. Each option does have it's own option head (or more precisely it's own Q-value table). Hence, when the clusters change, the Q-value tables (and hence intra-option-policy) are all learnt from scratch for the newly assigned clusters. As a result, we do not have the problem of mapping options to appropriate option heads. This is because we do not re-use the value functions/policies from earlier options and the mapping is irrelevant. \\n\\nFor the case of the function approximators, where each option has it's own option head, this problem needs to be addressed. This work does not deal with incremental successor options in a function approximation scenario. One possible solution would be to re-initialize all the option-policy head weights and consequently randomly assign the new options to option-policy heads. These option-policies are hence again learnt from scratch in a manner identical to the tabular setting. \\n\\nHope this answers the question raised.\"}", "{\"comment\": \"Hi, one quick question, do you have one option policy head per option? And if so, then in the incremental successor options algorithm (3.2), when the cluster centers change on the go, how do you assign which option head should go to which cluster centers?\\nRegards.\", \"title\": \"Intra-option policy heads\"}" ] }
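Several quantities argued over in the thread above, namely the tabular SR update, the L1-norm test for a "moderately developed" SR, clustering SR rows to obtain landmark sub-goals, and the pseudo-reward psi(cluster centre) . (phi(s') - phi(s)), fit into one short sketch. This is a minimal reconstruction from the discussion rather than the authors' implementation: the grid world is replaced by a ring of states, plain k-means stands in for k-means++, and every hyper-parameter value is assumed.

```python
import numpy as np

def learn_sr(transitions, n_states, gamma=0.95, alpha=0.1):
    """Tabular successor representation via TD(0): psi(s) += alpha * (phi(s) + gamma * psi(s') - psi(s))."""
    psi = np.zeros((n_states, n_states))
    eye = np.eye(n_states)
    for s, s_next in transitions:
        psi[s] += alpha * (eye[s] + gamma * psi[s_next] - psi[s])
    return psi

def candidate_states(psi, low, high=np.inf):
    """Keep states whose SR L1 norm (a visitation-count proxy) is 'moderately developed'."""
    norms = np.abs(psi).sum(axis=1)
    return np.where((norms > low) & (norms < high))[0]

def landmark_sr_vectors(psi, candidates, k, n_iters=50, seed=0):
    """Plain k-means over the SR rows of candidate states; the centres act as landmark prototypes."""
    rng = np.random.default_rng(seed)
    X = psi[candidates]
    centres = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(n_iters):
        assign = np.argmin(((X[:, None, :] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centres[j] = X[assign == j].mean(axis=0)
    return centres

def option_pseudo_reward(centre, s, s_next, n_states):
    """r(s, s') = psi(centre) . (phi(s') - phi(s)); an option trained on this reward is
    terminated once no action has a positive Q-value, i.e. once the landmark is reached."""
    eye = np.eye(n_states)
    return float(centre @ (eye[s_next] - eye[s]))

# Toy usage on a ring of 20 states explored by a uniformly random walk.
n, rng = 20, np.random.default_rng(1)
s, transitions = 0, []
for _ in range(5000):
    s_next = (s + rng.choice([-1, 1])) % n
    transitions.append((s, s_next))
    s = s_next
psi = learn_sr(transitions, n)
centres = landmark_sr_vectors(psi, candidate_states(psi, low=1.0), k=4)
print(option_pseudo_reward(centres[0], 3, 4, n))
```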
HyeS73ActX
Multi-Objective Value Iteration with Parameterized Threshold-Based Safety Constraints
[ "Hussein Sibai", "Sayan Mitra" ]
We consider an environment with multiple reward functions. One of them represents goal achievement and the others represent instantaneous safety conditions. We consider a scenario where the safety rewards should always be above some thresholds. The thresholds are parameters with values that differ between users and are not known at the time the policy is being designed. We efficiently compute a family of policies that covers all threshold-based constraints and maximizes the goal achievement reward. We introduce a new parameterized threshold-based scalarization method of the reward vector that encodes our objective. We present novel data structures to store the value functions of the Bellman equation that allow their efficient computation using the value iteration algorithm. We present results for both discrete and continuous state spaces.
[ "reinforcement learning", "Markov decision processes", "safety constraints", "multi-objective optimization", "geometric analysis" ]
https://openreview.net/pdf?id=HyeS73ActX
https://openreview.net/forum?id=HyeS73ActX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SkxR7jIhyV", "SkgO4p1qCQ", "Hyl_NqRB0m", "rkx3U3TkRX", "rJxDJ8Sbp7", "Bkgi1BrbaQ", "Hke844Sb6X", "Bkg-5-ht2m", "SyxwYxTMnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1544477478282, 1543269679563, 1543002671704, 1542605907717, 1541653982919, 1541653731304, 1541653549905, 1541157256747, 1540702334806 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1353/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1353/Authors" ], [ "ICLR.cc/2019/Conference/Paper1353/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1353/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1353/Authors" ], [ "ICLR.cc/2019/Conference/Paper1353/Authors" ], [ "ICLR.cc/2019/Conference/Paper1353/Authors" ], [ "ICLR.cc/2019/Conference/Paper1353/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1353/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The main issue with the work in its current form is a lack of motivation and some clarity issues. The paper presents some interesting ideas, and will be much stronger when it incorporates a more clear discussion on motivation, both for the problem setting and the proposed solutions. The writing itself could also be significantly improved.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Clarity and motivation need improvement\"}", "{\"title\": \"Thank you for your time reading the paper and giving feedback\", \"comment\": [\"To avoid confusion, d is the dimension of the parameter space and is not related to the number of users. But yes, for a single patient, the parameters will be fixed and the reward vector will be scalarized. We added a comment below about the motivation of the work after a question from reviewer1. I hope that would show why this work is important.\", \"Remember that we compute the sum by assigning the intersection of every pair of hyper rectangles, from the two functions being added in the space [0,1]^d, the sum of the values associated with these hyper rectangles. The parts of the domain that do not belong to the intersection will have negative infinity value since at least one of the rectangles will have negative infinity value there. Also, recall that the value of a function at a point in the parameter space is the maximum of the values associated with the enclosing rectangles. At any point in the parameter space, the sum of the max values of the hyper rectangles enclosing the point from both functions will be the max of the sums of any two pairs of values of enclosing hyper rectangles since the values are all non-negative. Hence, every point in the domain/parameter space will get a value equal to the sum of the values of the two functions at that point.\", \"We acknowledge that experimental evaluations are going to be important; they are in the works; however, we believe that they are somewhat orthogonal to the contributions claimed in the current submission.\"]}", "{\"title\": \"I better understand the motivation now\", \"comment\": \"The author's response was useful to understanding what they view as important about the problem. I think its an interesting problem, but the authors should revise to make a clear case for the significance and offer examples of contexts where the approach is practical and serves an important need. 
Such a context needs to be one where the complexity of the algorithm is not prohibitive while that of solving for each agent individually would render the trivial alternative impractical.\"}", "{\"title\": \"Interesting direction, but clarity should be improved\", \"review\": \"I generally like the paper. The paper discussed a constrained value iteration setting where the safety contraints must be greater some threshold, and thresholds \\\\delta are parameters. The paper attempts to develop an value iteration algorithm to compute a class of optimal polices with such a parameter. The algorithm is mainly based on a special design of representation/data structure of PWC function, which can be used to store value functions and allows to efficiently compute several relevant operations in bellman equation. A graph-based data structure is developed for continuous state domains and hence value iteration can be extended to such cases.\\n\\nIn general, the paper presents an interesting direction which can potentially help solve RL problems with the proposed constraint setting. However, the paper spends lots of effort explaining representations, but only a few sentences explaining about how the proposed representations/data structures can help find a somehow generic value iteration solution, which allows to efficiently compute/retrieve a particular solution once a \\\\delta vector is specified. The paper should show in detail (or at least give some intuitive explanations) that using the proposed method can be more efficient than solving a value iteration for each individual constraint given that the constraints are independent. Specifically, the author uses the patient case to motivate the paper, saying that different patients may have different preferred thresholds and it is good to find class of policies so that any one of those policies can be retrieved once a threshold is specified. However, in this case, when dealing with only one patient, the dimension of reward is reduced to 1 (d = 1), while the computation of the algorithm is exponential in d, plus that the retrieval time is not intuitive to be better, so it is unsure whether computing such a class of policies worth.\\n\\nIn terms of novelty, the scalarization method of the vector-valued reward seems intuitive, since breaking a constraint means a infeasible solution. Furthermore, it is also unclear why the representation of PWC in discrete case is novel. A partial order on a high-dimensional space is naturally to be based on dominance relation, as a result, it seems natural to store value function by using right-top coordinates of a (hyper)rectangle.\\n\\nAs for the clarity, though the author made the effort to explain clearly by using examples after almost every definition/notation, some important explanations are missing. I would think the really interesting things are the operations based on those representations. For example, the part of computing summation of two PWC function representation is not justified. Why the summation can be calculated in that way? Though the maximum operation is intuitive, however, the summation should have some justification. I think a better way to explain those things is to redefine a new bellman operator, which can operate on those defined representations of PWC function. \\n\\nI think it could be a nice work if the author can improve the motivation and presentation. 
Experiments on some simple domains can also be helpful.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thank you for your time reading the paper and giving feedback\", \"comment\": \"We replied to the general audience above, as this is an important question and we thought it should be a general comment.\"}", "{\"title\": \"Motivation of the work and main contributions\", \"comment\": \"Recall that the problem addressed in our paper is to compute the optimal value functions for a family of constraints/thresholds. Recall also that, for the following discussion, S is the state space, A is the action set, d is the dimension of the reward vector, and T is the time horizon.\\n\\n1. Why is this an important problem? Two reasons:\\n\\n(a) Computing an optimal policy for a family: A family of users with different preferences needs to be served according to their respective optimal policies, with a (parameterized) policy that is computed once and for all. This arises when the computation cost of the optimal policy is greater than the retrieval cost from a parameterized policy computed offline. \\n (i) Computing a single policy takes O((d + |S|)|S||A| T) time in the discrete time case (|S| |A| Q-functions to compute at each of the T time steps. Computing each Q-function takes O(d) time to compute the reward and O(|S|) time to compute the expectation). Retrieving a policy takes O(T log^{d-1} (d(|S||A|e/d)^{2d})) time as shown in Section 4.5.\\n (ii) Computing a single policy in the continuous case takes O((c d |A|)^T), where c is the dimension of the state feature space. On the other hand, our retrieval time for a policy is O(T log^d (2 d|A|^{4T})). \\n\\n(b) Sensitivity analysis: Consider a common situation where the user is not absolutely sure about her preferences as represented by a specific threshold. She needs to know how sensitive the optimal action is to her current preference. If the optimal action changes and has a significantly better reward under a small change of preference, she might change her preference.\\n\\n\\nA concrete instance. Revisiting the example from the paper, different patients have different preferences for the thresholds on side effects. Accurately choosing a threshold for an individual would be hard (this depends on many personal factors). Our algorithm provides the range of thresholds that makes an action (medication) an optimal choice. Our method will give more information to the patient on how sensitive the optimal action is to the particular threshold she chooses.\\n\\nMoreover, it is more computationally efficient (when d is small relative to T, |S| and |A|) to compute the family of policies for all preferences and only retrieve the optimal one based on an incoming patient preference, rather than computing one for every incoming patient. \\n\\n2. Exponential improvements in scalability.\\nThe complexity bound we achieve for synthesizing the family of policies is exponential in d, polynomial in |S| and |A|, and linear in T; in (Lizotte et al 2012), for the discrete case, the only case analyzed was d=2, and the bound is T(|S||A|)^T, which is exponential in T.
For the continuous case, our bound is exponential in T while that of (Lizotte et al 2012) for learning in the continuous case is doubly exponential in T.\"}", "{\"title\": \"Thank you for your time reading the paper and giving feedback\", \"comment\": [\"This paper makes theoretical contributions with (a) developing value iteration algorithms for known MDPs with discrete and continuous state spaces that generate policies for all parameters for a parameterized reward and (b) in providing complexity bounds (see the discussion above regarding bounds).\", \"For the discrete part, the Pareto front computations can be done using off-the-shelf Pareto front computation algorithm as mentioned Section 4.2. Is there anything specific about the method that is unclear?\", \"We will work on simplifying the discussion of the bounds. The state space size only affects the bound by the dimension of the feature space c (first line of the last paragraph of Section 5.4). Moreover, the last paragraph (starting from \\u201cHowever, for a fixed state,\\u2026\\\") discusses the complexity for a fixed state. We will try to simplify presentation in general. If you can point to specific parts that are unclear, please let us know, that would help too.\", \"For the continuous part, we have an efficient method now that would appear in a later work.\", \"The RL extension of the work is planned future work. This would go roughly like this: start with offline data/trajectories as in (Lizotte et al. 2010 and 2012); learn the reward functions for the reward vector components and the transition probabilities of the MDP from data and then apply our algorithm for this MDP.\", \"We acknowledge that experimental evaluations are going to be important; they are in the works; however, we believe that they are somewhat orthogonal to the contributions claimed in the current submission.\", \"Please if you have any further comments or questions let us know.\"]}", "{\"title\": \"why is this an important problem?\", \"review\": \"The authors provide an algorithm that aims to compute optimal value functions and policies as a function of a set of constraints. The ideas used for designing the algorithm seem reasonable. However, I don't fully understand the motivation here. Given a set of constraints, one can simply carry out value iteration with what the authors call the scalarized reward in order to generate an optimal policy. Why go through the effort to compute things in a manner parameterized by the constraints? Perhaps the intention is to use this for sensitivity analysis, though the authors do not discuss that?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Interesting first step, but hard to follow and no practical demonstrations.\", \"review\": \"Summary\\n\\nThe authors consider RL with safety constraints, which is framed as a multi-reward problem. At a high-level, this involves finding the Pareto front, which optimally trades off objectives. The paper primarily introduces and discusses a discretization scheme and methods to model the Q-value function as a NIPWC (non-increasing piecewise constant function). NIPWC are stored as values over discrete partitions of state-action spaces. 
To do so, the authors introduce two data structures DecRect and ContDecRect to store Q function values over geometric combinations of subsets of state-action space.\\n\\nThe authors discuss how to execute elementary operations on these data structures, such as computing max(f(x), g(x)), weighted sums, etc. The goal is to use these operations to compute Bellman-type updates to compute optimal value/policy functions for multi-reward problems. The authors also present complexity analysis for these operations. \\n\\nPro\\n- Extensive discussion and analysis of discrete representations of Q-functions as NIPWCs. \\n\\nCon\\n- A major issue with this work is that it is very densely written and spends a lot of time on developing the discretization framework and operations on NIPWC. However: \\n- There is no clear practical algorithm to solve (simple) multi-reward RL problems with the authors' approach.\\n- No experiments to demonstrate a simple implementation of these techniques.\\n- Even though multi-reward settings are the stated problem of interest, authors don't discuss Pareto front computations in much detail, e.g., section 4.3 computing non-dominated actions is too short to be useful.\\n- The discussion around complexity upper bounds is too dense and uninsightful. For instance, the bounds in section 5 all concern bounds on the Q-value as a function of the action, which results in upper bounds as a function of |A|. But in practice, the action is space is often small, but the state space is high-dimensional. Hence, these considerations seem less relevant. \\n\\nOverall, this work seems to present an interesting computational scheme, but it is hard to see how this is a scalable alternative. Practical demonstrations would benefit this work significantly.\\n\\nReproducibility\\nN/A\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
H1V4QhAqYQ
Augment your batch: better training with larger batches
[ "Elad Hoffer", "Itay Hubara", "Niv Giladi", "Daniel Soudry" ]
Recently, there has been renewed interest in large-batch training of neural networks, both in theory and in practice. New insights and methods have allowed certain models to be trained using large batches with no adverse impact on performance. Most works focused on accelerating wall-clock training time by modifying the learning rate schedule, without introducing accuracy degradation. We propose to use large-batch training to boost accuracy and accelerate convergence by combining it with data augmentation. Our method, "batch augmentation", uses multiple instances of each sample in the same large batch. We show empirically that this simple yet effective method improves convergence and final generalization accuracy. We further suggest possible reasons for its success.
[ "Large Batch Training", "Augmentation", "Deep Learning" ]
https://openreview.net/pdf?id=H1V4QhAqYQ
https://openreview.net/forum?id=H1V4QhAqYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Bkg7WLfWgN", "H1e8jGltRm", "BygBHGetAX", "rkxiJGxFRQ", "SygfJScCh7", "HyxXFQr93Q", "HJxBD9g52X", "Bkxz6RFD27", "r1xfC5BZ57", "Hyx56MEb9X", "B1gG-GNW5m", "HygOo-VbqQ", "ByxmMenAFX", "SyxdsQZAKm", "Bkgdfp03FX", "BylUSs_3Y7", "Syg9ycvsY7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "comment", "official_comment", "comment" ], "note_created": [ 1544787450928, 1543205533666, 1543205436645, 1543205347137, 1541477593710, 1541194618798, 1541175901341, 1541017273916, 1538509513744, 1538503361840, 1538503161651, 1538503072367, 1538338827174, 1538294687847, 1538219280382, 1538194237984, 1538124257672 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1352/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1352/Authors" ], [ "ICLR.cc/2019/Conference/Paper1352/Authors" ], [ "ICLR.cc/2019/Conference/Paper1352/Authors" ], [ "~Alex_Matthew_Lamb1" ], [ "ICLR.cc/2019/Conference/Paper1352/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1352/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1352/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1352/Authors" ], [ "ICLR.cc/2019/Conference/Paper1352/Authors" ], [ "ICLR.cc/2019/Conference/Paper1352/Authors" ], [ "ICLR.cc/2019/Conference/Paper1352/Authors" ], [ "(anonymous)" ], [ "~Eddie_Smolyansky1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1352/Authors" ], [ "~Eddie_Smolyansky1" ] ], "structured_content_str": [ "{\"metareview\": \"The authors propose to use large batch training of neural networks, where each batch contains multiple augmentations of each sample. The experiments demonstrate that this leads to better performance compared to training with small batches. However, as noted by Reviewers 2 and 3, the experiments do not convincingly show where the improvement comes from. Considering that the described technique is very simplistic, having an extensive ablation study and comparison to the strong baselines is essential. The rebuttal didn\\u2019t address the reviewers' concerns, and they argue for rejection.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"evaluation should be improved\"}", "{\"title\": \"Reply to AnonReviewer3\", \"comment\": \"1. There may have been a misunderstanding: we compared our method to the baseline, and both had the same type of data augmentation (e.g. in ResNet we did this comparison for both normal augmentation and cutout separately). We are therefore certain that the generalization improvements stem from batch-augment method, as it appears for all augmentation schemes we've tried.\\n\\n2. Both ensemble methods and \\\"Distributed knowledge distillation\\\" result in models larger then the original model, and therefore requires additional resources at run time (after training). In contrast, our method does not have this issue as the final trained method is identical to the original. Moreover, our method is much simpler and does not require any change of settings - as we used the original training regime without modifications. Lastly, in many cases it is possible to increase the batch size without affecting the wall clock time, due to surplus compute power (e.g., Table 2). 
In those cases, our BA method can easily be used to take advantage of this surplus large batch size and improve the final model accuracy, as we demonstrated. \\n\\n3. We kindly disagree, as we feel that the results on various models and datasets show a consistent and (mostly) non-trivial improvement over baseline results. As others have shown before, a batch size of 64*32=2048 is not small, and it often yields a noticeable decrease in accuracy when the training regime is not adapted [1].\\nWe note that experiments similar to the ones the reviewer asked for were previously done in [1, Table 1&2] for several datasets and models. For example, for a baseline of 93.07% (ResNet44 on the CIFAR-10 dataset), we improved to 93.65%. However:\\n(i) With a large batch, without adapting the training regime, for the same number of steps: accuracy drops to 86.10% [1].\\n(ii+iii) When using a large batch for the same number of steps and the learning rate is increased, accuracy returns to 93.07%. For experiment (ii) we note that accuracy is marginally worse. We've added this experiment to the paper along with convergence graphs (Figure 5, Appendix).\\n\\n4. \\\"For theorem 1, it is hard to say how much the theoretical analysis based on linear approximation near global minimizer would help understand the behavior of SGD.\\\" \\nOur theoretical analysis is focused on how SGD selects stationary points, using stability analysis. Such stability analysis requires linearization.\\n\\\"I fail to understand the the authors\\u2019 augmentation. Following the author\\u2019s logic, normal large batch training decrease the variability of <H>_k and \\\\lambda_max, which converges to \\u2018\\u2019flat\\u2019\\u2019 minima. It contradicts with the authors\\u2019 other explanation.\\\"\", \"there_may_have_been_a_misunderstanding\": \"increasing the batch size will not decrease flatness, i.e. the maximal eigenvalue of the Hessian (as defined in Keskar et al.), which is different from \\\\lambda_max. To clarify, we suggested in section 4 that BA works well since it enables the model to observe more augmentations, with only a small effect on the variance (since most of the samples in the mini-batch are highly correlated). This is in contrast to standard large-batch training, which works less well since it has a larger effect on the variance.\\n\\n5. As we explained in section 4.2, batch augmentation causes each batch to have correlated samples (different instances of the same image). When computing gradients on this batch we accumulate multiple gradient instances -- leading to smaller variance and, hence, a smaller norm.\\nThis reduction is smaller than the reduction in large-batch training, since the batch instances are much more highly correlated. We've added an additional figure (Figure 6, Appendix) that demonstrates this point.\\n\\n[1] \\\"Train Longer Generalize Better\\\" - Hoffer et al (NIPS 2017).\"}", "{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"We thank the reviewer for his remarks and positive assessment of our work.\"}", "{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"1. Regarding mixup: mixup requires a mixed input from two separate labels as well as a target mixed by the same amount. It does not deal with data augmentations as BA does (multiple instances of the same sample). Therefore, the mixup approach is orthogonal to ours and both can be combined.\\n\\n2. We used different augmentation techniques to emphasize the improvement of batch-augment upon them all.
We commonly used augmentation for each network (for example, cutout is common for modern CIFAR-10-based models, but not for ImageNet). We agree that applying BA with other augmentation techniques would make for interesting experiments that could further improve accuracy, but we argue that this is not the essence of our work. \\n\\n3. We've added an additional experiment regarding training for longer with an M*B batch size (accounting for the same number of examples) in Appendix Figure 5. We wish to clarify that in each experiment we've performed (including the baseline) the same augmentation technique was used (according to the original paper, or explicitly stated, as in the case of cutout).\\n\\n4. We stress that in this work, we do not suggest a new type of augmentation technique but rather a method that utilizes any type of augmentation. Thus, we argue that BA should be as robust to adversarial attacks as the augmentation technique it utilizes (e.g., cutout, random cropping, flipping, etc.). Nonetheless, we thank the reviewer for his suggestions and encourage researchers to use BA with different augmentation strategies.\"}", "{\"comment\": \"\\\"1.\\tIs the regularized model robust to adversarial attacks as suggested in Mixup and Manifold Mixup?\\\"\\n\\nBoth mixup and manifold mixup have improved robustness to the single-step FGSM attack. Neither leads to robustness on multi-step attacks like PGD.\", \"title\": \"Note on Manifold Mixup and Adv. Robustness\"}", "{\"title\": \"Interesting idea with insufficient support\", \"review\": \"This paper tested a very simple idea: when we do large batch training, instead of sampling more training data for each minibatch, we use data augmentation techniques to generate training data from a small minibatch. The authors claim the proposed method has better generalization performance.\\n\\nI think it is an interesting idea, but the current draft does not provide sufficient support.\\n\\n1. The proposed method is very simple. In this case, I would expect the authors to provide more intuitive explanations. It looks to me like the better generalization comes from more complicated data augmentation, not from the proposed large batch training. \\n\\n2. It is unclear to me what the benefit of the proposed method is. Even provided with more computing resources, the proposed method is not faster than small batch training. The improvement on test errors does not look significant. If given more computing resources, and under the same timing constraint, we have many other methods to improve performance. For example, a simple thing to do is to separately train networks with the standard setting and then ensemble the trained networks. Or apply distributed knowledge distillation as in (Anil 2018 \\nLarge scale distributed neural network training through online distillation)\\n\\n3. The experiments are not strong. The largest batch considered is 64*32, which is relatively small. In figure 1 (b), the results of M=4,8,16,32 are very similar, and it looks unstable. It is unclear what the default batch size for ImageNet is. In Table 1, the proposed method tuned M as a hyperparameter. The baselines are fairly weak, and the authors did not compare with any other method.
I would expect at least the following baselines:\\ni) use normal large batch training and complicated data augmentation, train the model for same number of epochs\\nii) use normal large batch training and complicated data augmentation, train the model for same number of iterations\\nii) use normal large batch training and complicated data augmentation, scale the learning rate up as in Goyal et al. 2017\\n\\n4. For theorem 1, it is hard to say how much the theoretical analysis based on linear approximation near global minimizer would help understand the behavior of SGD. I fail to understand the the authors\\u2019 augmentation. Following the author\\u2019s logic, normal large batch training decrease the variability of <H>_k and \\\\lambda_max, which converges to \\u2018\\u2019flat\\u2019\\u2019 minima. It contradicts with the authors\\u2019 other explanation. \\n\\n5. In section 4.2, I fail to understand why the proposed method can affect the norm of gradient. \\n\\n\\n6. Related works:\\nSmith et al. 2018 Don't Decay the Learning Rate, Increase the Batch Size. \\n\\n\\n=============== after rebuttal ====================\\nI appreciate the authors' response, but I do not think the rebuttal addressed my concerns. I will keep my score and argue for the rejection of this paper. \\n\\nMy main concern is that the benefit of this method is unclear. The main baseline that has been compared is the standard small-batch training. However, the proposed method use a N times larger batch and same number of iterations, and hence N times more computation resources. Moreover, the proposed method also use N times more augmented samples. Like the authors said, they did not propose new data augmentation method, and their contribution is how to combine data augmentation with large-batch training. However, I am not convinced by the experiments that the good performance is from the proposed method, not from the N times more augmented samples. I have suggested the authors to compare with stronger baselines to demonstrate the benefits. However, the authors quote a previous paper that use different data augmentation and (potentially) other experimental settings. \\n\\nThe proposed method looks unstable. Moreover, instead of showing the consistent benefits of large batch, the authors tune the batchsize as a hyperparameter for different experiments. \\n\\nRegarding the theoretical part, I still do not follow the authors' explanation. I think it could at least be improved for clarity.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"simple idea that works along with some theory to support it\", \"review\": \"This paper describes a new method for data augmentation which is called batch augmentation. The idea is very simple -- include in your batch M augmentations of the each training sample, effectively this will increase the size of the batch by M. I have not seen a similar idea to this proposed before. As the authors show this simple technique has the potential to increase training convergence and final accuracy. Several experiments support the paper's claims illustrating the effectiveness of the technique on a variety of datasets (e.g. CIFAR, ImageNet, PTB) and architectures (ResNet, Wide-ResNet, DenseNet, MobileNets). Following that there's a more theoretical section which provides some analysis on why the method works, and seems also reasonable. 
Overall simple idea, well written-paper with clear practical application and of potential great interest to many researchers\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"review\", \"review\": \"The paper shows that training with large batch size (e.g., with MxB samples) serves as an effective regularization method for deep networks, thus improving the convergence and generalization accuracy of the models. The enlarged batch of MxB consists of multiple (i.e., B) transforms of each of the M samples from the given batch; the transform is executed by a data augmentation method such as Cutout or Dropout. The authors also provide a theoretical explanation for the working of the method, suggesting that the enlarged batch training decreases the gradient variance during the training of the networks.\\n\\nThe paper is well written and easy to follow. Also, some interesting results are experimentally obtained such as the figures presented in Figure4. Nevertheless, the experimental studies are not very satisfactory in its current form.\", \"major_remarks\": \"1.\\tIn terms of regularization with transformed data in a given batch, the proposed method is related to MixUp (Zhang et al., mixup: Beyond empirical risk minimization), AdaMixUp (Guo et al., MixUp as Locally Linear Out-Of-Manifold Regularization), Manifold Mixup (Verma et al., Manifold Mixup: Learning Better Representations by Interpolating Hidden States), and AgrLearn (Guo et al. Aggregated Learning: A Vector Quantization Approach to Learning with Neural Networks). It would be useful for the authors to discuss how the proposed strategy differs from them or empirically show how the proposed regularization method compares to them in terms of regularization effect. For example, in MixUp, AdaMixup and Manifold Mixup, the samples in a given batch will be linearly interpolated with randomly reshuffled samples of the same batch. In these sense, using them as baselines would make the contribution of the proposed method much significant. \\n2.\\tIn the experiments, it seems the authors use different data augmentation methods for different datasets (except for Cifar10 and Cifar100), it would be useful to stick with a particular data augmentation method for all the datasets, for example, it would be interesting to see the performance of also using Cutout for the MobileNet and ResNet50 on the ImageNet data set. \\n3.\\tRegarding the experimental study, I wonder if it would be beneficial to include three variations of the proposed method. First, use baseline with the same batch size, namely BxM, but with sampling with replacement. That is, using the same batchsize as that in Batch Augmentation but with repeated samples. In this way, the contribution of the data augmentation in the proposed method would be much clearer. Second, as suggested from the results in the PTB data in Table1, using only Dropout obtains very minor improvement over the baseline method. In this sense, using other data augmentation methods instead of Cutout for the image tasks would make the contribution of the paper much clear. Third, training the networks with the batchsize of BxM, but excluding the original data samples in the given batch would be another interesting experiment. 
That is, all samples of the batch in the batch augmentation are synthetic samples.\", \"minor_remarks\": \"1.\\tIs the regularized model robust to adversarial attacks as suggested in Mixup and Manifold Mixup?\\n2.\\tWould it be beneficial to include various data augmentation methods for the same batch? That is, each transformed sample may come from a different data augmentation strategy.\\n\\n==========after rebuttal===========\\n\\nMy main concern is that the paper did not clearly show where the performance improvement comes from. It may simply come from the larger batch size instead of the added augmented samples as claimed by the paper. I think the current comparison baseline in the paper is insufficient. I did propose three comparison baselines in my initial review, but I am not satisfied with the authors' rebuttal on that.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Mixup\", \"comment\": \"Thanks for your interest. Notice that mixup requires a mixed input from two separate labels as well as a mixed target by same amount. It does not deal with data augmentations as BA (multiple instances of same sample). Therefore, Mixup approach is orthogonal to ours and both can be combined. We welcome our readers to try and incorporate our ideas in this setting.\"}", "{\"title\": \"answer\", \"comment\": \"Thanks.\\n1. Indeed, that is a typo. It should be 93.07%, we will fix it in the next revision.\\n2. In the graph we posted, RA was measured with B=640, see the clarification we posted for the case B=64.\"}", "{\"title\": \"clarification\", \"comment\": \"Thanks for your interest. Please see the clarification we added to address your question.\"}", "{\"title\": \"clarifications\", \"comment\": \"We would like to clarify the main points of our paper:\\n1. In many cases, it is possible to increase the batch size without affecting the wall clock time, due to surplus compute power (e.g., see Table 2). In those cases, our BA method can be used to take advantage of this large batch size, and improve the final model accuracy, as we demonstrated.\\n2. We suggested in section 4 that the reason that BA works well is that it enables the model to observe more augmentations, with only a small effect on the variance (since most of the samples in the mini-batch are highly correlated). This is in contrast to standard large-batch + augmentation (i.e., Regime Adaptation from [1]) which works less well since it has a larger effect on the variance.\\n\\nThe comments here suggested that instead of using BA with M=10, we simply increase the number of iterations x10 (keeping a small batch size of B=64). This results with the same accuracy gain as doing BA with the same M. This is not surprising since we observe more augmentations, while not changing the mini-batch variance. However, we would not gain from the computational benefits of a larger batch as the training time can be roughly x10 times longer.\", \"we_would_also_take_the_opportunity_to_report_a_result_obtained_after_the_paper_was_submitted\": \"Using the AlexNet model, with [B=512, M=8], we obtained a top-1 accuracy of 62.308% (up from baseline 57% and from 60% obtained by [2]). 
This echoes our messages clearly given table 2 in the paper -- using the same wall-clock time you can increase your model accuracy significantly by using BA.\\n\\n[1] \\\"Train longer generalize better\\\" (2017) - Hoffer, Hubara, Soudry\\n[2] \\\"Scaling SGD Batch Size to 32K for ImageNet Training\\\" - You, Gitman, Ginsburg\"}", "{\"comment\": \"Haven't had a chance to read the whole paper yet but was surprised to not see any mention of mixup. Seems to me like it would make sense to send 1-2 augmented copies of each sample to the device then expand the size of the batch by using mixup on multiple combinations of those samples.\", \"title\": \"Mixup?\"}", "{\"comment\": \"Thank you for the response, what a great platform to have a discussion. I believe you have understood my question correctly and addressed it.\", \"small_comments\": \"1. There might be a typo in this paper. In the final paragraph of section 3.1 you report the results of [1] are 93.04 but in the paper itself I find 93.07. Is it a coincidence that it's the same number as the baseline for \\\"ResNet44 (He et al., 2016)\\\" in this paper (shouldn't it be higher)?\\n2. I'm assuming in (3) above you mean B=64. Also, I can't access the gdrive link but I believe your report.\", \"title\": \"Followup\"}", "{\"comment\": \"Would you please show the results of \\\"RA (B=64, M=1, Epochs=1000)\\\"?\\nIt may be a more proper comparison with BA than \\\"RA (B=640, M=1, Epochs=1000)\\\".\", \"title\": \"Comparison\"}", "{\"title\": \"answer - comparing same number of instances\", \"comment\": \"We thank you for the interest and question.\\nTraining with large batch was noted in previous works to cause degradation in validation accuracy. While previous works focused on reducing the wall clock time without suffering from this degradation (\\\"generalization gap\\\"), our method is first to suggest significant improvement with large batches. \\nIf we understand you correctly, you are interested to see if our improvements can also be gained with training using larger batches for the same number of iterations (called \\\"regime adaptation\\\" in [1]). \\nThat way, the same number of image instances is seen by the model as in our method (but with a larger number of epochs). This kind of comparison is described in the last paragraph of section 3.1.\", \"the_full_training_results_are_available_at_https\": \"//drive.google.com/file/d/1mcHSnIx_dxjwTeYuUIrJmmcaLKQ-jDU5/view?usp=drivesdk\\nhere you can see a comparison between \\n(1) baseline B=64 training\\n(2) our batch augmentation (BA) method with M=10\\n(3) regime adaptation (RA) with B=640 and 10x more epochs\\n\\nIn the validation accuracy graph, you can observe that although the same number of sample instances were seen for both (2) and (3), our BA method still achieved a considerable improvement.\\nWe hope we answered your concerns.\\n\\n[1] \\\"Train longer generalize better\\\" (2017) - Hoffer, Hubara, Soudry\"}", "{\"title\": \"Are the comparisons done with the same amount of image instances?\", \"comment\": \"Hi, it seems to me that due to the augmentation, there are now M times more image instances per epoch of training, right?\\nPerhaps I missed it in the paper, but have your comparisons (in Fig1, Fig2 for example) been done with the same amount of image instances per epoch? 
A clarification could be to show Fig1 and Fig2 with the x-axis being \\\"image instances\\\" rather than \\\"epochs\\\".\", \"my_question_boils_down_to\": \"how have you demonstrated that the improved accuracy is due to the augmentations, as opposed to simply having a larger batch size?\"}" ] }
HkzNXhC9KQ
Adaptive Sample-space & Adaptive Probability coding: a neural-network based approach for compression
[ "Ken Nakanishi", "Shin-ichi Maeda", "Takeru Miyato", "Masanori Koyama" ]
We propose Adaptive Sample-space & Adaptive Probability (ASAP) coding, an efficient neural-network-based method for lossy data compression. Our ASAP coding distinguishes itself from the conventional method based on adaptive arithmetic coding in that it models the probability distribution for the quantization process in such a way that one can conduct back-propagation with respect to the quantization width that determines the support of the distribution. Our ASAP also trains the model with a novel, hyper-parameter-free multiplicative loss for the rate-distortion tradeoff. With our ASAP encoder, we are able to compress the image files in the Kodak dataset to as little as one fifth the size of the JPEG-compressed images without compromising their visual quality, and we achieve state-of-the-art results in terms of the MS-SSIM-based rate-distortion tradeoff.
[ "Data compression", "Image compression", "Deep Learning", "Convolutional neural networks" ]
https://openreview.net/pdf?id=HkzNXhC9KQ
https://openreview.net/forum?id=HkzNXhC9KQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJlI4Cirl4", "Hkl-uw_m1N", "BJe0-zhTRQ", "BkeJntYY0Q", "S1xac_YtAm", "SygOmuttCX", "ByxvpTVyTm", "BJgY5e1337", "r1xWuPcOh7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545088558376, 1543894889037, 1543516677889, 1543244198557, 1543243924814, 1543243808326, 1541520831290, 1541300369277, 1541085032707 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1350/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1350/Authors" ], [ "ICLR.cc/2019/Conference/Paper1350/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1350/Authors" ], [ "ICLR.cc/2019/Conference/Paper1350/Authors" ], [ "ICLR.cc/2019/Conference/Paper1350/Authors" ], [ "ICLR.cc/2019/Conference/Paper1350/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1350/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1350/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper presents an interesting approach to image compression, as recognized by all reviewers. However, important concerns about evaluating the contribution remains: as noted by reviewers, evaluating the contribution requires disentangling what part of the improvement is due to the proposed approach and what part is due to the loss chosen and evaluation methods. While authors have done a valuable effort adding experiments to incorporate reviewers suggestions with ablation studies, it does not convincingly show that the proposed approach truly improves over existing ones like Balle et al. Authors are encouraged to strengthen their work for future submission by putting particular emphasis on those questions.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting work but important evaluation concerns remain\"}", "{\"title\": \"Thank you!\", \"comment\": \"In addition to the added analysis and observations we stated in the revision, we are also inferring from our results that the energy landscape of MS-SSIM contains multiple local extrama and it is, at least for the dataset we have studied, difficult to optimize. In fact, the model optimized for MS-SSIM is worse in terms of MS-SSIM than the model optimized for proposed multiplicative loss not only on test set, but notably also on training set.\\n\\nproposed multiplicative loss (bpp x (1 - msssim) x mse):\\ntrain/bpp train/msssim train/mse val/bpp val/msssim val/mse\\n0.939190 0.006595 57.649879 0.953790 0.007047 64.067696\\n\\nbpp x (1 - msssim):\\ntrain/bpp train/msssim train/mse val/bpp val/msssim val/mse\\n0.913846 0.008405 106.999641 0.926681 0.008613 99.586456\\n0.955831 0.008065 96.597839 0.974577 0.008393 100.409378\\n\\nbpp + lmb x (1 - msssim): \\u203b C=32\\ntrain/bpp train/msssim train/mse val/bpp val/msssim val/mse\\n1.077895 0.006823 85.435677 1.091407 0.007135 88.454208\\n0.859573 0.008996 109.393616 0.860719 0.009333 108.734512\\n(evaluation for a training sample and a validation sample of ImageNet )\\n\\nThe evaluations on validation set are identical to the ones shown in Fig.7.\\nNote that, MS-SSIM score on the validation set is not only smaller in general for the model optimized for proposed multiplicative loss,\\nthe difference between the training and validation is not also larger for the model optimized for proposed multiplicative loss than the model optimized for MS-SSIM. 
This suggests that the model is not doing well in the direct optimization about MS-SSIM score on the training set; that is, MS-SSIM is on its own a difficult cost function to train with.\\n\\nOur observations and claims are also supported by the fact that the MS-SSIM does not increase smoothly with bpp (Fig 7). As discussed in https://arxiv.org/pdf/1511.08861.pdf, MS-SSIM loss alone is a tricky energy, and the mentioned paper also introduces a mixture L1 and MS-SSIM. We do admit that there are some room left to study for the loss, and we plan addressing it further in the future works.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thanks for the detailed response. It is interesting to see that the proposed quantization helps on the small network. However, I still think there are some aspects that should be explored more like the role of the loss terms.\"}", "{\"title\": \"Thank you very much for comments and suggestions!\", \"comment\": \"Thank you very much for comments and suggestions. We made revisions to reflect the suggestions made and to resolve the concerns raised. We would also like to provide responses to the comments below:\\n\\n\\\"A major issue is, however, that it is unclear from the results whether the gains are due to the novel quantization system, or due to the novel loss. From Fig. 7 it looks like the loss BPP + \\\\lambda (1-MS-SSIM) (assuming the formula in (6)) is correct, and the legend in Fig. 7 incorrect) that is used in most other works performs essentially on par with Rippel & Bourdev 2017, Nakanishi et al. 2018. For example at 1 bpp, this loss yields an MS-SSIM of 0.992 which is essentially the same as Rippel & Bourdev 2017, Nakanishi et al. 2018 obtain, cf. Fig. 2. To show that the improvement is due to the learned quantization and not just >because of the loss (8) an ablation experiment should be done. One could e.g. train the proposed method with the same predictor, but without the employing the learned quantization scheme, and compare to the results obtained for the proposed method.\\\"\\n\\nThis was in fact a concern for the other reviewers too, and we conducted an ablation study to assess the effect of \\u2018dropping\\u2019 the adaptive quantization width. That is, we conducted a set of experiments in which we used a fixed quantization width for all latent features. As we can see in the newly added figure, the algorithm with adaptive quantization width was able to perform equally well with the \\u2018best\\u2019 fixed-width quantization in terms of MS-SSIM. We also conducted a same set of ablation studies with a smaller model (no resblock, smaller # channels). For this ablation study, adaptive quantization width worked much better than the fixed quantization. \\n\\n\\n\\\"Furthermore, a better understanding of the loss (8) would be desirable. How is the MSE factor justified?\\\"\\n\\nUnfortunately, we are unable to provide a solid answer to this question yet. \\nAlthough not too intuitive, the training with the MSE-included multiplicative loss was in fact smoother than the training with MS-SSIM loss. Empirically, the presence of MSE often seemed to help the training process evade the local minima in the MS-SSIM landscape. As we can see in the figure added in the Appendix, the rate-distortion curve is much smoother for the compression results produced by the model trained with the MSE-included multiplicative loss. 
However, when we checked the effect of the multiplicative loss on the difference between the test result and the training result, it seemed that we can at least say that the inclusion of MSE does not have a `regularization effect`.\\n\\n\\n\\\"Also, it would be good to present visual examples at rates 0.1-1.0 bpp. All visual examples are at rates below 0.08 bpp, and the proposed method is shown to also outperform other methods at much higher rates.\\\"\\n\\nWe added a set of visual results for the medium-high bpp (0.1~) compressions in the revision.\"}", "{\"title\": \"Thank you very much for the comments!\", \"comment\": \"Thank you very much for the comments! We would like to respond to each one of them below, in order.\\n\\n\\n\\\"What is the difference on the architecture used in the proposed method and other compared methods? Since various numbers of layers or neurons lead to very big differences on the resulting performance.\\\"\\n\\nPractically, the most convoluted part of our algorithm is in our structure for the recursive construction of the latent variable z (Figure 11). We have this structure for every k, which ranges from 1 to 10. We do have to admit that our model is much deeper than the one used in Balle et al, which uses practically 7 layers for both encoder and decoder. \\n\\n\\n\\\"Besides, it seems that the proposed method for compressing images includes more\\ncomplex calculations. Is it faster or slower than others? \\\"\\n\\nWhen compared to the computation with fixed quantization width, we expect approximately 50% increase, because we are merely changing the parameters subject to the optimization from (mu, sig) to (mu, sig, q). Also, much of the computation in the quantization process can be parallelized (Figure 5), and in the light of that, the additional computational burden of concern shall not be much of a challenge.\"}", "{\"title\": \"Thank you very much for a thorough review!\", \"comment\": \"Thank you very much for the comments and suggestions. We reflected the suggestions on the revisions.\\nBelow, we would like to provide responses to the concerns raised: \\n\\n\\n\\\"There is an inconsistency between section 2.2.2 and Fig. 7. I would expect in Fig. 7 also the result for the objective \\nfunction from eq (6). In Fig. 7 could be added also a reference approach (such as BPG or a learned method).\\\"\\n\\nThank you very much for pointing this typo. The caption in our original submission was wrong; we meant BPP + lambda*(1 - MSSSIM) in all places we wrote BP + lambda * MSE. \\nWe fixed the legend in all graphs.\\n\\n\\n\\\"By comparing Fig.7 and Fig.2 and Nakanishi et al., I suspect that the improvement of ASAP over Nakanishi et al., comes mainly from the change in the objective function and not from the proposed ASAP core?\\\"\\n\\nWe shall first note that, as we state in the main article, the result of our model in Fig.7 was produced by training the model over 0.1M, which is significantly less than the number of iterations used to produce the results for Fig 2 (0.3M). \\nThat being said, to make an assessment for this concern, we conducted an ablation study in which we compared the compression performance of our algorithm against those of the algorithm with fixed quantization width (all trained with the new objective function), and added a new figure illustrating the result. 
On the model we trained for the benchmark study, our ASAP was able to perform equally well as the compression with the \\u2018best\\u2019 fixed quantization width (best in terms of MS-SSIM). For the assessment on the benchmark dataset, the benefit of our study turned out to be a relief from the burden of grid search. We also conducted a separate ablation study with a smaller version of the model we used for a benchmark dataset. For this second set of comparative experiments, we were able to confirm the benefit of the adaptive quantization size in terms of the rate-distortion tradeoff measured in MS-SSIM. Adaptive width performed better than all choices of fixed quantization width.\\n\\n\\n\\\"The paper should include/discuss also the paper of Balle et al., \\\"Variational image compression with a scale hyperprior\\\", ICLR 2018\\\"\\n\\\"The authors target only MS-SSIM, however it is unclear to me why. According to Balle et al, ICLR 2018 and also to the recent CLIC challenge at CVPR18, learning for MS-SSIM or for PSNR / MSE leads to results that are not necessarily strongly correlated with the perceptual quality. MS-SSIM does not strongly correlate to the perceptual quality, while PSNR / MSE is a measure of fidelity / accuracy towards a ground truth. I would like to see a comparison in PSNR / MSE terms with BPG and/or Balle et al., ICLR 2018.\\\"\\n\\nWe mentioned the work of Balle et al in the script, and added their rate-distortion tradeoff curve in the figures. We also evaluated the performance of our method with PSNR scores, and compare our method against Balle et al as well. In terms of PSNR, our method(optimized for the novel loss) was not able to perform better than Balle et al\\u2019s model optimized for PSNR. In terms of MS-SSIM, our method performed better than Balle et al\\u2019s model optimized for MS-SSIM. \\n\\n\\n\\\"I would like to see a discussion on the complexity, design, runtime, and memory >requirements for the proposed approach in comparison with the other learned methods.\\\"\\n\\nIndeed, making the quantization width adaptive will increase computational cost to a certain extent. \\nRoughty speaking, we do not expect much more than 50% increase in the sheer cost (mu, sig) \\u2192 (mu, sig, q). Also, as shown in Fig. 5, our computation admits parallel computing (the grids with same color can be computed in parallel). With the aid of GPU, this increase in computational burden shall not be much of a challenge. \\n\\n\\n\\\"Also, it would be good to see more visual results, also for higher bpp.\\\"\\n\\nWe added the visual results for higher bpp.\"}", "{\"title\": \"interesting approach; good MS-SSIM results; lacking insights / MSE evaluation\", \"review\": \"Adaptive Sample-space & Adaptive Probability (ASAP) lossy compressor is proposed. ASAP is based on neural networks.\\nASAP jointly learns the quantization width and a corresponding adaptive quantization scheme. Some steps are similar to Nakanishi et al. 2018.\\nASAP with the bpp x (1-MS-SSIM) x MSE loss improves in terms of MS-SSIM over Rippel & Bourdev 2017 and Nakanishi et al. 2018 on Kodak and RAISE-1k datasets.\\n\\nThe idea of jointly learning the quantization width and the adaptive quantization is interesting and the MS-SSIM results are good.\\n\\nThere is an inconsistency between section 2.2.2 and Fig. 7. I would expect in Fig. 7 also the result for the objective function from eq (6). In Fig. 
7 could be added also a reference approach (such as BPG or a learned method).\\n\\nBy comparing Fig.7 and Fig.2 and Nakanishi et al., I suspect that the improvement of ASAP over Nakanishi et al., comes mainly from the change in the objective function and not from the proposed ASAP core? \\n\\nThe paper should include/discuss also the paper of \\nBalle et al., \\\"Variational image compression with a scale hyperprior\\\", ICLR 2018\\n\\nThe authors target only MS-SSIM, however it is unclear to me why.\\nAccording to Balle et al, ICLR 2018 and also to the recent CLIC challenge at CVPR18, learning for MS-SSIM or for PSNR / MSE leads to results that are not necessarily strongly correlated with the perceptual quality. MS-SSIM does not strongly correlate to the perceptual quality, while PSNR / MSE is a measure of fidelity / accuracy towards a ground truth. \\n\\nI would like to see a comparison in PSNR / MSE terms with BPG and/or Balle et al., ICLR 2018.\\n\\nI would like to see a discussion on the complexity, design, runtime, and memory requirements for the proposed approach in comparison with the other learned methods.\\n\\nAlso, it would be good to see more visual results, also for higher bpp.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Overall score 7\", \"review\": \"1. An ASPA coding method was proposed in this paper for lossy compression, which achieves the state-of-the-art performance on Kodak dataset and RAISE-1k dataset.\\n\\n2. What is the difference on the architecture used in the proposed method and other compared methods? Since various numbers of layers or neurons lead to very big differences on the resulting performance.\\n\\n3. Besides, it seems that the proposed method for compressing images includes more complex calculations. Is it faster or slower than others?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Is it the loss or the quantization that matters?\", \"review\": \"The paper proposes Adaptive Sample-space & Adaptive Probability (ASAP) coding for image compression based on neural networks. In contrast to most prior methods, which adhere to a fixed quantization scheme (i.e. with fixed number of quantization levels, and fixing the level themselves), the proposed method jointly learns a probability model of the quantized representation (the bottleneck of an autoencoder model) for coding and a corresponding adaptive quantization scheme. The distribution of each entry in the bottleneck before quantization is modeled as a Gaussian, whose mean and variance are predicted by a neural network conditionally on bottleneck entries on a grid at different scales (similar as in Nakanishi et al. 2018). The same network also predicts quantization intervals to adaptively quantize the respective entry of the bottleneck. Together, the predicted means, variances, and quantization intervals are used to obtain an estimate of the code length. The proposed compression networks are trained with a novel multiplicative loss, showing clear improvements over prior methods Rippel & Bourdev 2017, Nakanishi et al. 
2018 on the Kodak and Raise1k data sets in terms of MS-SSIM.\", \"pros\": \"The results presented in this paper seem to be state-of-the-art, and innovation on quantization, which has not attracted a lot of attention in the context of neural network-based image compression is a welcome contribution. The method also seems to outperform the recent method [1], which should be included for comparison.\", \"questions\": \"A major issue is, however, that it is unclear from the results whether the gains are due to the novel quantization system, or due to the novel loss. From Fig. 7 it looks like the loss BPP + \\\\lambda (1-MS-SSIM) (assuming the formula in (6)) is correct, and the legend in Fig. 7 incorrect) that is used in most other works performs essentially on par with Rippel & Bourdev 2017, Nakanishi et al. 2018. For example at 1 bpp, this loss yields an MS-SSIM of 0.992 which is essentially the same as Rippel & Bourdev 2017, Nakanishi et al. 2018 obtain, cf. Fig. 2. To show that the improvement is due to the learned quantization and not just because of the loss (8) an ablation experiment should be done. One could e.g. train the proposed method with the same predictor, but without the employing the learned quantization scheme, and compare to the results obtained for the proposed method.\\n\\nFurthermore, a better understanding of the loss (8) would be desirable. How is the MSE factor justified?\\n\\nAlso, it would be good to present visual examples at rates 0.1-1.0 bpp. All visual examples are at rates below 0.08 bpp, and the proposed method is shown to also outperform other methods at much higher rates.\\n\\n\\n[1] Ball\\u00e9, J., Minnen, D., Singh, S., Hwang, S.J. and Johnston, N. Variational image compression with a scale hyperprior. ICLR 2018.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
BJGVX3CqYm
Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search
[ "Bichen Wu", "Yanghan Wang", "Peizhao Zhang", "Yuandong Tian", "Peter Vajda", "Kurt Keutzer" ]
Recent work in network quantization has substantially reduced the time and space complexity of neural network inference, enabling their deployment on embedded and mobile devices with limited computational and memory resources. However, existing quantization methods often represent all weights and activations with the same precision (bit-width). In this paper, we explore a new dimension of the design space: quantizing different layers with different bit-widths. We formulate this problem as a neural architecture search problem and propose a novel differentiable neural architecture search (DNAS) framework to efficiently explore its exponential search space with gradient-based optimization. Experiments show we surpass the state-of-the-art compression of ResNet on CIFAR-10 and ImageNet. Our quantized models with 21.1x smaller model size or 103.9x lower computational cost can still outperform baseline quantized or even full precision models.
[ "Neural Net Quantization", "Neural Architecture Search" ]
https://openreview.net/pdf?id=BJGVX3CqYm
https://openreview.net/forum?id=BJGVX3CqYm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkxzQhSNeV", "S1en76o8JV", "Bkg-reU814", "HygvY3NL14", "S1x1RsXUyE", "SJgKsz7UkE", "SkxtLPZIyE", "ryx7F8ZUy4", "BJxyhZZ81E", "SJllVhg81E", "BylDxFgU14", "r1ghBj1LyN", "SJxDRhw1k4", "B1g2pMvyyE", "HklEISzkyN", "SkgBcEzyyN", "HkxOBWTQRX", "BkgipyamCm", "r1xrlPh7Cm", "S1gQ08hQRm", "S1lyG-h7A7", "H1guYU4w67", "SygQwnxPaX", "H1gK_Y3J6Q", "rkeBRN_AhX", "Byxe-jVOhQ" ], "note_type": [ "meta_review", "comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1544997913662, 1544105251539, 1544081465359, 1544076414861, 1544072135272, 1544069792871, 1544062800908, 1544062587043, 1544061351378, 1544059943602, 1544059119506, 1544055620419, 1543630031473, 1543627459607, 1543607627929, 1543607437490, 1542865215995, 1542864835326, 1542862573164, 1542862538529, 1542861063114, 1542043263585, 1542028378859, 1541552496774, 1541469389487, 1541061367701 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1349/Area_Chair1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "ICLR.cc/2019/Conference/Paper1349/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1349/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "ICLR.cc/2019/Conference/Paper1349/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1349/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1349/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1349/Authors" ], [ "ICLR.cc/2019/Conference/Paper1349/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a quantization framework that learns a different bit width per layer. It is based on a differentiable objective where the Gumbel softmax approach is used with an annealing procedure. The objective trades off accuracy and model size.\\n\\nThe reviewers generally thought the idea has merit. Quoting from discussion comments (R4): \\\"The paper cited by AnonReviewer 3 is indeed close to the current submission, but in my opinion the strongest contribution of this paper is the formulation from architecture search perspective.\\\"\\nThe approach is general, and seems to be reasonably efficient (ResNet 18 took \\\"less than 5 hours\\\")\\n\\nThe main negatives are the comparison to other methods. In the rebuttal, the authors suggested in multiple places that they would update the submission with additional experiments in response to reviewer comments. 
As of the decision deadline, these experiments do not appear to have been added to the document.\", \"in_the_discussion\": \"R4: \\\"This paper seems novel enough to me, but I agree that the prior work should at least be cited and compared to. This is a general weakness in the paper, the comparison to relevant prior works is not sufficient.\\\" R3: \\\"Not only novel, but more general han the prior work mentioned, but the discussion / experiments do not seem to capture this.\\\"\\n\\nWith a range of scores around the borderline threshold for acceptance at ICLR, this is a difficult case. On the balance, it appears that shortcomings in the experimental results are not resolved in time for ICLR 2019. The missing results include ablation studies (promised to R4) and a comparison to DARTS (promised to R3): \\\"We plan to perform the suggested experiments of comparing with exhaustive search and DARTS. The results will be hopefully updated before the revision deadline and the camera-ready if the paper is accepted.\\\" These results are not present and could not be evaluated during the review/discussion phase.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Area chair recommendation\"}", "{\"comment\": \"I only meant I agree that the computational cost is not a linear function of precision. Sorry about the confusion and thanks for the reply.\", \"title\": \"Clarification about agreement\"}", "{\"title\": \"Response\", \"comment\": \"First of all, the \\\"on-the-fly\\\" memory consumption you mentioned is at most bi-linear (weight-bit x activation-bit), instead of exponential, with respect to precisions. A simple fact is, to multiply an M-bit weight with an N-bit activation, it involves M*N bit-wise multiplications (M*N bit memory footprint), and the result can be stored in a (M+N)-bit number. Unless your fixed point arithmetic is implemented by a LUT, exponential cost is by all means an over-estimation.\\n\\nSecond, just to re-state my clarification in the previous response. We used two metrics in this work: \\n\\n1/ We use a linear cost for MODEL SIZE (storage space) reduction. In fact, this is widely adopted in many previous works such as [1, 2]. Also, it is equivalent to the \\\"representational cost\\\" in [3], as the anonymous-2 pointed out. \\n\\n2/ We use a bi-linear cost (weight-bit x activation-bit) for \\\"COMPUTATIONAL COST\\\" reduction. If the weight-bit is the same as the activation-bit, the metric is equivalent to the quadratic cost as in [3, 4]. It is also equivalent to your description of \\\"on-the-fly\\\" memory consumption. \\n\\nI agree that BOPs in [5] is a more precise bit-wise operation count by considering not only multiplications (weight-bit x activation), but also additions (weight-bit + activation-bit + constant), but that's dependent on the hardware implementation. \\n\\nAs I stated in the previous response, we used the same metric to compute the model size and computational cost reduction rate for our method and previous baselines, so the results are directly comparable. You can also convert results from other methods to the same metric as ours, or our result to other metrics for comparison. \\n\\nFinally, I want to point out that a sensible choice of metric depends on the hardware implementation. The NAS framework proposed in our paper can easily adopt different metrics for different hardwares and search for different mixed precision strategies. 
\\n\\n[1] https://arxiv.org/abs/1602.07360\\n[2] https://arxiv.org/abs/1510.00149\\n[3] Analytical Guarantees on Numerical Precision of Deep Neural Networks: http://proceedings.mlr.press/v70/sakr17a.html\\n[4] DoReFaNet: https://arxiv.org/abs/1606.06160\\n[5] https://arxiv.org/pdf/1804.10969.pdf\"}", "{\"comment\": \"Dear authors,\", \"1\": \"\\\"On-the-fly\\\" means the RUN-TIME memory consumption during the inference stage, where you need to consider the numerical precision. For example, you have 2-bit weights and 2-bit activations, a possible choice is in range {0,1,2,4}. In this 2-bit multiplication, when both numbers are 4, it outputs 4 \\u00d7 4 = 16, which is not within the range.\\n\\\"Offline\\\" means the memory space you need for saving the model on the hardware, which corresponds to the definition in the paper. The memory consumption for these two cases are different.\", \"2\": \"Actually, I just want to claim that using the \\\"linear\\\" metric in your paper is \\\"definitely\\\" not appropriate. Of course, my \\\"exponential\\\" claim is also not accurate. According to another anonymous, he provided a better metric [1].\\nAnd I also recommend another bitwise metric (BOPs) which can be referred to [2].\", \"3\": \"After all, the best way to justify your conclusion is to test your mixed-precision model on hardware platforms.\\n\\n[1]: Analytical Guarantees on Numerical Precision of Deep Neural Networks: http://proceedings.mlr.press/v70/sakr17a.html\\n[2]: Uniq: Uniform noise injection for the quantization of neural networks.\", \"https\": \"//arxiv.org/pdf/1804.10969.pdf\", \"title\": \"Clarification with respect to \\\"On-the-fly\\\" and \\\"exponential w.r.t. precisions\\\"\"}", "{\"title\": \"On-the-fly model size\", \"comment\": \"Glad to see that things got clarified. However, I don't quite understand what you mean by \\\"on-the-fly\\\" model size and why it's exponential w.r.t. precisions. Could you clarify?\"}", "{\"comment\": \"Dear authors,\\n\\nThanks for your kindly reply. I think the problem locates at the different understanding of model size. My answer is based on the \\\"on the fly\\\" model size, which means how much memory and accumulator bandwidth you need during the inference stage. However, your answer is based on the \\\"offline\\\" storage definition, which means how much memory you need for storaging the model on a hard disk. But I think the \\\"on the fly\\\" memory consumption is more meaningful and the issue becomes clear now.\", \"title\": \"Different understanding of model size\"}", "{\"title\": \"Thank you for your suggestion.\", \"comment\": \"I agree with your post that the \\\"computational cost\\\" is quadratic (if the activation and weights are quantized to the same precision) and the representational cost is linear. But the anonymous-1's post claims the cost is exponential. And your post said \\\"I agree with anonymous\\\", which actually contradicts your claim. Can you clarify?\\n\\nI think it is a norm that the term \\\"model size\\\" really just mean \\\"representational cost\\\" or the storage space of an NN model. We also defined the \\\"computational cost\\\" in the paper.\\n\\nHowever, I think your suggestion is very good and I will update the paper to better clarify these basic definitions.\"}", "{\"title\": \"Value range != model size (storage space)\", \"comment\": \"As the title suggests, value range is not equivalent to model size.\\n\\nBy model size, we mean the storage space for a model. 
It is simply computed by #parameters x bit-width. It is true that the value range for an N-bit weight is [0, 2^N-1], but it still takes N-bit to store the weight and therefore, the model size (storage space) of a layer is computed by #params x bit-width. \\n\\n> And you need to explain that the compression rate means computational cost in the paper clearly in order to avoid misunderstanding, since model size compression and computational cost reduction are two different things.\\n\\nYes, we put two experiments (on for model size, another for computational cost) results in two tables. In the experiment section and the cost function definition, we also explain the differences. We are happy to adopt suggestions how we can make this clearer. \\n\\nThanks.\"}", "{\"comment\": \"I agree with anonymous, you can check this ICML 2017 paper where computational and representational costs are defined. One is a quadratic function of precision while the other is a linear function of precision. Perhaps the authors should use this reference to clarify matters.\\n\\nAnalytical Guarantees on Numerical Precision of Deep Neural Networks - by Sakr et al. http://proceedings.mlr.press/v70/sakr17a.html\", \"title\": \"Agree with anonymous\"}", "{\"comment\": \"Dear authors,\\n\\nThanks for your response. Assume we only consider the model size compression here. And I am still a little confusing with respect to \\\"in this setting, our goal is to reduce the model size (parameter size * bit-width)\\\". According to DOREFA-NET, during inference, the value range for 4-bit weights should be [-2^4, 2^4] (only assume uniform quantization, and use XNOR bitwise operation.). Similarly, the representation range for 2-bit weights is [-2^2, 2^2]. So model size = parameter size * bit-width is not accurate. \\n\\nAnd you need to explain that the compression rate means computational cost in the paper clearly in order to avoid misunderstanding, since model size compression and computational cost reduction are two different things.\", \"title\": \"Better, but still unclear to me.\"}", "{\"title\": \"Thank you for your question, but the computational cost is not exponential with respect to the bit-width.\", \"comment\": \"Thank you for your question. In our paper, we conduct mixed precision quantization in two different settings: model size compression and computational cost reduction.\", \"model_size_compression\": \"In this setting, our goal is to reduce the model size (parameter size * bit-width), and we only quantize weights. I think there's no doubt here that the compression rate can be directly computed by 32/k.\", \"computational_cost_reduction\": \"In this setting, we quantize both weights and activations. Depending on the implementation details, if we adopt the bit-wise operation by equation (3, 4) from [1] (DoReFaNet), the computational cost is proportional to (k_w*k_a), where k_w is the weight's bit-width, and k_a is the activation's bit-width. This is exactly how we compute the computational cost (and compression rate) in our paper.\\n\\nIn addition, for all the baseline methods in our paper, we compute the reduction rate in the same manner to make sure that our results are consistent and comparable. \\n\\nHope this clarifies your question. \\n\\n[1]: https://arxiv.org/abs/1606.06160\"}", "{\"comment\": \"Dear authors and reviewers,\\n\\nThis paper proposes to use an improved version of DARTs for searching the precision for each convolutional layer. 
Since different layers have different precisions, the complexity calculation is extremely important for fair comparision.\\nBut I find the compression rate metric for $k$-bit you use in this paper is simply $32 / k$. However, this does not make sense. \\nAccording to the bitwise operations for fixed-point approaches[1, 2], the compression ratio of $k$-bit quantization is proportional to $2^k$ rather than $k$. For example, the compression rate reduction of 2-bit compared to 4-bit should be $2^4 / 2^2 = 4$ rather than $4 / 2 = 2$. I think this is a big issue and you really need to clarify this, otherwise the experimental results are not comparable at all. \\n\\n[1]: https://arxiv.org/abs/1606.06160\\n[2]:http://openaccess.thecvf.com/content_ECCV_2018/papers/Dongqing_Zhang_Optimized_Quantization_for_ECCV_2018_paper.pdf\", \"title\": \"The compression rate metric is wrong, so the experimental results are not comparable.\"}", "{\"title\": \"Response\", \"comment\": \"| Q4 is still unsatisfactory to me. I would at least like to see the two results in comparison. It seems you are scaling the gradient of one term with the second term.\\n\\nWe are happy to add some ablation studies on this in the appendix once we can update the paper. However, I think this is a minor point -- using summation or multiplication in the loss function does not make a big difference to the technical contribution of this paper.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your reply.\\n\\n| The point about \\\"bit assigned to layer\\\" vs \\\"assigning precision per layer\\\" is a minor one, but I appreciate that the approach is slightly different. \\n\\nWe believe that our approach is *fundamentally*, instead of slightly, different from [1]. In the previous response, we have listed four significant differences. At the top level, [1] proposes a smart quantization function that supports quantizing full-precision numbers to NON-INTEGER precisions. This allows [1] to \\\"softly\\\" assign each bit from a pool of bits to different layers with fixed operators. Our method, however, focuses on determining layer-wise operators, and the operators happen to be convolutions with different INTEGER precisions, but they can also be other types of operators such as max pooling. Our formulation is more general, therefore our method can be applied to general neural architecture search problems.\\n\\n| Re 4. In your paper, this is not clear AT ALL and I think it is a very important point. I had asked this question in my original review as well. How are the latent 32-bit weights handled? How do you do gradient descent e.g. on the corresponding latent weights? Are latent 32-bit weights even mentioned in the paper? I had to go to the appendix to find the quantization function. This needs to be addressed. \\n\\nThis is a good advice. we hope point-4 in the previous response clarified your question: at a given layer, each candidate operator has independent full-precision latent weight and activation. Latent weights are quantized following DoReFa-Net[2] and activations are quantized following PACT [3]. Due to the page limit, we moved this part to the appendix since we quantize weights and activations using the existing methods [2, 3]. We will add more explanations to this part to make it clearer.\"}", "{\"title\": \"reply to response\", \"comment\": \"Q4 is still unsatisfactory to me. I would at least like to see the two results in comparison. 
It seems you are scaling the gradient of one term with the second term.\\n\\nBased on the feedback, and the existing experimental deficiencies, and the remaining questions above, I am happy to increase my score.\"}", "{\"title\": \"reply to response\", \"comment\": \"The point about \\\"bit assigned to layer\\\" vs \\\"assigning precision per layer\\\" is a minor one, but I appreciate that the approach is slightly different.\\n\\nRe 4. In your paper, this is not clear AT ALL and I think it is a very important point. I had asked this question in my original review as well. How are the latent 32-bit weights handled? How do you do gradient descent e.g. on the corresponding latent weights? Are latent 32-bit weights even mentioned in the paper? I had to go to the appendix to find the quantization function. This needs to be addressed.\"}", "{\"title\": \"Reply to suggestions on more experiments\", \"comment\": \"We want to thank reviewer#4 for your review. Your summary correctly reflects the content of our paper.\", \"we_want_to_comment_on_the_suggestions_for_new_experiments\": \"\", \"comparing_with_exhaustive_search\": \"This is a good idea. However, one concern is that since the search space is combinatorial, even a shallow network (e.g., 5) with a smaller number of precisions (e.g., 32, 8, 1) can have a large search space of (e.g., 3^5 = 243 architectures) for which exhaustive search is intractable.\", \"comparing_with_darts_and_enas\": \"ENAS is not open-sourced, so a direct comparison is difficult. A more detailed analysis comparing DNAS with DARTS is discussed in the reply to reviewer#1, minor concern #2: https://openreview.net/forum?id=BJGVX3CqYm&noteId=S1lyG-h7A7\\n\\nWe plan to perform the suggested experiments of comparing with exhaustive search and DARTS. The results will be hopefully updated before the revision deadline and the camera-ready if the paper is accepted.\"}", "{\"title\": \"Thank you for your question.\", \"comment\": \"I will release precision-assignments of more architectures in the appendix. Thank you for your suggestions!\"}", "{\"title\": \"Continued discussion\", \"comment\": \"To address the reviewer's additional questions:\", \"question_1\": \"How are weights updated\\nFollowing [2,3], we use equation (14, 15) in appendix A to quantize each candidate operators' weight and activations for both super net and searched architectures. We do not treat ultra-low precision weights and activations differently. The gradient update w.r.t. full-precision weights and activations are well described in [2, 3], and we use the same approach.\", \"question_2\": \"Why alternatively train model weights and architecture parameters\\nThis ensures the operator choices do not overfit the training set and can be generalized to the validation set. This is a widely adopted technique in neural architecture search literature such as [4,5]. As we described in Appendix B, we randomly sample 80% of the training set to train the weights and 20% to train the architecture parameters.\", \"question_3\": \"Is the edge probability conditioned on the input\\nNo, the edge probability is not conditioned on the input. 
Although this is an interesting idea (dynamic neural networks conditioned on the input), it is not the scope of this paper.\", \"question_4\": \"Why loss function multiply instead of sum two components\\nOur neural architecture search problem can be seen as a multi-objective optimization problem, and we use the weighted product model to construct a loss function by multiplying the cross-entropy term with the log-cost term. This is also used in [6,7]. In our experiments, we tried summing or multiplying the two terms, and we found that multiplying works better.\", \"question_5\": \"Why not compare [1]'s result on AlexNet\\n[1]'s experiment is on AlexNet. In the NN quantization research, AlexNet is known to be redundant, and many methods can drastically quantize AlexNet without accuracy loss. Specifically, SqueezeNet [8] shows it can reduce the model size of AlexNet by 500x without accuracy loss. Therefore, we do not think quantizing AlexNet is still a good benchmark to show the effectiveness of new quantization methods. TTQ [9] shows it can quantize AlexNet weights to 2 bit (ternary) without accuracy loss, but on ResNet18, TTQ shows 3% accuracy loss, proving that quantizing ResNet is more difficult. When quantizing both activations and weights, [1]'s best result on AlexNet is 52.54% top-1 accuracy and each layer on average can have 8 bits. This accuracy is worse than the 2-bit quantization of DoReFa-Net[2] (53.6%) and PACT[3] (55.0%). As a result, comparing our method with DoReFa-Net and PACT and showing better performance is sufficient to prove the effectiveness of our method.\\n\\n[1] https://arxiv.org/pdf/1807.00942.pdf\\n[2] https://arxiv.org/abs/1606.06160\\n[3] https://arxiv.org/abs/1805.06085\\n[4] https://arxiv.org/abs/1806.09055\\n[5] https://arxiv.org/abs/1802.03268\\n[6] https://arxiv.org/pdf/1802.03494.pdf\\n[7] https://arxiv.org/pdf/1807.11626.pdf\\n[8] https://arxiv.org/abs/1602.07360\\n[9] https://arxiv.org/abs/1612.01064\"}", "{\"title\": \"Thank you for your review, but your understanding of our paper or the previous work is wrong.\", \"comment\": \"First, we would like to thank the reviewer for pointing to previous works on mixed precision quantization. We were not aware of them and are happy to acknowledge these prior works in our paper. However, we strongly disagree with the reviewer's opinion that our method is the same as, or covered by [1].\\n\\n[1] introduces an interesting technique that uses Gumbel Softmax to determine the precision allocation for each layer of a neural network. It proposes a precision allocation process to assign each bit from a \\\"bit pool\\\" to different layers of a network. For each bit, it uses Gumbel Softmax to compute a \\\"soft-allocation\\\" to determine where the bit is assigned to. The number of bit for a layer is the sum of all the bits assigned to the layer. It modifies the quantization function to allow non-integer bit quantization and uses STE to compute the gradient of the bit allocation. \\n\\nOur approach is fundamentally different from [1] in the following aspects:\\n1. Problem formulation and scope: We formulate the problem in a more general way that we support arbitrary layer-wise operator selection. Under our framework, the operator can be convolution with different precisions or any other operators such as max pooling. So our method can be applied to more general neural architecture search problems. 
In comparison, [1] only works for mixed precision quantization since the formulation of [1] is to assign a pool of bits to layers of a network.\\n2. Algorithm procedure: we conduct architecture search by training a stochastic super net to determine the layer-wise operator type. In comparison, [1] starts with a pool of bits, and assign each bit from the pool to a layer. \\n3. Gumbel Softmax: Our method lets each layer choose a different precision. The Gumbel Softmax function controls a probability that for each layer, which operator (precision) to choose. [1] allocates a bit to a different layer, and the Gumbel softmax function determines which layer should the bit be assigned to.\\n4. Performance: Each candidate operator in our super net has independent weights and activations. Our method allows the weights and activations for different precisions to have different \\\"latent\\\" full-precision values, which is the key to a good quantization performance. In [1] however, each layer has only one weight/activation and is quantized to different precisions as the training proceed. As also mentioned by the reviewer, directly mapping a higher precision weights/activations to lower precisions can lead to performance degradation. \\n\\nGiven such obvious and fundamental differences, we cannot agree that [1] covered the technical contribution of our paper.\"}", "{\"title\": \"Thank you for your review.\", \"comment\": \"We want to thank the reviewer#1 for your feedback. Your summary correctly reflects the content of our paper. We hope this rebuttal can address your concerns.\", \"major_concern\": \"Trained sampling vs random sampling\\nWe sample architectures every a few epochs, mainly because in our experiments, we want to analyze the behavior of the architecture distribution at different super net training epochs. This analysis is illustrated in figure 3 of our paper. We can see that at epoch-0, where the architecture distribution is trained for only one epoch (close to random sampling), the sampled architectures have much lower compression rate. Similarly, for epoch-9, architectures also have relatively low compression rate. In comparison, at epoch-79 and epoch-89, architectures have higher compression rates and accuracy. The difference between epoch-79 vs. epoch-89 is small since the distribution has converged. \\n\\nAs the reviewer#2 suggests, we can train the super net until the last epoch, then sample and train architectures from this distribution. Figure 3 shows that the five architectures sampled at epoch-89 are much better than the five architectures at epoch-0, which are essentially drawn from random sampling. Also, note that for CIFAR10-ResNet-110 experiments, the search space contains 7^54 = 4x10^45 possible architectures, 45 sampled architectures are tiny compared with the search space.\\n\\nReviewer #2 suggests comparing with a \\u201ccost-aware\\u201d random sampling policy. We tried a simple baseline that at each layer, we sample a conv operator with b-bit precision with probability\\n prob(precision=b) ~ 1/(1 + b)\\nThe performance of this policy is much worse since for a conv operator with precision-0 (in our notation, bit-0 denotes we skip the layer), the sampling probability is 33x higher than full-precision convolution, 2x higher than 1-bit, 3x higher than 2-bit, and so on. Architectures sampled from this distribution are extremely small but with much worse accuracy. We understand this might not be the best \\u201ccost-aware\\u201d sampling policy. 
If reviewer#1 has better suggestions, we are happy to try.\\n\\nMinor concern #1: Value of the Gumbel Softmax function\\nYes. We agree with the comments that the advantages of the Gumbel Softmax technique are two-fold:\\n 1. It makes the loss function differentiable with respect to the architecture parameter \\\\theta.\\n 2. Compared with other gradient estimation techniques such as Reinforce, Gumbel Softmax balances the variance/bias of the gradient estimation with respects to weights. \\n\\nMinor concern #2: Comparison with non-stochastic method such as DARTS\\nDARTS [1] does not really sample candidate operators during the forward pass. Outputs of candidate operators are multiplied with some coefficients and summed together. For the problem of mixed precision quantization, this can be problematic. Let's consider a simplified scenario\\n y = alpha_1 * y_1 + alpha_2 * y_2\\nLet's assume both y_1 and y_2 are in binary and are in {0, 1}. Assuming alpha_1=0.5 and alpha_2=0.25, then the possible values of y are {0, 0.25, 0.5, 0.75}, which essentially extend the effective bit-width to 2 bit. This is good for the super net's accuracy, but the performance of the super net cannot transfer to the searched architectures in which we have to pick only one operator per layer. Using our method, however, the sampling ensures that the super net only picks one operator at a time and the behavior can transfer to the searched architectures.\\n\\nMinor concern #3: Warmup training\\nWe use warmup training since in our ImageNet experiments. We observe that at the beginning of the super net training, the operators are not sufficiently trained, and their contributions to the overall accuracy are not clear, but their cost differences are always significant. As a result, the search always picks low-cost operators. To prevent this, we use warmup training to ensure all the candidate operators are sufficiently trained before we optimize architecture parameters. In our ImageNet experiments, we found that ten warmup epochs are good enough. In CIFAR-10 experiments, warmup training is not needed.\"}", "{\"title\": \"Contribution not significant; Potentially covered by prior work\", \"review\": \"The paper approaches the bit quantization problem from the perspective of neural architecture search, by treating each possible precision as a different type of neural layer. They estimate the proportion of each layer using a gumbel-softmax reparametrization. Training updates parameters and these proportions alternately.\\n\\nThe authors claim that prior work has only dealt with uniform bit precision. This is clearly false e.g.\", \"https\": \"//arxiv.org/pdf/1705.08665.pdf\\n\\nIn particular, https://arxiv.org/pdf/1807.00942.pdf uses the same approach, using gumbel-softmax to estimate the best number of bits. In the least, the authors needs to mention and contrast their approach, e.g. they can handle a budget constraint, but they use a fixed quantization function.\\n\\nThere is an inherent strength in this approach that the authors have not fully explored. The most recent key discovery in low precision networks is that the optimal parameters take very different values depending on the precision, ie beyond simple clipping/snapping based on quantization error. The DNAS approach can capture this, because the parameters of different precisions need not be constrained via a fixed quantization/activation function (appendix B). Therefore the following questions become important to understand.\\n\\n1. 
How are the weights w updated for low precision. I understand that you first sample an architecture but there is no explanation of how the low bit (e.g. 1-bit) weights are updated. Do you update the 32-bit weights, then use the functions in Appendix B to derive the low bit parameters? This is much less interesting than the power of the DNAS idea. Do you directly update them using STE?\\n2. Why is it important to train in an alternating fashion? How did you split the training set in to two for each ? Why not use a single training set?\\n3. Are the \\\"edge probabilities\\\" over different precision in any way the function of the input (image)? It seems your approach is able to distinguish \\\"easy\\\" and \\\"hard\\\" images by increasing the precision of parameters. If so, this should be explained and demonstrated. \\n4. In Eq (10), it is unusual to take the product of network performance and penalty term for parsimony. This needs to be explained vs. taking a sum of the two terms which has the nice interpretation of being the lagrangian of a constrained optimization problem. Do you treat these as instance level weights? \\n5. Experiments only show ResNet architecture, whereas prior work showed a broaded set of results. Only TTQ and ADMM is compared, where the most relevant work is https://arxiv.org/pdf/1807.00942.pdf. It is not clear if the good performance comes due to the block connectivity structure with skip connections, combined with the fact that the first and last layers are not quantized.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Inetresting approach to quantization and interesting experimental results\", \"review\": \"The authors propose a network quantization approach with adaptive per layer bit-width. The approach is based on a network architecture search (NAS) method. The authors aim to solve the NAS problem through SGD. Therefore, they propose to first reprametrize the the discrete random variable determining if an edge is computed or not to make it differentiable and then use Gumbel Softmax function as a way to effectively control the variance of the obtained unbiased estimator. This variance can indeed make the convergence of the procedure hard. The procedure is then adapted to the problem of network quantization with different band-widths.\\n\\nThe proposed approach is interesting. The differerentiable NAS procedure is particularly important and can have an important impact. 
The idea of having an adaptive per layer precision is also well motivated, and shows competitive (if not better) results empirically.\", \"some_additional_experiments_can_make_the_paper_stronger\": [\"Compare the result of the procedure to an exhaustive search in a setting where the latter is feasible (shallow architecture on an easy task with few possible bit widths)\", \"Compare the procedure to other state of the art NAS procedures (DARTS and ENAS) with the same search space adapted to the quantization problem, to empirically show that the proposed procedure is a compromise between these two methods as claimed by the authors.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Neural Architecture Search Approach to Network Quantization\", \"review\": \"In this work the authors introduce a new method for neural architecture search (NAS) and use it in the context of network compression. Specifically, the NAS method is used to select the precision quantization of the weights at each layer of the neural network. Briefly, this is done by first defining a super network, which is a DAG where for each pair of nodes, the output node is the linear combination of the outputs of all possible operations (i.e., layers with different precision quantizations). Following [1], the weights of the linear combination are regarded as the probabilities of having certain operations (i.e., precision quantization), which allows for learning a probability distribution over the considered operations. Differently from [1], however, the authors bridge the soft sampling in [1] (where all operations are considered together but weighted accordingly to the corresponding probabilities) to a hard sampling (where a single operation is considered with the corresponding probability) through an annealing procedure based on the Gumbel Softmax technique. Through the proposed NAS algorithm, one can learn a probability distribution on the operations by minimizing a loss that accounts for both accuracy and model size. The final output of this search phase is a set of sampled architectures (containing a single operation at each connection between nodes), which are then retrained from scratch. In applications to CIFAR-10 and ImageNet, the authors achieve (and sometime surpass) state-of-the-art performance in model compression.\\n\\nThe two contributions of this work are\\n1)\\tA new approach to weight quantization using principles of NAS that is novel and promising;\\n2)\\tNew insights/technical improvements in the broader field of NAS. While the utility of the method in the more general context of NAS has not been shown, this work will likely be of interest to the NAS community.\\n\\nI only have one major concern. The architectures are sampled from the learnt probability distribution every certain number of epochs while training the supernet. Why? If we are learning the distribution, would not it make sense to sample all architectures only after training the supernet at our best?\\nThis reasoning leads me to a second question. In the CIFAR-10 experiments, the authors sample 5 architecture every 10 epochs, which means 45 architectures (90 epochs were considered). 
This is a lot of architectures, which makes me wonder: how would a \\u201ccost-aware\\u201d random sampling perform with the same number of sampled architectures?\\n\\nAlso, I have some more questions/minor concerns:\\n\\n1)\\tThe authors say that the expectation of the loss function is not directly differentiable with respect to the architecture parameters because of the discrete random variable. For this reason, they introduce a Gumbel Softmax technique, which makes the mask soft, and thus the loss becomes differentiable with respect to the architecture parameters. However, subsequently in the manuscript, they write that Eq 6 provides an unbiased estimate for the gradients. Do they here refer to the gradients with respect to the weights ONLY? Could we say that the advantage of the Gumbel Softmax technique is two-fold? i) make the loss differentiable with respect to the arch parameters; ii) reduce the variance of the estimate of the loss gradients with respect to the network weights.\\n\\n2)\\tCan the author discuss why the soft sampling procedure in [1] is not enough? I have an intuitive understanding of this, but I think this should be clearly discussed in the manuscript as this is a central aspect of the paper.\\n\\n3)\\tThe authors use a certain number of warmup steps to train the network weights without updating the architecture parameters to ensure that \\u201cthe weights are sufficiently trained\\u201d. Can the authors discuss the choice on the number of warmup epochs?\\n\\nI gave this paper a 5, but I am overall supportive. Happy to change my score if the authors can address my major concern.\\n\\n[1] Liu H, Simonyan K, Yang Y. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. 2018 Jun 24.\\n\\n-----------------------------------------------------------\\nPost-Rebuttal\\n---------------------------------------------------------\\nThe authors have fully addressed my concerns. I changed the rating to a 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review.\\n\\nYour summary correctly and comprehensively reflects the gist of our paper. One minor correction we would like to make is that our experiments are not only conducted on the Cifar-10 dataset. On ImageNet dataset, we were able to compress ResNet models with no or little accuracy loss, but reduce the model size by up to 21.1x and computational cost by up to 103.5x, better than previous baselines. \\n\\nPlease let us know if you have further questions or concerns that we can help clarify.\"}", "{\"title\": \"An interesting topic with promising experiment results.\", \"review\": \"This paper presents a new approach in network quantization. The key insights of this paper is quantizing different layers with different bit-widths, instead of using fixed 32-bit width for all layer weights and activation in previous works. At the same time, this paper adopted the idea form both DARTS and ENAS with parameter sharing, and introduces a new differentiable neural architecture search framework. As the authors proposed, this DNAS framework is able to search efficiently and effective through a large search space. As demonstrated in the Experiment section of the paper, it achieves better validation accuracy than ResNet with much smaller model size and lower computational cost.\\n\\n1. 
An improved gradient method in updating the network architecture and parameters compared to DARTS and ENAS. It applies the Gumbel softmax to refine the sub-graph structure without training the entire super-net through the whole process. The work is able to obtain the same level of validation accuracy on Cifar-10 as ResNet while reduce the model parameters by a large margin. \\n2. The work is in the middle ground of two previous works: ENAS by Pham et al. (2018) and DARTS by Liu et al. (2018). However, there is no comparison with ENAS and DARTS in experiments. ENAS samples child networks from the super net to be trained independently while DARTS trains the entire super net together without decoupling child networks from the super net. By using Gumbel Softmax with an annealing temperature, The proposed DNAS pipeline behaves more like DARTS at the beginning of the search and behaves more like ENAS at the end.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
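For readers of the thread above: the method under discussion selects a bit-width per layer with a Gumbel-Softmax over candidate operators and couples task accuracy with a cost term (model size = #parameters x bit-width; compute cost ~ weight-bit x activation-bit). Below is a minimal, self-contained sketch of that selection step only — it is not the authors' code; the candidate bit-widths, layer sizes, temperature `tau`, exponent `beta`, and the product form of the loss are illustrative assumptions taken from the comments rather than from any released implementation.

```python
# Illustrative sketch of per-layer precision selection via Gumbel-Softmax,
# with an expected model-size cost term. Hypothetical values throughout.
import numpy as np

rng = np.random.default_rng(0)

CANDIDATE_BITS = np.array([0, 1, 2, 4, 8, 32], dtype=float)      # 0 = skip the layer, per the rebuttal
PARAMS_PER_LAYER = np.array([4608, 73728, 294912], dtype=float)  # hypothetical #weights per layer

def gumbel_softmax(logits, tau):
    """Soft, differentiable (approximately one-hot) sample over candidate operators."""
    u = rng.uniform(size=logits.shape)
    g = -np.log(-np.log(u))                        # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))  # numerically stable softmax
    return y / y.sum(axis=-1, keepdims=True)

# One vector of architecture logits per layer, over the candidate bit-widths.
theta = rng.normal(size=(len(PARAMS_PER_LAYER), len(CANDIDATE_BITS)))

def expected_model_size_bits(theta, tau=1.0):
    probs = gumbel_softmax(theta, tau)              # shape: (layers, candidates)
    bits = probs @ CANDIDATE_BITS                   # expected bit-width per layer
    return float(np.sum(PARAMS_PER_LAYER * bits))   # model size = sum(#params * bit-width)

def search_loss(cross_entropy, theta, beta=0.5, tau=1.0):
    # Product coupling of task loss and log-cost, as described in the rebuttal;
    # the exact form and constants in the paper may differ.
    return cross_entropy * np.log(expected_model_size_bits(theta, tau)) ** beta

print(expected_model_size_bits(theta))
print(search_loss(cross_entropy=1.2, theta=theta))
```

A hard (straight-through) variant of the same sampler picks exactly one operator per layer at search time while gradients still flow to the logits `theta`, which corresponds to the ENAS-like end of the annealing schedule the reviewers describe.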
S1eEmn05tQ
Uncertainty in Multitask Transfer Learning
[ "Alexandre Lacoste", "Boris Oreshkin", "Wonchang Chung", "Thomas Boquet", "Negar Rostamzadeh", "David Krueger" ]
Using variational Bayes neural networks, we develop an algorithm capable of accumulating knowledge into a prior from multiple different tasks. This results in a rich prior capable of few-shot learning on new tasks. The posterior can go beyond the mean field approximation and yields good uncertainty on the performed experiments. Analysis on toy tasks shows that it can learn from significantly different tasks while finding similarities among them. Experiments on Mini-Imagenet reach state of the art with 74.5% accuracy on 5-shot learning. Finally, we provide two new benchmarks, each showing a failure mode of existing meta-learning algorithms such as MAML and Prototypical Networks.
[ "Multi Task", "Transfer Learning", "Hierarchical Bayes", "Variational Bayes", "Meta Learning", "Few Shot learning" ]
https://openreview.net/pdf?id=S1eEmn05tQ
https://openreview.net/forum?id=S1eEmn05tQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rygnSaeNx4", "HkxmQr9ACm", "HJxoLQm5AQ", "BylIl6z5C7", "HygDbeg50X", "HJlrvD5Gp7", "H1eeyvnq2Q", "SklZMRiEjQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544977732476, 1543574811451, 1543283538704, 1543281901965, 1543270399237, 1541740381344, 1541224152290, 1539780105467 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1348/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1348/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1348/Authors" ], [ "ICLR.cc/2019/Conference/Paper1348/Authors" ], [ "ICLR.cc/2019/Conference/Paper1348/Authors" ], [ "ICLR.cc/2019/Conference/Paper1348/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1348/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1348/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper presents a meta-learning approach which relies on a learned prior over neural networks for different tasks.\\n\\nThe reviewers found this work to be well-motivated and timely. While there are some concerns regarding experiments, the results in the miniImageNet one seem to have impressed some reviewers. \\n\\nHowever, all reviewers found the presentation to be inaccurate in more than one points. R1 points out to \\\"issues with presentation\\\" for the hierarchical Bayes motivation, R2 mentions that the motivation and derivation in Section 2 is \\\"misleading\\\" and R3 talks about \\\"short presentation shortcomings\\\".\\n\\nR3 also raises important concerns about correctness of the derivation. The authors have replied to the correctness critique by explaining that the paper has been proofread by strong mathematicians, however they do not specifically rebut R3's points. The authors requested R3 to more specifically point to the location of the error, however it seems that R3 had already explained in a very detailed manner the source of the concern, including detailed equations.\\n \\nThere have been other raised issues, such as concerns about experimental evaluation. However, the reviewers' almost complete agreement in the presentation issue is a clear signal that this paper needs to be substantially re-worked.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Presentation shortcomings\"}", "{\"title\": \"Thanks for the comment\", \"comment\": \"Following and the other reviewers I would indeed suggest the whole rewrite of section 2.2 and not talking about Hierarchical Bayes, but rather the VI framework that you actually use in practice. I do feel this is currently a major issue and needs significant addressing to make the paper publication ready.\\n\\n\\\"Being principled and sound should not be considered a negative aspect.\\\" - I might have not explained my point well here, but it is exactly for the fact that you are talking about a hierarchical bayes over the weights of the network, which is not what you end up using. This is my main critisism, otherwise I agree principled motivation is not to be discouraged, even if you do approximate versions of it later.\\n\\nThanks for clarifying the rolve of the latent variable v.\"}", "{\"title\": \"Addressing Reviewer's Concerns\", \"comment\": \"We thank the reviewer for the constructive comments.\", \"in_response_to\": \"\\u201c... 
with a further latent variable \\\"v\\\", which is unclear fundamentally why you need it as it can be subsumed inside \\\"z\\\"...\\u201d\\n\\n$z$ is a latent variable representing the whole task and $v$ encodes a given image instance. They do not contain the same information. While the notion of representation learning is powerful, the experiments on synbols shows that it lacks adaptability. Instead, we learn a \\u201cfamily\\u201d of representation conditioned on $z$ and we adapt the representation for the given task.\"}", "{\"title\": \"Addressing Reviewer's Concerns\", \"comment\": \"We thank the reviewer for the constructive comments.\", \"in_response_to\": \"\\u201cMinor points\\u201d\\nCorrections were added to the document.\"}", "{\"title\": \"Addressing Reviewers Concerns\", \"comment\": \"This review contains false accusations and strong opinions. We kindly ask the reviewer to stay factual for the rest of this discussion.\\n\\nThis paper was proofread by strong mathematicians. All of whom agreed that the derivation in section 2.2 is sound. If the reviewer believes that there is a mathematical error please be more precise on its location.\\n\\nWe agree that some of the phrasing in the paper may bring confusion. What appears to be at the center of the confusion is the difference between a prior over models and a prior over weights of a neural network. After rephrasing some sentences, we made clear that the resulting algorithms learn a prior over models and not a prior over the weights of a network. More specifically, we modified the introduction of Section 2 as follow:\\n\\nBy leveraging the variational Bayesian approach, we show how we can learn a prior over models with neural networks. We start our analysis with the goal of learning a prior $p(w|\\\\alpha)$ over the weights $w$ of neural networks across multiple tasks. We then provide a reduction of the Evidence Lower BOund (ELBO) showing that it is not necessary to explicitly model a distribution in the very high dimension of the weight space of neural networks. Instead the algorithm learns a subspace suitable for expressing model uncertainty within the distributions of tasks considered in the multi-task environment.\", \"anonreviewer3_wrote\": \"\\u201c...which is exactly why it is a valid research avenue to begin with that should not be trivially subsumed by work such as this.\\u201d\\n\\nWe do not seek to undermine the research on posterior over weight uncertainty. Our result only addresses model uncertainty in multi-task environments. Also, any progress on better posterior over weight uncertainty can be applied over $\\\\alpha$, the weights of the \\u201cmain\\u201d network. The updated version is more clear about that. \\n\\nAn extensive review of multi-task learning, hierarchical Bayes, meta-learning, few-shot learning and weight uncertainty goes beyond the scope of this paper. We did a significant effort to cover these domains with two pages of citations. The new version now includes all recommended citations and more.\"}", "{\"title\": \"The work proposes a variational approach to meta-learning that employs latent variables corresponding to task-specific datasets, but is presented in a misleading and imprecise manner. 
The experimental improvements are not well-motivated by the methodology introduced in the paper.\", \"review\": [\"Strengths:\", \"A variational approach to meta-learning is timely in light of recent approaches to solving meta-learning problems using a probabilistic framework.\", \"The experimental result on a standard meta-learning benchmark, miniImageNet, is a significant improvement.\"], \"weaknesses\": [\"The paper is motivated in a confusing manner and neglects to thoroughly review the literature on weight uncertainty in neural networks.\", \"The SotA result miniImageNet is the result of a bag-of-tricks approach that is not well motivated by the main methodology of the paper in Section 2.\"], \"major_points\": [\"The motivation for and derivation of the approach in Section 2 is misleading, as the resulting algorithm does not model uncertainty over the weights of a neural network, but instead a latent code z corresponding to the task data S. Moreover, the approach is not fully Bayesian as a point estimate of the hyperparameter \\\\alpha is computed; instead, the approach is more similar to empirical Bayes. The submission needs significant rewriting to clarify these issues. I also suggest more thoroughly reviewing work on explicit weight uncertainty (e.g., https://arxiv.org/abs/1505.05424 , http://proceedings.mlr.press/v54/sun17b.html , https://arxiv.org/abs/1712.02390 ).\", \"Section 3, which motivates a combination of the variational approach and prototypical networks, is quite out-of-place and unmotivated from a probabilistic perspective. The motivation is deferred to Section 5 but this makes Section 3 quite unreadable. Why was this extraneous component introduced, besides as a way to bump performance on miniImageNet?\", \"The model for the sinusoidal data seems heavily overparameterized (12 layers * 128 units), and the model for the miniImageNet experiment (a ResNet) has significantly more parameters than models used in Prototypical Networks and MAML.\", \"The training and test set sampling procedure yields a different dataset than the one used in e.g., MAML or Prototypical Networks. Did the authors reproduce the results reported in Table 1 using their dataset?\"], \"minor_points\": [\"abstract: \\\"variational Bayes neural networks\\\" -> variational Bayesian neural networks, but also this mixes an inference procedure with just being Bayesian\", \"pg. 1: \\\"but an RBF kernel constitute a prior that is too generic for many tasks\\\" give some details as to why?\", \"pg. 2: \\\"we extend to three level of hierarchies and obtain a model more suited for classification\\\" This is not clear.\", \"pg. 2: \\\" variational Bayes approach\\\" -> variational Bayesian approach OR approach of variational Bayes\", \"pg. 2: \\\"scalable algorithm, which we refer to as deep prior\\\" This phrasing is strange to me. A prior is an object, not an algorithm, and moreover, the word \\\"deep\\\" is overloaded in this setting.\", \"pg. 3: \\\"the normalization factor implied by the \\\"\\u221d\\\" sign is still intractable.\\\" This is not good technical presentation.\", \"pg. 3: \\\"we use a single IAF for all tasks and we condition on an additional task specific context cj\\\" It might be nice to explore or mention that sharing parameters might be helpful in the multitask setting...\", \"Section 2.4 describes Robbins & Munro style estimation. 
Why call this the \\\"mini-batch\\\" principle?\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"The authors propose a generative model for multitask learning using task-specific latent variables. Unfortunately, the paper has strong technical and presentational shortcomings.\", \"review\": \"The authors state that their goal with this paper is manifold:\\nThey want to learn a prior over neural networks for multiple tasks. The posterior should go beyond mean field inference and yield good results. The authors claim in their paper that they learn an 'expressive transferable prior over the weights of a network' for multi-task settings, which they denote with the unfortunate term 'deep prior'.\\n\\nIn sec. 2.1 the authors introduce the idea of a hierarchical probabilistic model of weights for a neural network p(W|a) conditioned on task latent variables p(a). They realize that one might want to generate those weights with a function which conditions on variable \\\"z\\\" and has parameters \\\"a\\\". They continue their argument in Sec 2.2 that since the weight scoring can be canceled out in the ELBO, the score of the model does not depend on weights \\\"w\\\" explicitly anymore.\\nThis, of course, is wrong, since the likelihood term in the ELBO still is an expectation over the posterior of q(w|z)q(z). \\nHowever, the authors also realize this and continue their argumentation as follows:\\nIn this case -according to the authors- one may drop the entire idea about learning distributions over weights entirely.\", \"the_math_says\": \"p(y|x ; a) = int_z p(z) int_w p(w|z ; a) p(y|x, w)dw dz.\\nSo the authors claim that a model p(y|x, z) which only conditions on 'z' is the same as the full Bayesian Model with marginalized weights. They then suggest to just use any neural network with parameters \\\"a\\\" to model this p(y|x, z ;a) directly with z being used as an auxiliary input variable to the network with parameters \\\"a\\\" and claim this is doing the same. This is of course utterly misleading, as the parameter \\\"a\\\" in the original model indicated a model mapping a low dimensional latent variable to weights, but now a maps to a neural network mapping a latent variable and an input vector x to an output vector y. As such, these quantities are different and the argument does not hold. Also a point estimate of said mapping will not be comparable to the marginalized p(y|x).\\n\\nWhat is more concerning is that the authors claim this procedure is equivalent to learning a distribution over weights and call the whole thing a deep prior, while this paper contains no work on trying to perform the hard task of successfully parametrizing a high-dimensional conditional distribution over weights p(w|z) (apart from a trivial experiment generating all of them at once from a neural network for a single layer in a failed experiment) but claims to succeed in doing so by circumventing it entirely. \\n\\nIn their experiments, the authors also do not actually successfully try to really learn a full distribution over the weights of a neural network. 
This alone suffices to realize that the paper appears to be purposefully positioned in a highly misleading way and makes claims about weight priors that are superficially discussed in various sections but never actually executed on properly in the paper.\\nThis is a disservice to the hard work many recent and older papers are doing in actually trying to derive structured hierarchical weight distributions for deep networks, which this paper claims is a problem they find to be 'high dimensional and noisy', which is exactly why it is a valid research avenue to begin with that should not be trivially subsumed by work such as this.\\n\\nWhen reducing this paper to the actual components it provides, it is a simple object: A deterministic neural network with an auxiliary, task-dependent latent variable which provides extra inputs to model conditional densities.\\nSuch ideas have been around for a while and the authors do not do a good job of surveying the landscape of such networks with additional stochastic input variables.\\nOne example is \\\"Learning Stochastic Feedforward Neural Networks\\\" by Tang and Salakhutdinov, NIPS 2013, a more recent one is \\\"Uncertainty Decomposition in Bayesian Neural Networks with Latent Variables\\\" by Depeweg et al 2017.\\nAn obvious recent example of multi-task/meta/continual learning comparators would be \\\"VARIATIONAL CONTINUAL LEARNING\\\" by Nguyen et al. and other work from the Cambridge group that deals with multi-task and meta-learning and priors for neural networks.\\n\\nAnother weakness of the paper is that the main driver of success in the paper's experiment regarding classification is the prototypical network idea, rather than anything else regarding weight uncertainty which seems entirely disentangled from the core theoretical statements of the paper.\\n\\nAll in all, I find this paper unacceptably phrased with promises it simply does not even attempt to keep and a misleading technical section that would distort the machine learning literature without actually contributing to a solution to the technical problems it claims to tackle (in relation to modeling weight uncertainty/priors on NN). Paired with the apparent disinterest of the authors to cite recent and older literature executing strongly related underlying ideas combining neural networks with auxiliary latent variables, I can only recommend that the authors significantly change the writing and the attribution of ideas in this paper for a potential next submission focusing on multi-task learning and clarify and align the core ideas in the theory sections and the experiment sections.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Method that seem to work in practice, but needs better comparison and has issues with presentation\", \"review\": \"The paper presents a method for training a probabilistic model for Multitask Transfer Learning. The key idea is to introduce a latent variable \\\"z\\\" per task which to capture the commonality in the task instances. Since this leads to an intractable likelihood the authors use the standard ELBO with a Variational Distribution over \\\"z\\\" defined as a Gaussian + Inverse Autoregressive Flow. 
For classification, the authors also show that they can combine the model with the main idea in Prototypical Networks.\\n\\nThe experiments evaluate on three different task, the comparison against MAML on the toy problem is quite interesting. However, the results on the Mini-Imagenet suggest that the main contributors to the better performance are the Prototypical Networks idea and the improved ResNet. Additionally, the authors compare against MAML only on the toy task and not on their synthetic dataset. I think that the experiments need better comparisons (there have been published an improved version of MAML, or even just add results from your own implementation of MAML with the same ResNet on the 3rd task as well). \\n\\nA major issue is that the model presented is not really a Hierarchical Bayesian model as being strongly presented. It is much more a practical variational algorithm, which is not bad by no means, but I find its \\\"interpretation\\\" as a Hierarchical Bayesian method as totally unnecessary and making the paper significantly harder to read and follow than it needs to be. This is true for both the base model and the model + ProtoNet. I think that the manuscript itself requires more work as well as a better comparison of the method to baseline algorithms.\\n\\n\\nSection 2.2:\\n\\nThe authors start by introducing a \\\"Hierarchical Bayes\\\" model over the parameters of a Neural Network for multi-task learning. By defining the model parameters to be an implicit function of some low-dimensional noise and the hyper-parameter they shift the inference to the noise variable \\\"z\\\". One issue, which I won't discuss further, is that this defines a degenerate distribution over the parameters (a fact well known in the GAN literature), which seem counter-intuitive to call \\\"Bayesian\\\". Later, since the parameters \\\"w\\\" has vanished from the equation the authors conclude that now they can change the whole graphical models such that there is actually no distribution over the parameters of a Neural Network, while the hyper-parameter IS now the parameters of a Neural Network and the latent variable is an input to it. Mathematically, the transformation is valid, however, this no longer corresponds to the original graphical model that was described earlier. The procedure described here is essentially a Variational Model with latent variable \\\"z\\\" for each task and the method performs a MAP estimation of the parameters of the Generative Model by doing Variational Inference (VAE to be exact) on the latent \\\"z\\\". There is nothing bad about this model, however, the whole point of using a \\\"Hierarchical Bayes\\\" for the parameters of the Network serves no purpose and is significantly different to the actual model that is proposed. \\n\\nIn section 2, the prior term p(a) in equation 7 and Algorithm 1 is missing.\", \"section_3\": \"The authors argue that they add yet another level of hierarchy in the Graphical Model with a further latent variable \\\"v\\\", which is unclear fundamentally why you need it as it can be subsumed inside \\\"z\\\" (from a probabilistic modelling perspective they play similar roles). Additionally, they either do not include a prior or on \\\"v\\\" or there is a mistake in the equation for p(S|z) at the bottom of page 4. 
The main motivation for this comes from the literature: for instance, if we have a linear regression and \"v\" represents the weights of the last linear layer with a Gaussian prior, then the posterior over \"v\" has an analytical form. After this whole introduction of the special latent variable \"v\", the authors actually use the idea from Prototypical Networks. They introduce a valid leave-one-out procedure for training. However, the connection to the latent variable \"v\", which was argued to be the third level of a Hierarchical Bayes model, is now lost, as the context c_k is no longer a separate latent variable (it has no prior, and in the original Prototypical Networks, although the idea can be interpreted in a probabilistic framework, it is never presented as Hierarchical Bayes).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HkgEQnRqYQ
RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space
[ "Zhiqing Sun", "Zhi-Hong Deng", "Jian-Yun Nie", "Jian Tang" ]
We study the problem of learning representations of entities and relations in knowledge graphs for predicting missing links. The success of such a task heavily relies on the ability of modeling and inferring the patterns of (or between) the relations. In this paper, we present a new approach for knowledge graph embedding called RotatE, which is able to model and infer various relation patterns including: symmetry/antisymmetry, inversion, and composition. Specifically, the RotatE model defines each relation as a rotation from the source entity to the target entity in the complex vector space. In addition, we propose a novel self-adversarial negative sampling technique for efficiently and effectively training the RotatE model. Experimental results on multiple benchmark knowledge graphs show that the proposed RotatE model is not only scalable, but also able to infer and model various relation patterns and significantly outperform existing state-of-the-art models for link prediction.
[ "knowledge graph embedding", "knowledge graph completion", "adversarial sampling" ]
https://openreview.net/pdf?id=HkgEQnRqYQ
https://openreview.net/forum?id=HkgEQnRqYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "kRNjhwAE51", "KvxKo-6Xmqz", "HJxJOMP-jr", "SkeaA0NWsr", "HJxiR1c9TV", "SJlXT2Yc64", "BkewKmXwH4", "r1eMUblYGN", "HyxmHhCOMV", "S1lX0Hps-V", "B1lmFrmcbV", "ryxb31UJZ4", "HkgBAAN1ZV", "ryxJBxM5xE", "H1g1OxZ5l4", "HJeMTnkBe4", "HyeIocuQxN", "rkxa4KkyJ4", "rJg9ItjCCm", "ByxtN4yRR7", "rkgm5j1aAQ", "Sye6N04n0m", "B1gmLa420X", "SJluiXkhRm", "Hkxa6j_oAQ", "H1lMravo07", "SJxD-H5t0m", "r1xF475K0m", "Bkxn0G5YR7", "Skl2_GqtAm", "BJx75b5FCX", "rkgeGrg_Tm", "HJlFFR7167", "HJlYlIhn2X", "HkxYw_tn3Q", "HyekBjPinm", "HJlUq8jq3X", "HJxnXzWVnX", "HklyVUAX2m", "H1e_OITis7", "rkerutP2tQ" ], "note_type": [ "comment", "comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment", "comment", "comment", "comment", "comment", "comment", "comment", "comment", "meta_review", "comment", "comment", "comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "official_review", "comment", "official_comment", "official_review", "comment", "official_comment", "official_review", "comment" ], "note_created": [ 1665890683340, 1652537473988, 1573118567126, 1573109460989, 1559039955373, 1559039162604, 1550427006958, 1547399497555, 1547394106949, 1546536394819, 1546429818775, 1545719720935, 1545715405402, 1545375799205, 1545371750615, 1545039034022, 1544944286294, 1543596341333, 1543579985995, 1543529520887, 1543465866819, 1543421493265, 1543421258559, 1543398304292, 1543371716709, 1543367994309, 1543247103132, 1543246641507, 1543246547797, 1543246451697, 1543246219225, 1542092039641, 1541516929051, 1541354993060, 1541343328962, 1541270327008, 1541219982109, 1540784676455, 1540773414536, 1540245104399, 1538189677281 ], "note_signatures": [ [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "~Rajiv_Teja_Nagipogu1" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "~Apoorv_Umang_Saxena1" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1347/Area_Chair1" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "ICLR.cc/2019/Conference/Paper1347/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1347/Area_Chair1" ], [ "~Dai_Quoc_Nguyen1" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1347/AnonReviewer3" ], [ "~Dai_Quoc_Nguyen1" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "ICLR.cc/2019/Conference/Paper1347/AnonReviewer1" ], [ "~Dai_Quoc_Nguyen1" ], [ "ICLR.cc/2019/Conference/Paper1347/Authors" ], [ "ICLR.cc/2019/Conference/Paper1347/AnonReviewer2" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"comment\": \"The idea you think may be the same with the following paper, which is slightly earlier than this paper.\\nLei, J., Ouyang, D., & Liu, Y. (2019). 
Adversarial Knowledge Representation Learning Without External Model. IEEE Access, 7, 3512-3524.\nhttps://ieeexplore.ieee.org/document/8599182\n\nA slight difference between that paper's self-adaptive sampling and the self-adversarial sampling here is that a fixed number of negative samples (n = 64/128/256/1024...) is exploited for each positive case, instead of one negative case per positive case drawn from a fixed number of randomly selected candidates (Ns = 20).\nUniform negative sampling is cheap, but negative sampling from the set of all entities by argmax-loss is expensive. It may work but may not be necessary in training.\", \"title\": \"Negative sampling from the set of all entities by argmax-loss is expensive\"}", "{\"comment\": \"I think self-adversarial negative sampling is the same as KBGAN, but instead of two models it is just one model with a specific way to draw the negative samples.\n\nAs I understand it: 1) negative samples are drawn from the set of all entities according to the softmax function over all entities, with an adversarial temperature to adjust the sampling so that it is neither uniform nor argmax (is this right?)\n2) the negative sample with the highest probability is then taken and used in the loss function?\n\nAlso, in the function, why h_i in the numerator and then h_j in the denominator: exp f_r(h_i, t_i) / sum_j exp f_r(h_j, t_j)?\", \"title\": \"More explanation of the self-adversarial negative sampling step\"}", "{\"title\": \"This is an implementation choice for the modulus constraint on the relation embeddings\", \"comment\": \"Hi Rajiv,\n\nThis is an implementation choice for the modulus constraint on the relation embeddings. In this repository, we use these real-valued vectors to represent the phases of the relation embeddings, while using doubled real-valued vectors to represent the complex-valued embeddings.\n\nA relevant discussion can be found at https://github.com/DeepGraphLearning/KnowledgeGraphEmbedding/issues/7\"}", "{\"comment\": \"I am relatively new to the KG embedding space, so forgive me if this is something trivial.\n\nI ran the code specified in the paper, with the same parameters as the README. I can see that the -de option is specified but not -dr, which makes the dimensions of the entity and relation embeddings different. However, you are computing the tail as the Hadamard product of these embeddings, which requires them to have the same dimensions. What am I missing? Thanks.
Good work!\", \"title\": \"Why are the dimensions of entity embedding and relation embedding different for RotatE in the code?\"}", "{\"title\": \"Their embedding dimensions are different\", \"comment\": \"````\\\"\\\"\\nWe re-implement a 50-dimension TransE model with the margin-based ranking criterion that was used in (Cai & Wang, 2017), and evaluate its performance on FB15k-237, WN18RR and WN18 with self-adversarial negative sampling.\\n\\\"\\\"\"}", "{\"comment\": \"In table 7: self-adversarial, FB15K-237, H@10 is 0.465\", \"in_table_8\": \"Transe, FB15K-237, H@10 is 0.531\\n\\nShouldn't they be same?\", \"title\": \"Why are these 2 numbers different?\"}", "{\"title\": \"Yes, RotatE can perfectly fit the training set\", \"comment\": \"Here is our results on FB15k training set:\\n\\nTask Prediction Head (MRR) Prediction Tail (MRR) MRR\\nRelation Category 1-to-1 1-to-N N-to-1 N-to-N 1-to-1 1-to-N N-to-1 N-to-N Overall\\nRotatE 0.998 1.000 0.969 0.999 0.998 0.961 1.000 0.999 0.995\"}", "{\"comment\": \"I would like to recommend the original GAN paper [1] to resolve your misunderstanding on adversarial fake sample generation. Sampling (equivalent to sample from a fake \\\"distribution\\\" here, not only the most misleading sample) is the key to the success in GAN.\\n\\nIf the generator only generates the most misleading sample, it's easy to see that the Global Optimality is not \\n\\np_g = p_data. \\n\\nBecause the optimal discriminator D now is \\n\\nD\\u2217_G(x) = 0 if (p_g(x) >= p_g(*)) else 1\\n\\n[1] Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y. Generative adversarial nets. NIPS 2014\", \"title\": \"Recommending a paper\"}", "{\"comment\": \"I think you may have misread or misunderstood the above comment and the Wieting et al. paper. Wieting et al.'s method is self-adversarial negative sampling. It seems to me that their method is more advanced because they identified and addressed the false negative problem. I would be happy to see a constructive comparison.\\n\\nOn the other hand, like other comments said, reported results are buried under many optimization techniques and tunings. As a fellow researcher I would like to see more direct results.\", \"title\": \"Acknowledging and comparison with related work\"}", "{\"comment\": \"Because Wieting et al. only used the most difficult example, which is highly possible to be false-negative. In contrast, KBGAN and self-adversarial sampling do not seem to suffer from the false-negative problem.\", \"title\": \"Then this is why negative examples in Wieting et al. are not effective\"}", "{\"comment\": \"Sampling and selecting examples are equivalent terms here.\\n\\nThe method to choose $t_1$ is essentially \\\"self-adversarial\\\" negative sampling. Notice that by $t_1 = argmax cos(g(x_1), g(t))$, $t_1$ is the most difficult example regarding the model itself. Wieting et al. went even further with addressing the false-negative problem.\\n\\nThe current work should focus on showing how they apply it effectively.\", \"title\": \"Wieting et al. do self-adversarial negative sampling\"}", "{\"comment\": \"I don't think the \\\"SELECTING NEGATIVE EXAMPLES\\\" process in [1] is sampling.\\n\\n'''\\nTo select t1 and t2 in Eq. 1, we tune the choice between two approaches. The first, MAX, simply chooses the most similar phrase in some set of phrases (other than those in the given phrase pair). 
For simplicity and to reduce the number of tunable parameters, we use the mini-batch for this set, but it could be a separate set. Formally, MAX corresponds to choosing t1 for a given hx1, x2i as follows:\\nt1 = argmax cos(g(x1), g(t))\\nwhere Xb \\u2286 X is the current mini-batch. That is, we want to choose a negative example ti that is similar to xi according to the current model parameters. The downside of this approach is that we may occasionally choose a phrase ti that is actually a true paraphrase of xi.\\n'''\\n\\nTo me, self-adversarial sampling is like a self-adversarial variant of KBGAN [2] and seems more elegant than [1].\\n\\n[1] Wieting, J., Bansal, M., Gimpel, K., & Livescu, K. (2015). Towards universal paraphrastic sentence embeddings. arXiv preprint arXiv:1511.08198. ICLR '16.\\n[2] Liwei Cai, & William Yang Wang. (2017). KBGAN: Adversarial Learning for Knowledge Graph Embeddings. NAACL '18\", \"title\": \"Do you think [1] is sampling?\"}", "{\"comment\": \"I think the performance of RotatE without self-adversarial sampling should be extensively reported.\\nAlso, there is no convincing explanation as to why self-adversarial training helps RotatE. If we do extensive hyper-parameter search on self-adversarial sampling, maybe ComplEx can be better. Self-adversarial sampling is really a complementary method that is not tailored to RotatE.\", \"title\": \"Then, RotatE without self-adversarial training is the real contribution.\"}", "{\"comment\": \"I am concerned with the style of presentation in this paper.\\n\\n1. \\\"rotation in complex plane\\\": in knowledge graph embedding community, the ComplEx model [1] is very well established. It involves complex number product, which is \\\"rotation in complex plane\\\". The authors failed to compare to existing work.\\n\\n2. \\\"self-adversarial negative sampling\\\": this technique was used at least in 2016 to train sentence embedding [2]. The authors also failed to compare to existing work.\\n\\n3. Reporting of result: for fair comparison to future work (and also past work), the paper should include results of RotatE on standard setting.\\n\\nThe result is worthy, nevertheless the writing's getting on my nerve. This is only my ranting, but that's what gets to people working closely with this topic. This is in a comment to AC because the conference can choose which practice to endorse. I urge the authors to rewrite in a more straightforward and fair style.\\n\\n[1] Trouillon, Th\\u00e9o, et al. \\\"Complex embeddings for simple link prediction.\\\" ICML '16.\\n[2] Wieting, John, et al. \\\"Towards universal paraphrastic sentence embeddings.\\\" ICLR '16.\", \"title\": \"Thanks for your hard work, and a few comments\"}", "{\"comment\": \"The \\\"self-adversarial\\\" sampling is not a new technique. It dated back to at least 2016, as used in training sentence embedding [1]. Using this technique usually helps, but sometimes it is tricky because of more false-negative. How exactly the author implemented this technique in their model to avoid false-negative is an interesting question.\\n\\n[1] Wieting, J., Bansal, M., Gimpel, K., & Livescu, K. (2015). Towards universal paraphrastic sentence embeddings. arXiv preprint arXiv:1511.08198. ICLR '16.\", \"title\": \"Self-adversarial sampling in the literature\"}", "{\"metareview\": \"This paper proposes a knowledge graph completion approach that represents relations as rotations in a complex space; an idea that the reviewers found quite interesting and novel. 
The authors provide analysis to show how this model can capture symmetry/assymmetry, inversions, and composition. The authors also introduce a separate contribution of self-adversarial negative sampling, which, combined with complex rotational embeddings, obtains state of the art results on the benchmarks for this task.\", \"the_reviewers_and_the_ac_identified_a_number_of_potential_weaknesses_in_the_initial_paper\": \"(1) the evaluation only showed the final performance of the approach, and thus it was not clear how much benefit was obtained from adversarial sampling vs the scoring model, or further, how good the results would be for the baselines if the same sampling was used, (2) citation and comparison to a closely related approach (TorusE), and (3) a number of presentation issues early on in the paper.\\n\\nThe reviewers appreciated the author's comments and the revision, which addressed all of the concerns by including (1) additional experiments to performance with and without self-adversarial sampling, and comparisons to TorusE, (2) improved presentation.\\n\\nWith the revision, the reviewers agreed that this is a worthy paper to include in the conference.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting idea, solid results, good analysis\"}", "{\"comment\": \"This is a great paper with strong empirical performance!!\\n\\nI suppose you have also tried RotatE without self-adversarial training. Was it still better than all the other baselines (without self-adversarial training)? Or is it the combination of RotatE and self-adversarial that is crucial?\\n\\nI think it is also necessary to put extensive results of all the baselines with self-adversarial training on *ALL* the datasets. When proposing two complementary methods, it is crucial to clearly separate the contribution. To me, it is surprising that self-adversarial training alone can significantly boost the performance of all the methods, and the training strategy is already a great contribution.\", \"title\": \"Great paper! Results of RotatE without self-adversarial training?\"}", "{\"comment\": \"> This issue should be irrelevant if the model give different scores for almost all triples.\\n\\nIt's likely to cause troubles if the model is likely to give the same scores to different triples, which happens e.g. when you have saturating non-linearities.\\n\\nOne way of dealing with this is to just fall back to the original entity ordering, like in [1, 2]\\n\\n[1] https://github.com/uclmr/inferbeddings/blob/master/inferbeddings/evaluation/metrics.py#L82\\n[2] https://github.com/glorotxa/SME/blob/master/model.py#L1691\", \"title\": \"Evaluation issue\"}", "{\"comment\": \"Thanks for your response. However, I think your example of the ComplEx model missed the point. Moreover, it is not a proof that ComplEx cannot model composition. In fact, the example has reasoning error. I can always start from picking r1 \\\\circ r2 != alpha r3 then picking x, y, z that satisfies <r1, x, \\\\bar{y}>, <r2, y, \\\\bar{z}>, and <r3, x, \\\\bar{z}>. One example does not make a proof.\\n\\nThe point is, as you have many strong claims, I expect to see the proofs, either mathematical proof, or clear empirical evidences. For example, showing ComplEx fails miserably on synthetic data with composition pattern.\", \"update\": \"We should focus on main points. Please justify your claim about composition pattern. 
Thanks again.\", \"title\": \"The claim about composition pattern\"}", "{\"comment\": \"Thanks for the great answer! It makes sense to me!\\nProbably, when N is large (in 1-to-N relation), it is better for TransE-type model to down-weight the corresponding loss term, so that those N entities will not be forced to have very similar embeddings.\", \"another_related_question\": \"how does the training loss behave for your model? Does it perfectly fit the training set?\", \"title\": \"Thanks for the answer!\"}", "{\"title\": \"You are right!!\", \"comment\": \"Thanks for your understanding! You are right! \\u2018ordinal\\u2019 is not sufficient in the case when the true triple comes earlier in the list, especially when the true triplet is put in the beginning of the list. The ConvKB\\u2019s updated new eval.py [1] suffers this problem by always putting the true triplet in the first position (see the codes below).\\n\\n#thus, insert the valid test triple again, to the beginning of the array\\nnew_x_batch = np.insert(new_x_batch, 0, x_batch[i], axis=0)\\nnew_y_batch = np.insert(new_y_batch, 0, y_batch[i], axis=0)\\n\\nIn this case, \\u2018ordinal\\u2019 is essentially equivalent to \\u2018min\\u2019, so it\\u2019s not sufficient. However, this problem can be easily addressed by randomly shuffling the list. \\n\\n[1] https://github.com/daiquocnguyen/ConvKB/commit/c7ee60526ee81b46c2b0075cca2e387b0dbc6e90\"}", "{\"title\": \"Please refer to our reply to Reviewer2\", \"comment\": \"Thanks for such a good question! We have provided some theoretical analysis to show that the RotatE model can also somehow model the 1-to-N relations. Please refer to our response to Reviewer2.\"}", "{\"title\": \"The RotatE model is somehow able to model 1-to-N relations.\", \"comment\": \"We first would like to provide some theoretical analysis to show that the RotatE model can also somehow model the 1-to-N relations. Taking a 1-to-N relation r as an example. The triplets having the head entity x and relation r are denoted as: r(x, y1), r(x, y2) \\u2026. r(x, yn). When the optimization converges, it could be easily to find out that the embeddings of y1, y2, \\u2026, yn will be evenly distributed on the surface of a hypercube (or a hypersphere in the case of L-2 norm) centered at rx. In other words, ||rx - y1|| = ||rx - y2|| = .. = ||rx - yn||. This phenomenon is the same as in semantic matching models, like ComplEx, where the scores <r,x,\\\\bar{y1}>=<r,x,\\\\bar{y2}>=..=<r,x,\\\\bar{yn}>. Therefore, the RotatE model can somehow deal with 1-to-N relations just like ComplEx, as well as TransE.\\n\\nA more elegant and rigorous approach to model the 1-to-N, N-to-1, and N-to-N relations is to leverage a probabilistic framework to model the uncertainties of the entities, where each predicted entity is represented as a Gaussian distribution. This has been proved quite effective in [1]. Our RotatE model can easily leverage this framework to mitigate this issue. \\n\\nAnother thing to note is that the focus of this paper is to model and infer the different types of relation patterns, but not the 1-to-N, N-to-1, and N-to-N relationships. However, we will conduct further experiments to compare the performance of different methods (TransE, ComplEx and RotatE) on the 1-1, 1-to-N, N-to-1, and N-to-N relationships. 
\\n\\n[1] Shizhu He, Kang Liu, Guoliang Ji and Jun Zhao, Learning to Represent Knowledge Graphs with Gaussian Embedding\"}", "{\"title\": \"1-to-N, N-to-1, N-to-N?\", \"comment\": \"Thanks a lot for the response and updating the paper.\\n\\nWhat is your response to the public comment above?\", \"https\": \"//openreview.net/forum?id=HkgEQnRqYQ&noteId=rkgeGrg_Tm\\n\\nSpecifically, if TransE and RotatE suffer from not being able to model 1-to-N, N-to-1, N-to-N relations, what is your take on why this is not reflected in the experimental results for RotatE? Is this a limitation of the used datasets?\\n\\n\\n-- R2\"}", "{\"title\": \"Clarify why 'ordinal' is not sufficient?\", \"comment\": \"This will likely not be taken into account for the decision, so I don't want to discuss this too much.\\n\\nBut it is an important issue for the field, and I understand the concern raised by the authors: triples with the same score should get random (or ideally, max) ranking, not min. With min, the MRR ranking will be inflated, incorrectly, and benefits methods that tend to produce tied scores.\\n\\nI have a quick question for the authors though. Can you verify, and explain, why rankdata(results, method=\\u2019ordinal\\u2019) is not sufficient? Is it because the true triple comes earlier in the list (somehow)?\"}", "{\"comment\": \"1. \\\" results_with_id = rankdata(results, method='min') \\\": I used this last year because I simply want to give the valid test triple and its replicated triples a same rank (since I used a batch size).\\n\\n2. \\\" A simple ... wrong\\\": your example is not real since a model tends to give high scores to valid triples and low score to invalid triples. None of existing models can give a MRR score of 1.\", \"i_have_another_question_for_you\": \"Assume that a valid test triple and some of its corrupted triples have a same score. Why must you think it is wrong if assigning them a same rank?\\n\\n3. \\\" For the previous codes, We opened ... /pull/4). \\\": I keep to maintain my code and do not accept any pull request before/without opening an issue in my ConvKB github for a discussion. As I said in my previous reply, it could be much better if you created an open issue in my github with your official account.\\n\\n4. \\\" results_with_id = rankdata(results, method=\\u2019ordinal\\u2019) \\\": I just updated my code using \\\"ordinal\\\" and still get a same results (with a quick test using pre-trained TransE embeddings). You can check and test it for ConvKB. No bug.\\n\\n5. I will not discuss about the implementation of my evaluation further, here. If you still have other problems, you can create an open issue in my ConvKB github.\\n\\nThank you for your time and discussion.\", \"title\": \"Still no bug in our ConvKB evaluation!\"}", "{\"title\": \"Thanks for your comments!!\", \"comment\": \"Thanks for your comments!! The difference between RotatE and ComplEx can be summarized as follows:\\n\\n(1)ComplEx belongs to the semantic matching model while RotatE belongs to the distance-based model. Most of existing knowledge graph embedding models can be roughly classified into two categories: Translational(Transformational) Distance Models and Semantic Matching Models [1]. The former measure the plausibility of a fact as a translation(transformation) between two entities, while the latter measure the plausibility of facts by matching latent semantics of entities and relations. RotatE and ComplEx are in different categories. 
Actually, we can find that the relation between ComplEx and RotatE is in analogy to the relation between TransF [2] and TransE, where the former can be regarded as a slack version of the latter.\\n\\n(2) As a result, the biggest difference between ComplEx and RotatE addressed in this paper is that, the RotatE model can infer the composition pattern of relations, while the ComplEx model cannot. A simple counterexample could illustrate this point.\\n\\nLet\\u2019s assume r1(x, y), r2(y, z) and r3(x, z) hold, and then according to ComplEx we have\\n\\nRe(<r1, x, \\\\bar{y}>) > Re(<r1, x\\u2019, \\\\bar{y\\u2019}>)\\nRe(<r2, y, \\\\bar{z}>) > Re(<r2, y\\u2019, \\\\bar{z\\u2019}>)\\nRe(<r3, x, \\\\bar{z}>) > Re(<r3, x\\u2019, \\\\bar{z\\u2019}>)\\n\\nwhere r1(x\\u2019, y\\u2019), r2(y\\u2019,z\\u2019) and r3(x\\u2019, z\\u2019) are negative triplets.\\n\\nFrom the above equations, we can find that the ComplEx model does not model a bijection mapping from h to t via relation r. For example, let x=-1+i, y=1, z=1+i, r1=-1-0.8i, r2= 0.2+i, r3=-0.8-i, we have r1(x, y), r2(y, z) and r3(x, z) hold, because\\n\\n<r1, x, \\\\bar{y}> = 1.8 - 0.2i\\n<r2, y, \\\\bar{z}> = 1.2 + 0.8i\\n<r3, x, \\\\bar{z}> = 2 - 1.6i\\n\\nHowever, r1 * r2 = 0.6 - 1.16i, r3= - 0.8 - i do not show the supposed pattern r1 \\\\circ r2 = \\\\alpha r3 here.\\n\\nAs for the comparison with TransE, the rotation in the RotatE model is in the complex plane of each embedding vector element, as the same as TransE. This is different from the rotation is in the whole embedding space by matrix multiplication.\\n\\n\\u201cAbout experiments, for fair comparisons, results should be reported on common and standard settings, especially with and without new negative sampling method\\u2026.\\u201d\\n\\nWe have added the results of TransE and ComplEx with the new adversarial negative sampling technique on three datasets in Table 8. \\n\\n\\u201cThe authors should also address how they estimate/or approximate the softmax in Equation 4 of negative sampling method to scale to large datasets, because it is very costly due to the normalization term. ...\\u201d\\n\\np(h\\u2019_j , r, t\\u2019_j |{(h_i , r_i , t_i)}) is defined as the probability that we sample (h\\u2019_j , r, t\\u2019_j) from a sampled set {(h_i , r_i , t_i)}, so we calculate the softmax function only on the sampled triplets. This is very efficient.\\n\\n\\u201c It's also not clear what $ f_r $ refers to in Equation 4.\\u201d\\n\\n $f_r$ is the score function introduced in Table 1, which equals to $- d_r$.\\n\\n[1] Knowledge Graph Embedding: A Survey of Approaches and Applications\\n[2] Knowledge graph embedding by flexible translation\"}", "{\"title\": \"Your updated codes still have the same problem.\", \"comment\": \"Thanks for your verification for your model. We do agree that the implementation of your model is correct. However, what we pointed out is that your evaluation is problematic!!\\n\\nFor your updated eval.py, we find that you used the following code to get the rank for each triplets:\\n\\nresults_with_id = rankdata(results, method='min')\\n\\nwhere \\u2018min\\u2019 represents \\u201cThe minimum of the ranks that would have been assigned to all the tied values is assigned to each value. 
(This is also referred to as \u201ccompetition\u201d ranking.)\u201d according to the official documentation [1].\n\nHowever, such \"a specific ranking procedure\" tends to rank the true positive triplets in a high position if there are many triplets with the same score.\n\nA simple example: if a model produces score=b for all triplets, then results_with_id = rankdata(results, method='min') will report every triplet as ranked in the first position. In other words, in this case MRR = 1, which is definitely wrong.\n\nMoreover, as mentioned in [2], we have fixed the bug in your previous code and reported the true performance of your model on FB15k-237. We provided the checkpoint file, where you can check that MRR = 40 with your original eval.py, but 24 with our bug-fixed eval.py.\n\nAs for your updated code, we suggest replacing the \u201crankdata\u201d part by:\n\nresults_with_id = rankdata(results, method=\u2019ordinal\u2019)\n\nwhere \u2018ordinal\u2019 means \u201cAll values are given a distinct rank, corresponding to the order that the values occur in a.\u201d according to the official documentation [1]. Although the results may differ slightly from those of our released bug-fixed eval.py [2] (we used quicksort ranking, following you), it would also provide a valid evaluation for your model.\n\nFor the previous code, we opened a pull request that fixes the bug (https://github.com/daiquocnguyen/ConvKB/pull/3), but it was closed. For your new code, we also opened a pull request to fix the bug (https://github.com/daiquocnguyen/ConvKB/pull/4).\n\nFinally, we want to emphasize again that we did not intend any offence to your work. The truth is that we found a problem, and we want to make it right.\n\n[1]: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rankdata.html\n[2]: https://github.com/KnowledgeBaseCompleter/eval-ConvKB\"}", "{\"title\": \"Thanks for your appreciation and great suggestions!\", \"comment\": \"Thanks for your appreciation of our work and the great comments. We\u2019ve revised the introduction\u2019s discussion of the representations in the complex domain.\n\n\u201cThe optimization section does not mention how constraints are imposed.\u201d\n\nSince each relation is modeled as a rotation in the complex vector space, we represent each relation r in its polar form with modulus 1, i.e., \nRe(r) = cosine(\\theta) and Im(r) = sine(\\theta), where \\theta is the phase of relation r. With the polar-form representation, the constraints are satisfied by construction.\n\n\u201cIn experiments, how does the effective number of parameters that are used to express representations compare when the representations are a complex vs a real number \u2026.\u201d\n\nIf the same number of dimensions is used for both the real and imaginary parts of the complex number as for the real number, the number of parameters for the complex embeddings is twice the number of parameters for the embeddings in the real space. To make a fair comparison, in the process of grid search for finding the optimum embedding dimension, we double the range of the search space for models represented in real space such as TransE.
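For readers following the constraint discussion above, here is a minimal NumPy sketch of the phase parameterization being described. It is illustrative only, not the authors' released implementation; the function and variable names are ours, and the only pieces taken from the thread are the modulus-1 constraint via phases and the element-wise rotation distance. Storing phases is also why, in the released repository, the raw relation-embedding dimension can differ from the doubled real-valued entity embedding, as discussed in the earlier -de/-dr comment.

```python
import numpy as np

def rotate_distance(head, rel_phase, tail):
    """Distance || h o r - t ||_1 with r = cos(theta) + i*sin(theta).

    head, tail: complex-valued entity embeddings, shape (k,).
    rel_phase:  real-valued phase vector theta, shape (k,).
    Storing only the phase keeps every |r_i| = 1 automatically, so the
    modulus constraint needs no projection or clipping step.
    """
    rel = np.cos(rel_phase) + 1j * np.sin(rel_phase)  # unit-modulus rotations
    return np.sum(np.abs(head * rel - tail))          # element-wise (Hadamard) product

# Toy check: if t is exactly h rotated by r, the distance is ~0, and the
# inverse relation corresponds to negating the phases.
k = 4
rng = np.random.default_rng(0)
h = rng.standard_normal(k) + 1j * rng.standard_normal(k)
theta = rng.uniform(-np.pi, np.pi, size=k)
t = h * (np.cos(theta) + 1j * np.sin(theta))
print(rotate_distance(h, theta, t))    # ~0
print(rotate_distance(t, -theta, h))   # ~0: inverse relation = negated phases
```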
\\n\\n\\u201cSince the method is reported to beat several number of competitors, it is useful to provide the code.\\u201d\\n\\nYes, we will definitely release our code and share it with the entire community.\"}", "{\"title\": \"Thanks for the great comments and suggestions!\", \"comment\": \"\\u201cParticularly, I want to see results of a stronger baseline, ComplEx, equipped with the adversarial sampling approach\\u2026.\\u201d\\n\\nWe have added the experimental results of TransE and ComplEx on three datasets in our paper (Table 8). We can see that our proposed approach still outperforms ComplEx with the new adversarial approach, especially on the data set FB15k-237 and Countries. The reason is that FB15k-237 and Countries contain many composition patterns, which cannot be modeled by ComplEx but can be effectively modeled by RotatE.\\n\\n\\u201cIdeally, I would also like to see multiple repeats of the experiments to get a sense of the variance of the results...\\u201d\\n\\nWe also added the variance of the results of our model on different data sets, which are summarized into Table 12 in the appendix. We can see that the variance of the results are very small, 0.001 at maximum. \\n\\n\\u201cTable 6: How many repeats were used for estimating the standard deviation?\\u201d\\n\\nOnly 3 are used. Since the variance are very small, the same results are obtained with more repeats.\\n\\n\\u201cWhile I understand that this paper focuses on knowledge graph embeddings, I believe the large body of other relational AI approaches should be mention\\u2026.\\u201d\\n\\nWe have added some discussion on these methods in the related work section.\"}", "{\"title\": \"Thanks for your appreciation to our work and mentioning another relevant work!\", \"comment\": \"Thanks for your appreciation to our work and your great comments on improving the paper. We have added the experimental results of TransE and ComplEx with self-adversarial negative sampling on three datasets in our paper (Table 8). We have also added the contribution of the self-adversarial negative sampling into both the abstract and introduction.\\n\\nRegarding TorusE, thanks again for bringing it to our attention, which we did not notice before. It is indeed relevant to our model, which is a concurrent work. We have discussed this model in the related work section. The difference between TorusE and RotatE can be summarized as below:\\n\\n(1) The TorusE model constraints the embedding of objects on a torus, and models relations as translations, while the RotatE model embeds objects on the entire complex vector space, and models relations as rotations.\\n\\n(2) The TorusE model requires embedding objects on a compact Lie group [2] while the RotatE model allows embedding objects on a non-compact Lie group, which has much more representation capacity. The TorusE model is actually very close to a special case of our model, i.e., pRotatE, which constraints the modulus of the head and entity embeddings fixed. As shown in Table 5, it is very important for modeling and inferring the composition patterns by embedding the entities on a non-compact Lie group. We can also compare the results of TorusE and RotatE on the FB15k and WN18 data sets (Table 3 in the TorusE paper and Table 4 in our paper), we can see that our RotatE model significantly outperforms TorusE on the two data sets.\\n\\n(3) The motivations of the TorusE paper and this paper are quite different. 
The TorusE paper aims to solve the regularization problem of TransE, while our paper focuses on inferring and modeling three important and popular relation patterns.\\n\\n[1] Ebisu, Takuma, and Ryutaro Ichise. \\\"Toruse: Knowledge graph embedding on a lie group.\\\" arXiv preprint arXiv:1711.05435 (2017).\\\"\\n[2] https://en.wikipedia.org/wiki/Compact_group#Compact_Lie_groups\"}", "{\"comment\": \"This paper argues that the advantage of the proposed method against ComplEx is its ability to model composition. While this is true, the disadvantage of the TransE-type model (which includes RotatE) is its inability to deal with 1-to-N, N-to-1, N-to-N relations. It seems to me that the composition and modeling of these complicated relations are intrinsically at odds with each other. The author should make this clear, especially in Table 2; ComplEX can handle 1-to-N, N-to-1, N-to-N relations, while RotatE cannot.\", \"title\": \"Composition v.s. modelling 1-to-N, N-to-1, N-to-N relations in Table 2\"}", "{\"comment\": \"The reported results are high, which raise my interest. But, it also raises attention to some important issues that need to be addressed.\\n\\nThe proposed model is very similar to the ComplEx embedding model [1]. In fact, in the ComplEx model, the score function is $ real(<r, h, \\\\bar{t}>) $, which includes the element-wise product between $ r \\\\circ h $. Because the ComplEx model uses complex-value embeddings, this product is essentially rotation in the complex plane, thus the same as the idea in this paper.\\n\\nThe authors should clarify and emphasize how their model could provide advantage over the ComplEx model, which is currently one of the SOTA. The authors should provide convincing theoretical arguments because many researches have shown that excessive hyper-parameter tuning and optimization techniques can change benchmark results a lot [2]. The authors also need to provide proof that the ComplEx model cannot model \\\"composition\\\" as in Table 2, given the two models are essentially similar.\\n\\nAdditionally, the comparison with TransE is ambiguous. The authors should make clear that the rotation is in the complex plane of each embedding vector element, thus different from rotation in the embedding space; and check that their arguments and analyses regarding TransE still stand.\\n\\nAbout experiments, for fair comparisons, results should be reported on common and standard settings. An example practice could be seen in [3].\", \"ref\": \"[1] Trouillon, Theo, et al. Complex Embeddings for Simple Link Prediction. ICML 2016.\\n[2] Kadlec, Rudolf, Ondrej Bajgar, and Jan Kleindienst. \\\"Knowledge base completion: Baselines strike back.\\\" arXiv preprint arXiv:1705.10744 (2017).\\n[3] Lacroix, Timoth\\u00e9e, Nicolas Usunier, and Guillaume Obozinski. \\\"Canonical Tensor Decomposition for Knowledge Base Completion.\\\" ICML 2018.\", \"title\": \"What is the difference compared to the ComplEx embedding model?\"}", "{\"title\": \"This paper is an important new contribution to the field. The results should be compared to TorusE.\", \"review\": \"The authors propose to model the relations as a rotation in the complex vector space. They show that this way one can model symmetry/antisymmetry, inversion and composition. Another contribution is the so-called self-adversarial negative sampling.\", \"pros\": \"The problem that they raise is important and the solution is relevant. The results considering the simplicity of the proposed model are impressive. 
The experiments, proofs of the lemmas and general overview are easy to follow, well-written and well-organized. The improvement from the negative sampling approach is also noteworthy.\", \"cons\": \"Nevertheless, this approach is very similar to TorusE [1], since the element-wise rotation in the complex plane is closely related to a transformation on a high-dimensional torus. Therefore, the authors are expected to investigate the differences between these two approaches.\", \"suggestions\": \"Also, it is important to note the result of the ablation study in Table 10 of the supplementary materials, since part of the improvement comes not only from how the authors model the relation but also from the negative sampling (which could improve the results of other works as well). Maybe it is even better if Table 10 is presented in the main paper.\nAnother suggestion is to mention the negative sampling contribution in the abstract as well.\n\n[1] Ebisu, Takuma, and Ryutaro Ichise. \"TorusE: Knowledge graph embedding on a Lie group.\" arXiv preprint arXiv:1711.05435 (2017).\"\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"comment\": \"How many valid test triples and their corrupted triples have the same score? And what are they and their ranks on WN18RR and FB15k-237? You had mentioned \"equal to 0\" (the same score) in your first reply. It seems that you actually did not run my code before. I do not want to discuss our model in detail, as my code was based on Denny Britz's implementation for applying a CNN to text classification.\n\nThere is nothing called \"a specific ranking procedure\" in my evaluation. I do not know why you pay so much attention to \"replicating the valid test triples\". Again, this is straightforward and does not matter when ranking, because each valid test triple and its replicated triples have the same score and the same rank.\n\nAs I said in my first reply, it would be nice if you created an open issue in my ConvKB GitHub for further discussion. Then I could tell you that we also had another version that evaluates the model without replicating the valid test triples, for which the experimental results are still the same with and without replicating the triples. This obviously helps to save time for both of us.\n\nThe \"without replicating\" version ran slower than the version on GitHub, thus I did not upload it last year. But now, I have just added it to my ConvKB GitHub. You can check and test it.\n\nYour approach and results are great. And you do not need to beat all scores on all datasets to have an accepted paper. I would appreciate it if you could also include our published results.\", \"title\": \"No bug in our ConvKB evaluation!\"}", "{\"title\": \"Sorry, we meant that many triples have the same score, and an open evaluation code is now available.\", \"comment\": \"Hi Dai,\n Thanks for the verification. In the above comment, sorry, we meant that many triplets have the same score, which equals the bias of your model, i.e., b = tf.Variable(tf.constant(0.0, shape=[num_classes]), name=\"b\") in your model.py code. The reason is that in many cases none of the nonlinear ReLU units are activated. In addition, we found that this problem only occurs when the ReLU activation is used in the model. This explains why the evaluation of other models, including TransE, TransR, TransH and STransE, is correct.
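To make the tie-handling issue discussed in this thread concrete, here is a small, self-contained illustration using scipy's rankdata. It is only a toy sketch (the array size and the seed are arbitrary, and this is not the ConvKB or RotatE evaluation code): when many candidates share a single score and the true triple is placed first in the list, both 'min' and 'ordinal' report a perfect rank, whereas shuffling before ranking removes that advantage.

```python
import numpy as np
from scipy.stats import rankdata

# One true triple (index 0) plus 99 corrupted triples, all with the same score,
# e.g. a constant bias when no ReLU unit fires.
scores = np.zeros(100)

rank_min = rankdata(scores, method="min")[0]      # 1.0: every tied entry gets rank 1
rank_ord = rankdata(scores, method="ordinal")[0]  # 1.0: the first occurrence wins the tie

# Shuffling before ranking gives the true triple a random position among the ties,
# which is the behaviour a fair evaluation of tied scores should have.
rng = np.random.default_rng(0)
perm = rng.permutation(len(scores))
shuffled_ranks = rankdata(scores[perm], method="ordinal")
rank_shuffled = shuffled_ranks[np.where(perm == 0)[0][0]]

print(rank_min, rank_ord, rank_shuffled)  # e.g. 1.0 1.0 <random rank between 1 and 100>
```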
\n\nWe suggest re-evaluating your model without replicating the true triplets. We\u2019ve fixed this bug in your code and put the updated code at https://github.com/KnowledgeBaseCompleter/eval-ConvKB.\n\nBy the way, we appreciate your work, which we find really interesting. We did not intend any offence to your work. We hope we can push forward this exciting direction together. We look forward to your feedback.\"}", "{\"title\": \"Solid work\", \"review\": \"The paper proposes a method for graph embedding to be used for link prediction, in which each entity is represented as a vector in complex space and each relation is modeled as a rotation from the head entity to the tail entity.\nFrom the modeling perspective, the proposed model is rich, as many types of relations can be modeled with it. In particular, symmetric and anti-symmetric relations can be modeled. It is also possible to model the inverse of a relation and the composition of two relations with this setup. Empirical evaluation demonstrates that the method is effective and beats a number of well-known competitors.\n\nThis is solid work and could be of interest to the community. The modeling is elegant and the experimental results are strong.\nI have not seen it proposed before.\n\n- The presentation of the paper could be improved; in particular, the first paragraph of page 2, where the representation in the complex domain is introduced, is hard to follow and could be improved by inserting formulations instead of merely text.\nIt would be nice to explicitly mention the number of real and imaginary dimensions of the complex vectors and provide an explicit formulation for the Hadamard product in the complex domain, since the term elementwise could be ambiguous.\n- The optimization section does not mention how constraints are imposed. This is an important technicality and should be clarified.\n- In experiments, how does the effective number of parameters that are used to express representations compare when the representations are a complex vs a real number? Each complex number is represented with two parameters and each real number with one parameter. How is that taken into account in the experiments?\n- Since the method is reported to beat a number of competitors, it is useful to provide the code.\n\nBased on the results above, I vote for the paper to be accepted.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"Disclosure: I am the author of ConvKB. I have re-run my ConvKB implementation, and there is not a single triple with a score of 0 on FB15k-237.\n\nIt would be nice if you could create an open issue in my ConvKB GitHub before discussing any information in public.\n\nUpdate for clarification: It is important to note that our implementation can work with other score functions. Last year, I verified my \u201ceval.py\u201d implementation by using the same output vector and matrix embeddings produced by other models (such as TransE, TransR, TransH and STransE) to prove that our \"eval.py\" implementation is correct and can produce the exact same scores as produced by those models.\n\nFor each correct test triple, I just replicated this correct test triple several times and added the copies to its set of corrupted triples, in order to work with a fixed batch size (as shown in Lines 188-190 in \u201ceval.py\u201d). This is straightforward and does not matter when ranking the correct test triple.
I thought that \"the same score\" you mentioned was actually about the correct test triple because of the replicating. You should have a careful look at this point and then edit your comment above to give a reasonable reply.\n\nI just read your paper. This is nice work. Your experimental results are still great even if you add negative results from other papers.\", \"title\": \"No bug in our ConvKB evaluation!\"}", "{\"title\": \"Thanks for pointing this out!\", \"comment\": \"Thanks for pointing this out! We\u2019re aware of the result of ConvKB, which achieves a very high MRR on FB15k-237 (0.396). The reason that we did not compare with ConvKB [1] is that there is a bug in ConvKB\u2019s evaluation.\n\nWe tried to reproduce their results from their published code [2], but found that ConvKB tends to assign the same score, i.e., 0, to many triplets. The reason is that the ReLU activation function is used in the convolution layers, which tends to have very sparse output, i.e., the outputs of many neurons are zero. This causes a big problem in the evaluation.\n\nFor evaluation, given a query (h, r, ?), the goal is to identify the rank of the true positive triplet (h, r, t) among all the possible (h, r, t\u2019) triplets. Since the scores of many triplets given by ConvKB equal 0 (typo, should be \"the same score\" or \"bias\"), the true positive triplets and many other false triplets are all ranked in the first position at the same time. A reasonable solution would be to randomly pick a triplet among those triplets as the first-ranked triplet, and so on. However, we find that a specific ranking procedure is used by ConvKB, which tends to rank the true positive triplets in a high position. As a result, the performance evaluated in this way is really high, which is not true in reality. We strongly suggest that the authors of ConvKB take a look at this issue and fix their results.\n\nFor the results of Reciprocal ComplEx-N3, thanks again for pointing this out, which we were not aware of before the submission. However, note that the focus of Reciprocal ComplEx-N3 and this paper is different. Our paper proposes a new distance function for learning knowledge graph embeddings, and our proposed RotatE is able to infer three relation patterns including composition, symmetry/antisymmetry, and inversion, which offers good model interpretability. The focus of Reciprocal ComplEx-N3, however, is on different regularization techniques, which could potentially be applied to our proposed RotatE model. For example, on the FB15k data set, the performance of RotatE increases from 0.797 to 0.815 with the N3 regularizer, which outperforms the performance of ComplEx-N3 on FB15k (0.80). We are still in the process of implementing the reciprocal setting for our RotatE model, which seems to be quite effective according to [3].\n\n[1] A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network\n[2] https://github.com/daiquocnguyen/ConvKB\n[3] Canonical Tensor Decomposition for Knowledge Base Completion\"}", "{\"title\": \"Is it the RotatE scoring function or the adversarial sampling?\", \"review\": [\"# Summary\", \"This paper presents a neural link prediction scoring function that can infer symmetry, anti-symmetry, inversion and composition patterns of relations in a knowledge base, whereas previous methods were only able to support a subset. The method achieves state of the art on the FB15k-237, WN18RR and Countries benchmark knowledge bases.
I think this will be interesting to the ICLR community. I particularly enjoyed the analysis of existing methods regarding the expressiveness of relational patterns mentioned above.\", \"# Strengths\", \"Improvements over prior neural link prediction methods\", \"Clearly written paper\", \"Interesting analysis of existing neural link prediction methods\", \"# Weaknesses\", \"As the authors not only propose a new scoring function for neural link prediction but also an adversarial sampling mechanism for negative data, I believe a more careful ablation study should have been carried out. There is an ablation study showing the impact of the negative sampling on the baseline TransE, as well as another ablation in the appendix demonstrating the impact of negative sampling on TransE and the proposed method, RotatE, for FB15k-237. However, from Table 10 in the appendix, one can see that the two competing methods, TransE and RotatE, in fact perform fairly similarly once both use adversarial sampling, so it still remains unclear whether the gains observed in Tables 4 and 5 are due to adversarial sampling or a better scoring function. In particular, I want to see results of a stronger baseline, ComplEx, equipped with the adversarial sampling approach. Ideally, I would also like to see multiple repeats of the experiments to get a sense of the variance of the results (as has been done for Countries in Table 6).\", \"# Minor Comments\", \"Eq 5: Already introduce gamma (the fixed margin) here.\", \"While I understand that this paper focuses on knowledge graph embeddings, I believe the large body of other relational AI approaches should be mentioned, as some of them can also model symmetry, anti-symmetry, inversion and composition patterns of relations (though they might be less scalable and therefore of less practical relevance); e.g., the following come to mind:\", \"Lao et al. (2011). Random walk inference and learning in a large scale knowledge base.\", \"Neelakantan et al. (2015). Compositional vector space models for knowledge base completion.\", \"Das et al. (2016). Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks.\", \"Rocktaschel and Riedel (2017). End-to-end Differentiable Proving.\", \"Yang et al. (2017). Differentiable Learning of Logical Rules for Knowledge Base Completion.\", \"Table 6: How many repeats were used for estimating the standard deviation?\"], \"update\": \"I thank the authors for their response and additional experiments. I am increasing my score to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"You should mention the experimental results of ConvKB [1] and Reciprocal ComplEx-N3 [2]. Reciprocal ComplEx-N3 gives higher MRR and Hits@10 scores than yours on both FB15k and FB15k-237. ConvKB produces better scores than yours for MRR on FB15k-237 and MR on WN18RR.\n\n[1] A Novel Embedding Model for Knowledge Base Completion Based on Convolutional Neural Network. NAACL-HLT 2018.\n[2] Canonical Tensor Decomposition for Knowledge Base Completion. ICML-2018. Oral presentation.\", \"title\": \"Results of ConvKB and Reciprocal ComplEx-N3 not mentioned\"}" ] }
rye7XnRqFm
Q-map: a Convolutional Approach for Goal-Oriented Reinforcement Learning
[ "Fabio Pardo", "Vitaly Levdik", "Petar Kormushev" ]
Goal-oriented learning has become a core concept in reinforcement learning (RL), extending the reward signal as a sole way to define tasks. However, as parameterizing value functions with goals increases the learning complexity, efficiently reusing past experience to update estimates towards several goals at once becomes desirable but usually requires independent updates per goal. Considering that a significant number of RL environments can support spatial coordinates as goals, such as on-screen location of the character in ATARI or SNES games, we propose a novel goal-oriented agent called Q-map that utilizes an autoencoder-like neural network to predict the minimum number of steps towards each coordinate in a single forward pass. This architecture is similar to Horde with parameter sharing and allows the agent to discover correlations between visual patterns and navigation. For example learning how to use a ladder in a game could be transferred to other ladders later. We show how this network can be efficiently trained with a 3D variant of Q-learning to update the estimates towards all goals at once. While the Q-map agent could be used for a wide range of applications, we propose a novel exploration mechanism in place of epsilon-greedy that relies on goal selection at a desired distance followed by several steps taken towards it, allowing long and coherent exploratory steps in the environment. We demonstrate the accuracy and generalization qualities of the Q-map agent on a grid-world environment and then demonstrate the efficiency of the proposed exploration mechanism on the notoriously difficult Montezuma's Revenge and Super Mario All-Stars games.
[ "reinforcement learning", "goal-oriented", "convolutions", "off-policy" ]
https://openreview.net/pdf?id=rye7XnRqFm
https://openreview.net/forum?id=rye7XnRqFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rke8uoz7MV", "HklxPRElgN", "B1ge36LsJN", "r1eBktwNJV", "Hke-4YVmkV", "r1lOBG5KCX", "rJxSZbqFCm", "rklgYx5Y07", "SJxwFycY0Q", "SJgnyWWqh7", "BklVETbY3Q", "HJx0bKkKhQ" ], "note_type": [ "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1547017069994, 1544732247651, 1544412583680, 1543956700656, 1543878952579, 1543246399760, 1543246076864, 1543245943560, 1543245695506, 1541177572496, 1541115179817, 1541105926477 ], "note_signatures": [ [ "~Shishir_Sharma1" ], [ "ICLR.cc/2019/Conference/Paper1346/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1346/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1346/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1346/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1346/Authors" ], [ "ICLR.cc/2019/Conference/Paper1346/Authors" ], [ "ICLR.cc/2019/Conference/Paper1346/Authors" ], [ "ICLR.cc/2019/Conference/Paper1346/Authors" ], [ "ICLR.cc/2019/Conference/Paper1346/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1346/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1346/AnonReviewer1" ] ], "structured_content_str": [ "{\"comment\": \"The paper proposes an exploratory algorithm which replaces exploration approach such as e-greedy, which only relies on random walks, in favor of a goal oriented Reinforcement Learning (RL) approach. The authors propose Q-map, a convolutional autoencoder-like architecture that is used to simultaneously produce value estimates for all possible goals in compatible environments i.e environments that support spatial coordinates as goals. Finally, the authors report the results of a RL agent that explores using Q-map and exploits using a DQN on Montezuma\\u2019s Revenge and Super Mario All-Stars environment.\\n\\nWe tried to reproduce the authors' result on the Super Mario All-Stars environment and sought to extend the scope of the experimentation by testing the generalization of the agent as well. The authors have made public their code and we ported it to the Pytorch framework. While the authors have mentioned the details of most of the hyperparameters being used, the details regarding how the authors deal with a sliding window present in the environment are a little unclear. \\n\\nThe results regarding the comparison of the performance of the proposed algorithm and the baseline that we generated differ from those in the paper. Our results indicate a better performance of the baseline algorithm than the proposed algorithm. We tried to recreate the exact conditions under which the results in the paper were observed but the paper mentions an averaging of multiple runs of the algorithm with different seeds. As the algorithm and the baseline are required to be run for 5 million timesteps, which together end up taking 6 days to complete, we had to report the results for only a single run. Without any training or finetuning, the proposed agent generalizes poorly on unseen level, though this can be explained by the very different backgrounds of the levels.\", \"title\": \"Findings of the ICLR 2019 reproducibility challenge\"}", "{\"metareview\": \"The paper proposes to use a convolutional/de-convolutional Q function over on-screen goal locations, and applied to the problem of structured exploration. 
Reviewers pointed out the similarity to the UNREAL architecture, the difference being that the auxiliary Q functions learned are actually used to act in this case.\\n\\nReviewers raised concerns regarding novelty, the formality of the writing, a lack of comparisons to other exploration methods, and the need for ground truth about the sprite location at training time. A minor revision to the text was made, but the reviewers did not feel their main criticisms were addressed. While the method shows promise, given that the authors acknowledge that the method is somewhat incremental, a more thorough quantitative and ablative study would be necessary in order to recommend acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Promising but more thorough investigation needed\"}", "{\"title\": \"Insufficient Experiments\", \"comment\": \"While authors have addressed some concerns, they have not addressed others. I would encourage the authors to conduct thorough experimental evaluation and resubmit the paper.\"}", "{\"title\": \"Response\", \"comment\": [\"About UVFA. (a) During training, UVFA does not require querying all goals. As a matter of fact, the whole point of UVFA is to train on a small subset of goals, then to generalize by using the learned neural network. (b) As long as we can query UVFA \\\"the proper location\\\", we can construct similar exploration strategy. Therefore, it would be essential to compare to UVFA in the experiments.\", \"It is also noticeable that the proposed method here is not applicable to continuous state/action space.\", \"If sigmoid + logistic loss performs worse, it would be important to include such experiment (at least in the appendix) to justify your current choices.\", \"Still, the necessity of including \\\\epsilon_r, even though it is smaller than usual \\\\epsilon, implies that the proposed exploration scheme alone is not sufficient and not effective enough.\", \"To summarize, the submission is below satisfactory and not ready for publication.\"]}", "{\"title\": \"Response\", \"comment\": \"The authors agree that the idea of using a deconvolutional architecture for spatially correlated rewards/goals is not new but argue that there is no notion of goal oriented RL in the UNREAL paper. I'm not sure where the difference is since both UNREAL and Q-map learn a set of Q-functions/policies. It is true that UNREAL did not use the auxiliary policies for acting but it is not a stretch to do that. In fact, there is follow up work by Dilokthanakul et al. doing just that (see \\\"Feature Control as Intrinsic Motivation for Hierarchical Reinforcement Learning\\\" - https://arxiv.org/abs/1705.06769).\\n\\nGiven that the architecture and training procedure in this paper are not really new, I would expect a really compelling demonstration of how the additional prior knowledge can be used, and I don't think the paper provides that. As I mentioned in the original review, claims of better exploration are usually backed up with experiments on known hard exploration tasks. I would argue that this is even more important for a method that requires additional prior knowledge. If the motivation for the prior knowledge is robotics, then why not evaluate it on a related task?\\n\\nUltimately, I appreciate the minor revision but stand by my original assessment.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"1. 
Successor features, a generalization of Dayan\\u2019s successor representation, propose a framework for transfer learning when the reward function changes between tasks but not the environment\\u2019s dynamics. In Dyan (1993), the experiment shows how the successor representations predict the future state occupancy under the current policy when trained to reach a particular goal and describes how the learning is affected when the goal location is changed. We believe this literature is quite different from Q-map which directly and simultaneously learn how to reach every possible goals and is task-independent.\\n\\n2. While UVFA requires a goal to be provided in input of the neural network, Q-map doesn't as it produces the Q-values towards all possible goals at once in output. This implies a few algorithmic differences between the two approaches when used for the proposed exploration: 1) During the goal-selection step or training, the values for all goals are queried or updated in one pass through the network while would require as many passes as there are goals with UVFA. 2) When trying to reach a given goal, the Q-values at the proper location in output are used for Q-map while this goal would just be provided in input for UVFA.\\n\\n3. We have tested regression with various non-linearities in output but have found them to perform worse. For example, sigmoids tend to squeeze values to either 0 (the goal can't be reached) or 1 (the goal can be reached in one step). Furthermore, clipping is only performed when creating the target Q-frames as always clipping the output of the network would not give any gradient for values outside of (0, 1).\\n\\n4. We have retained a minimal amount of purely random actions for several reasons: 1) They are necessary for Q-map's own exploration 2) They allow DQN to discover actions which may not be helpful for navigating the environment, such as hitting blocks to gain coins in Mario 3) The proportion of random actions used is significantly smaller than what is used in the baseline, thus the drawbacks, such as \\u201cwasteful\\u201d actions, are reduced.\\n\\n5. We agree that such environments would have been an interesting test for the Q-map. Given the time available for the rebuttal we will have to consider these for future work. Yes, UVFA could be used instead of Q-map, but it would likely be computationally slower as every possible goal would need to be passed in input and have worse learning performance due to the lack of deconvolutional architecture to facilitate generalization. Such a comparison could also be worthwhile for future work.\"}", "{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We agree with some of the pointed similarities with UNREAL, and now reference it in the paper. The autoencoder architecture and Q-learning used for its pixel-control auxiliary task are indeed similar. However, the meaning of the review's use of the term \\\"spatial goals\\\" is not very clear to us, as the pixel-control auxiliary task's purpose is to maximize the on-screen pixel value change, and has no notion of goal-oriented RL. Furthermore, the learned values are not used in any practical manner. Q-map on the other hand, is trained to minimize the number of steps towards all goal coordinates which can be used for a variety of applications, such as exploration as shown in the paper, goal-oriented control (e.g. 
if the task is to reach some coordinates), or hierarchical RL.\\n\\nWhile we agree that the necessity to localize the agent or a target object in the environment is significant, we would like to point out that it is a common assumption in goal-oriented RL, and is not impractical for certain areas of research, such as robotics. We chose to use Montezuma\\u2019s Revenge and Mario for their complexity and their role in various previous papers on exploration. We do not believe it was worthwhile showing performance chart for Montezuma\\u2019s Revenge, as the baseline random exploration never reached the key and we did not use environmental rewards.\"}", "{\"title\": \"Response to AnonReviewer2\", \"comment\": \"First, we would like to clarify that this paper makes two main contributions: 1) Q-map: a way to simultaneously learn to reach coordinates and 2) DQN + Q-map: a way to use Q-map for exploration. Unfortunately the review\\u2019s points did not address 1).\\n\\n(ii) We do reference these works in section 3.1, however, as most of them still use epsilon-greedy as part of their algorithms, our proposed method can be directly integrated with them. To isolate the impact of taking multiple steps in the direction of a goal versus random actions, we chose to only use a standard DQN agent.\\n\\n(iii) We unfortunately do not have results with the exact same experimental setup without goal biasing but during preliminary experiments we found that a goal biasing of 50% gave a performance boost on Mario. The experiment with Montezuma's Revenge does not use any biasing however as no reward was used and thus no DQN was trained.\\n\\n(iv) By exploratory actions we mean individual actions that are not greedy for the task (completely random or goal-directed). To have a fair comparison between epsilon-greedy exploration and the proposed exploration using Q-map, we ensure that these exploratory actions are following the same schedule, linearly decaying through the training.\\n\\n(v) The quoted training method was specifically used for the gridworld environment that was designed to evaluate the training of the Q-map under ideal conditions with a nearly uniform coverage of all transitions. For the experiments with Mario and Montezuma's Revenge the goal was to evaluate the proposed exploration algorithm, we therefore used the original starting states at the beginning of the levels.\\n\\n(vi) We added a new experiment using a Q-map trained first on level 1.1 and then on level 2.1. We noticed faster training and some notions of generalization even though the two levels use different tilesets and backgrounds. The videos and code are available on the website.\"}", "{\"title\": \"General comment\", \"comment\": \"We thank the reviewers for their constructive feedback. It has helped us improve the quality of the paper and gave us directions for future work.\\n\\nSince the original submission, we have updated the paper and improved the website https://sites.google.com/view/q-map-rl with some new videos and cleaner source code.\"}", "{\"title\": \"Interesting Idea, but not well evaluated\", \"review\": \"Authors propose to overcome the sparse reward problem using an exploration strategy that incentivizes the agent to visit different parts of the game screen. This is done by building Q-maps, a 3D tensor that measures the value of the agent's current state (defined as the position of the agent) and action in reaching other (x, y) locations in the map. 
Each 2D slice of the Q-map measures the value at different (x, y) locations for one action. Such 2D slices (i.e. channels) are stacked together to form the Q-map. Taking the max across the channels, thus, provides the Q-value for the optimal action.\\n\\nA policy for maximizing the rewards is trained using DQN. The Q-map based exploration is used as a replacement for \\\\epsilon-greedy exploration.\", \"the_q_map_is_used_for_exploration_in_the_following_way\": \"(a) Chose a random action with probability \\\\epsilon_r. \\n(b) If neither a random action nor a \\\"goal\\\" is chosen, a new goal is chosen with probability \\\\epislon_g. The goal is a (x, y) location, chosen so that is not too hard or too easy to reach it (i.e. Q-map values are neither too high or low; intuitively [1 - Q-map(x, y, a)] (for normalized/clipped Q) is a measure of distance of the goal). \\n -- If a \\\"goal\\\" is chosen, the greedy action to go towards the goal is chosen. \\n(c) If neither a goal or random action is chosen, DQN is used to chose the greedy exploration. \\n\\nAuthors also bias the goal selection to match DQN's greedy action. This is done as following -- from a set of goals that satisfy (b) above; chose the goal for which Q-map selected action matches the DQN's greedy action. \\n\\nResults are presented on simple 2D maze environments, Mario and Montezuma's revenge.\", \"i_have_multiple_concerns_with_the_papers\": \"(i) The writing is informal and the ideas are not well explained. It would really benefit -- if authors introduce an algorithm box or talk about the method as a sequence of points. Right now, the ideas are scattered throughout the paper. I am still confused by figure 3 -- when are random goals chosen? Do random goals correspond to (b) above? Also, when the Horde architecture, GVF and UVF are mentioned, the references are missing -- I would love for the authors to include the corresponding references. \\n\\n(ii) The idea of reaching as many states as possible has been explored in count based visitation (Bellemare et al, Tang et al) \\u2014 but no comparisons have been made to any previous work. Its always good to put a new work in the perspective of old work with similar ideas. \\n\\n(iii) The authors propose biased and random goal sampling \\u2014 I would love to see how much improvement does biased goal sampling offer over random goal sampling. \\n\\n(iv) \\u201c\\u2026compare the performance of our proposed agent and a baseline DQN with a similar proportion of exploratory actions\\u201d .. I don\\u2019t agree with this a metric \\u2014 I think the total number of steps is a good metric. Exploration is part of the agent\\u2019s algorithm to find the goal, we shouldn\\u2019t compare against DQN by matching the number of exploratory actions. \\n\\n(v) \\u201cThe Q-map is trained with transitions generated by randomly starting from any free locations in the environment and taking a random action.\\u201d Does this mean that when the agent is trained with Mario \\u2014 the game is reset after every episode and the agent is placed a random starting location? If yes, then this is not a realistic assumption. \\n\\n(vi) I would like to see \\u2014 how do Q-maps generalize across levels of Mario or Montezuma\\u2019s revenge? Does Q-map trained on level-1 help in good exploration on future levels without any further fine-tuning? \\n\\nOverall, I like the idea of incentivizing exploration without changing the reward function as is done in multiple prior works. 
However, I think more thorough quantitative evaluation is required and it will be interesting to see transfer of Q-maps outside the 2D-domains. I am happy to increase my score if such evidence is provided.\", \"other_references_worth_including\": \"(a) Strategies for goal generation: Automatic Goal Generation for Reinforcement Learning Agents (https://arxiv.org/abs/1705.06366)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"The main idea in the paper is to use on-screen locations as goals for an RL agent. Using a de-convolutional network to parameterize the Q-function allows all goals to be updated at once and correlations between nearby or similar goal locations could be modelled. The paper explores how this type of goal space can be used for better exploration showing modest improvement in scores on Super Mario.\\n\\nClarity - The paper is well written and easy to follow. The Q-map architecture is well motivated and intuitive and the exploration strategy based on Q-maps is interesting.\\n\\nNovelty - The idea of using spatial goals combined with a de-convolutional architecture is not new and goes back at least to \\u201cReinforcement Learning with Unsupervised Auxiliary Tasks\\u201d by Jaderberg et al.. The UNREAL agent used the same type of de-convolutional \\u201cQ-map\\u201d to update a spatial grid of goals all at once. The main difference is that the UNREAL agent learns about spatial goals as an auxiliary task and does not execute/act on the goals like the Q-map agent. Nevertheless, the type of architecture and algorithm (called 3D Q-learning in this paper) is essentially the same.\\n\\nSignificance - The Q-map architecture requires access to the position of the avatar on the screen at training time. I would expect that using such a significant part of the agent\\u2019s true state during training should lead to a significant improvement in performance at test time. Why not evaluate the proposed exploration strategy on well known hard exploration tasks? The results on Montezuma\\u2019s Revenge are only qualitative. There Q-map agent did outperform an epsilon-greedy DQN baseline on Super Mario but the improvement does not seem very significant given how much prior knowledge Q-map was given compared to the baseline. It is also not clear how much of the improvement comes from training the Q-map as an auxiliary task and how much of it comes from better exploration.\\n\\nOverall quality - Given that the architecture is not very novel and requires the avatar\\u2019s position to train I did not find the qualitative or quantitative results compelling enough. Perhaps the authors could show that the exploration strategy works well on several difficult exploration games. Another possibility would be to showcase other ways to use the Q-map, for example in an HRL setup.\\n\\nMinor comment - Some sections seem to be missing references. 
For example, the second paragraph of the introduction discusses GVFs and the Horde architecture without any references.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Do not have enough comparison to existing works; need to improve writing\", \"review\": \"Focus on navigation problems, this paper proposes Q-map, a neural network that estimates the number of steps (in terms of the discount factor gamma) required to reach any position on the observable screen/window. Moreover, it is shown that Q-map can be applied for exploration, by trying to reach randomly selected goal.\\n\\nPros\\n1. Novel goal-based exploration scheme\\n\\nCons\\n1. Similar idea has been proposed before\\nFor example, Dayan (1993) estimates the number of steps to reach any position on the map using successor representations. Discussion about this field (successor representations/features) is completely missing in the paper.\", \"ref\": \"- Peter Dayan. Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4):613\\u2013624, 1993.\\n- Andre Barreto, Will Dabney, Remi Munos, Jonathan J Hunt, Tom Schaul, David Silver, and Hado van Hasselt. Successor features for transfer in reinforcement learning. In Advances in Neural Information Processing Systems, pp. 4058\\u20134068, 2017.\\n- Andre Barreto, Diana Borsa, John Quan, Tom Schaul, David Silver, Matteo Hessel, Daniel Mankowitz, Augustin Zidek, and Remi Munos. Transfer in deep reinforcement learning using successor features and generalised policy improvement. In International Conference on Machine Learning, pp. 510\\u2013519, 2018.\\n\\n2. Comparison to existing methods is only vaguely discussed\\nFor example, it is claimed multiple times that UVFA requires the goal coordinates, but Q-map also requires coordinates when doing the exploration.\\n\\n3. The network architecture is not clearly presented\\nFor example, the output of the network needs to be clipped, which suggests that there is no output transform. Since the predicted output is in [0,1], it would make sense to use Sigmoid transform for each pixel and use logistic loss.\\n\\n4. The proposed exploration scheme could be unnecessarily complicated\\nSec.3.1 provides lengthy discussion about the drawback of eps-greedy exploration. Then in Sec.3.2, \\\\epsilon_r is basically the same as the eps-greedy algorithm, using to randomly select an action. Isn't this a \\\"bad\\\" thing as suggested in Sec.3.1? Moreover, the new exploration scheme requires two more hyper-parameters (min/max distance threshold), which will add more complication to the already very complicated deep RL learning procedure.\\n\\n5. Experiment results are limited\\nFor the toy experiment in Sec.2.3, the map are relatively simple. The example of Dayan (1993) with an agent surrounded by walls is an interesting scenario and should be included. The proposed Q-map (ConvNet) could fail because it is hard to learn geodesic distance with only local information. More importantly, there is no comparison to similar methods in Sec.3. UVFA can replace Q-map to do similar exploration.\\n\\n6. Writing can be greatly improved\\nThere are many grammar errors. 
To name a few, \\\"agent capable to produce\\\", \\\"the gridworld consist of\\\", \\\"in the thrist level\\\".\\n\\nMinors\\n- UFV should be UVF in the introduction\\n- Citation in Sec.3 is not consistent with the rest of the paper. Use \\\\citep or \\\\citet properly.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
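For context on the record above: the goal-selection exploration step that the reviews summarize (take a purely random action with small probability ε_r; otherwise possibly commit to an on-screen goal whose Q-map value is neither too high nor too low; otherwise act greedily with the task DQN) can be sketched as below. This is only an illustrative sketch of that procedure, not the authors' released code; the function name, the `(low, high)` thresholds, and the goal-dropping rule are assumptions added for readability.

```python
import numpy as np

def qmap_exploration_action(q_map, dqn_q_values, eps_r, eps_g, current_goal,
                            low=0.6, high=0.9, rng=np.random):
    """One action-selection step of the Q-map exploration sketch.

    q_map        : array (H, W, A) of clipped/discounted reachability values in [0, 1]
    dqn_q_values : array (A,) of task Q-values from the DQN head
    current_goal : (row, col) tuple of the goal being pursued, or None
    """
    num_actions = dqn_q_values.shape[0]

    # (a) plain random action with small probability eps_r
    if rng.random() < eps_r:
        return rng.randint(num_actions), current_goal

    # (b) if no goal is active, possibly pick a new one with probability eps_g:
    #     candidates are cells whose best reachability value is neither too high
    #     (trivially close) nor too low (practically unreachable)
    if current_goal is None and rng.random() < eps_g:
        best_per_cell = q_map.max(axis=-1)                    # (H, W)
        rows, cols = np.where((best_per_cell > low) & (best_per_cell < high))
        if len(rows) > 0:
            idx = rng.randint(len(rows))
            current_goal = (rows[idx], cols[idx])

    # follow the active goal greedily according to the Q-map ...
    if current_goal is not None:
        r, c = current_goal
        action = int(np.argmax(q_map[r, c]))
        # ... and drop the goal once it is considered reached (assumed rule)
        if q_map[r, c, action] > high:
            current_goal = None
        return action, current_goal

    # (c) otherwise exploit: greedy action of the task DQN
    return int(np.argmax(dqn_q_values)), current_goal
```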
ByxmXnA9FQ
A Variational Dirichlet Framework for Out-of-Distribution Detection
[ "Wenhu Chen", "Yilin Shen", "William Wang", "Hongxia Jin" ]
With the recent rapid development of deep learning, deep neural networks have been widely adopted in many real-life applications. However, deep neural networks are also known to have very little control over their uncertainty for test examples, which can lead to harmful consequences in practical scenarios. In this paper, we are particularly interested in designing a higher-order uncertainty metric for deep neural networks and investigate its performance on the out-of-distribution detection task proposed by~\cite{hendrycks2016baseline}. Our method first assumes there exists an underlying higher-order distribution $\mathcal{P}(z)$, which generates the label-wise distribution $\mathcal{P}(y)$ over classes on the K-dimensional simplex, then approximates this higher-order distribution via a parameterized posterior function $p_{\theta}(z|x)$ under the variational inference framework, and finally uses the entropy of the learned posterior distribution $p_{\theta}(z|x)$ as an uncertainty measure to detect out-of-distribution examples. However, we identify an overwhelming over-concentration issue in such a framework, which greatly hinders detection performance. Therefore, we further design a log-smoothing function to alleviate this issue and greatly increase the robustness of the proposed entropy-based uncertainty measure. Through comprehensive experiments on various datasets and architectures, our proposed variational Dirichlet framework with the entropy-based uncertainty measure consistently yields significant improvements over many baseline systems.
[ "out-of-distribution detection", "variational inference", "Dirichlet distribution", "deep learning", "uncertainty measure" ]
https://openreview.net/pdf?id=ByxmXnA9FQ
https://openreview.net/forum?id=ByxmXnA9FQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJxjVto0JN", "SkxJntDG1V", "r1lPrDdk1E", "HylthcXjA7", "Skx98khBCX", "r1gofqwBR7", "H1gx-5vSAQ", "HJx_JxfNRQ", "rJx-4Lt6nX", "HJghUGPw2Q", "rkxmjBjz3Q", "Byl7LG6Vi7", "S1eBylRZiX", "Syxt4Id3cm", "HyxxfMDscX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment", "official_comment", "comment" ], "note_created": [ 1544628531469, 1543825830952, 1543632703020, 1543350961172, 1542991697779, 1542973971209, 1542973944400, 1542885343900, 1541408297327, 1541005908080, 1540695450679, 1539785290617, 1539592156959, 1539241521256, 1539170824059 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1345/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1345/Authors" ], [ "ICLR.cc/2019/Conference/Paper1345/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1345/Authors" ], [ "ICLR.cc/2019/Conference/Paper1345/Authors" ], [ "ICLR.cc/2019/Conference/Paper1345/Authors" ], [ "ICLR.cc/2019/Conference/Paper1345/Authors" ], [ "ICLR.cc/2019/Conference/Paper1345/Authors" ], [ "ICLR.cc/2019/Conference/Paper1345/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1345/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1345/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1345/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1345/Authors" ], [ "~Andrey_Malinin1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a new framework for out-of-distribution detection, based on variational inference and a prior Dirichlet distribution.\", \"the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"(1) arguable and not well justified choices of parameters and (2) the performance degradation under many classes (e.g., CIFAR-100).\\n\\nFor (2), the authors mentioned that this is because \\\"there are more than 20% of misclassified test images\\\". But, AC rather views it as a limitation of the proposed approach. The out-of-detection detection problem is a one or two classification task, independent of how many classes exist in the neural classifier.\\n\\nIn overall, the proposed idea is interesting and makes sense but AC decided that the authors need more significant works to publish the work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Arguable choices of parameters and the performance degradation issue\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"1. My design is inspired by \\\"Evidential Deep Learning to Quantify Classification Uncertainty\\\" (equation 9). Maybe it is better to revise it back to the concentration \\\"clipping\\\" version so that the uniform prior is not dependent on the data.\\n\\n2. Due to the regularization, most of the concentration parameters are around 1.0, while there are certain dimensions adopting extremely large values like 1000+, for examples, an image (label=3) could output its Dirichlet concentration parameter as [1.0, 1.1, 200, 1.0, 1500], the mode of such Dirichlet is [0, 0.00005, 0.11, 0, 0.88], this distribution is extremely sharp at the corner (edge) between class 3 and class 5. Such over-confidence on the misclassified example makes the model very sensitive, by adopting a log smoothing, the distribution becomes [0.6, 0.7, 5.3, 0.6, 7.3], which is way less sharp than the previous one. 
Such confidence decreasing is demonstrated to have a stronger impact on the out-distribution examples than in-distribution examples, thus able to better separate these two input sources. \\n\\n3. We will correct the minors in the revision.\"}", "{\"title\": \"Thanks for clarification\", \"comment\": \"I notice you have made some changes to the paper. The new version is clearer.\\n\\nPrediction/truth-preserved Prior: Conceptually I am not sure if you still want to call that prior if it is constructed based on data. But I think it makes sense that you want to maintain label-prediction while increasing/maximizing entropy. \\n\\n--\\\"By investigating the magnitude distribution of concentration parameter \\u03b1 for in-distribution test cases, we can see that \\u03b1 is either adopting the prior \\u03b1 = 1.0 or adopting a very large value \\u03b1 \\u226b 1.0. In order words, the Dirichlet distribution is heavily concentrated at a corner of the simplex regardless of whether the inputs\\\"\\nIf sometimes \\u03b1 can sometimes be adopted to be 1.0, why do you say it is *always* heavily concentrated at a corner?\", \"minor\": \"\\\"Lowe-order\\\" in Figure 1\\n\\\"*-preserving priors\\\" before figure 3\", \"figure_4\": \"labels are too small to read\"}", "{\"title\": \"Summary of Revision\", \"comment\": \"We have submitted a revised manuscript and made the following modifications to address the reviewers' major concerns:\\n\\n-- Add detailed explanation about what's lower-level uncertainty and what's higher-level uncertainty, why the higher-level uncertainty is better.\\n-- Add detailed model definition to follow variational inference nomenclature.\\n-- Add the derivation of our proposed evidence lower bound to Appendix.\\n-- Rewrite the prior part, the previous is to clip concentration in a certain dimension, the current version is to raise the uniform Dirichlet in a certain dimension. Though achieving the same effect, we believe such a change makes the paper easier to be understood.\\n-- Add more ablation studies to verify different prior functions and different smoothing functions to see their impact on the detection accuracy.\\n-- Update references according to Reviewer2 and Malinin.\\n-- Shorten the paper to fit the 8-page standard.\\n-- Fix many typos.\\n\\nWhile limited by time in the response period, we do still plan to address *all* the reviewer\\u2019s other additional comments in future revisions. We also welcome any further feedback to improve this paper!\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"First of all, we really appreciate your useful suggestions. We have revised the paper to make it more clear and principled based on your suggestions.\", \"q\": \"Theoretical ground, why it is better than previous ICLR papers?\\nThis is really a good question, I think there are mainly two reasons. First of all, we manage to separate the higher-level uncertainty from lower-level uncertainty, identifying the uncertainty sources can help to distinguish between \\\"noisy in-domain example\\\" (due to lower-level uncertainty) and \\\"out-domain example\\\" (due to higher-level uncertainty). Secondly, our proposed smoothing algorithm can kind of alleviate the over-confidence issue in model training, thus making the model more robust to out-domain examples. 
These details are better demonstrated in the Introduction section in the current revision.\"}", "{\"title\": \"Second Part due to comment limit\", \"comment\": \"Q: \\\"why some methods appear in some tables and not in other\\\"\\nHere we mainly copy the results reported in the original paper to ensure fairness, the previous papers use different architectures, therefore some results are missing. Specifically for CIFAR100, the previous papers like Learning-Confidence (Devries et al.), Adversarial Training (Lee et al.) and Deep Prior Network (Malinin et al.) did not report their results on such really challenging dataset. The most recently reported results are from ODIN (Liang et al.) and Semantic (Shalev et al.), therefore only these two baselines are compared in the more challenging CIFAR100 setting. The inferior performance on CIFAR100 is hard because there are more than 20% of misclassified test images, which is really hard to be distinguished from out-of-distribution images.\", \"q\": \"Page Limit\\nThank you for your remind, we already restructured the paper to fit exactly 8 pages, some parts are moved to the appendix. We really hope you can carefully read our revision again.\"}", "{\"title\": \"Thank you for your feedback\", \"comment\": \"First of all, I would like to thank you for your constructive feedback. We revise the paper a lot based on your helpful suggestions.\", \"q\": \"The smoothing function\\nThe smoothing function is to lower model's overconfidence on unseen samples so that the model can better detect the abnormal instances. For example, an unseen image whose label should be 2, but the model outputs [0.5, 0.1, 10, 100], the log smoothing can scale it to [0.4, 0.1, 2.3, 4.6], which greatly lowers model's confidence on its own prediction (y=3) while maintaining the other dimensions. Such smoothing function can increase the detection accuracy a lot, the details are shown in ablation study in Fig 5. In our revision, we also experimented with many other smoothing functions to investigate what is the essential property a smoothing function needs to make the model more robust against outliers, the results are shown in Fig 7. The same goes with input perturbation, which is used to enlarge the distance between normal instances and abnormal instances so that the model can better detect the abnormal ones.\"}", "{\"title\": \"Thank you so much for your constructive feedback\", \"comment\": \"First of all, I would like to thank you for your constructive feedback. Most of our revision is based on your helpful suggestions.\\n1. We totally agree with you on the NN setting and revise the introduction and model part, where we view the neural network only as a general approximation function to generate concentration parameters.\\n2. We follow the usual nomenclature in variational inference to revise the model definition part. In fig3, we specifically define the x,y,z and talk about their connections, the current prior distribution does not contain $x$ as input. \\n3. We write about the statistical process in the model section and define the three probabilities, and we demonstrate the derivation process in appendix eq10.\\n4. About the clipping part: sorry for the misunderstanding, the original version actually means clipping the groundtruth dimension of the predicted concentration to 1 while maintaining the rest dimensions. 
For example, for y=2 and concentration [0.6, 5, 1.2, 8], clipping will change it to [0.6, 1, 1.2, 8] so that the goundtruth dimension does not contribute to the KL-divergence with U=[1, 1, 1, 1]. The motivation of clipping is to give the model one dimension of freedom. In the new revision, we rewrite this part in a more principled way. Instead of clipping the concentration, we raise the prior uniform concentration in a certain dimension, for example [1, 1, 1, 1] will become [1, 5, 1, 1], which have the same goal of allowing one dimension of freedom in the concentration parameter. More importantly, we design different prior functions and perform ablation study to investigate their influences on the final results.\\n5. We change some tables into bar charts to better visualize the results.\\n\\nAgain, thank you for your feedback and hope you can read our new revision.\"}", "{\"title\": \"Bayesian reasoning about DNN outcome\", \"review\": \"Summary\\n=========\\nThe paper describes a probabilistic approach to quantifying uncertainty in DNN classification tasks.\\nTo this end, the author formulate a DNN with a probabilistic output layer that outputs a multinomial over the\\npossible classes and is equipped with a Dirichlet prior distribution.\\nThey show that their approach outperforms other SOTA methods in the task of out-of-distribution detection.\\n\\nReview\\n=========\\nOverall, I find the idea compelling to treat the network outputs as samples from a probability distribution and\\nconsequently reason about network uncertainty by analyzing it.\\nAs the authors tackle a discrete classification problem, it is natural to view training outcomes as samples from\\na multinomial distribution that is then equipped with its conjugate prior, a Dirichlet.\\n\\nHowever, the model definition needs clarification. In the classical NN setting, I find it misleading\\nto speak of output distributions (here called p(x)). As the authors point out, NNs are deterministic function approximators\\nand thus produce deterministic output, i.e. rather a function f(x) that is not necessarily a distribution (although can be interpreted as a probability).\\nOne could then go on to define a latent multinomial distribution over classes p(z|phi) instead that is parameterized by a NN, i.e. phi = f_theta(x).\\nThe prior on p(phi) would then be a Dirichlet and consequently the posterior is Dirichlet as well.\\nThe prior distribution should not be dependent on data x (as is defined below Eq. 1).\\n\\nThe whole model description does not always follow the usual nomenclature, which made it at times hard for me to grasp the idea.\\nFor instance, the space that is modeled by the Dirichlet is called a simplex. The generative procedure, i.e. how does data y constructed from data x and the probabilistic procedure, is missing.\\nThe inference procedure of minimizing the KL between approximation and posterior is just briefly described and could be a hurdle to understand, how the approach works when someone is unfamiliar with variational inference.\\nThis includes a proper definition of prior, likelihood and resulting posterior (e.g. 
with a full derivation in an appendix).\\n\\nAlthough the authors stress the importance of the approach to clip the Dirichlet parameters, I am still a bit confused on what the implications of this step are.\\nAs I understood it, they clip parameters to a value of one as soon as they are greater than one.\\nThis would always degrade an informative distribution to a uniform distribution on the simplex, regardless whether the parameters favor a dense or sparse multinomial.\\nI find this an odd behavior and would suggest, the authors comment on what they mean with an \\\"appropriate prior\\\". Usually, the parameters of the prior are fixed (e.g. with values lower one if one expects a sparse multinomial).\\nThe prior then gets updated through the data/likelihood (here, a parameterized NN) into the posterior.\\n\\nClipping would also lead to the KL term in Eq. 3 to be 0 often times, as the Dir(z|\\\\alpha_c) often degrades to Dir(z|U).\\n\\nThe experiments are useful to demonstrate the application and usefulness of the approach. \\nOutcome in table 3 could maybe be better depicted using bar charts, results from table 4 can be reported as text only, which would free up space for a more thorough model definition.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Not well motivated.\", \"review\": \"This paper proposes a new framework for out-of-distribution detection, based on variational inference and a prior Dirichlet distribution. The Dirichlet distribution is presented, and the way it is used withing the method is discussed (i.e. clipping, scaling, etc). Experiments on several datasets and comparison with the state of the art is reported and extensively discussed.\\n\\nThe motivation of the proposed approach is clear, and I agree with the attempt to regularize the network output. The choice of the Dirichlet distribution is quite natural, as each of its samples are the prior weights of a multinomial distribution. However, some other choices are less clear (clipping, scaling, decision, and the use of a non-informative prior). The overall inference procedure appears to be advantageous in the many experiments that the authors report (several datasets, and several baselines).\\n\\nThe first thought that came to my mind, is that out-of-distribution detection is to classification what outlier detection is to regression. Therefore, relevant and recent work on the topic deserves to be cited, for instance:\\nS. Lathuiliere, P. Mesejo, X. Alameda-Pineda and R. Horaud, DeepGUM: Learning Deep Robust Regression with a Gaussian-Uniform Mixture Model, In ECCV, 2018.\\n\\nOne thing that I found quite strange at first sight is the choice of clipping the parameters of the Dirichlet distribution. It is said that this is done in order to choose an appropriate prior distribution. However, the choice is not very well motivated, because what \\\"appropriate\\\" means is not discussed. So why do you clip to 1? What would happen if some of the alpha's go over 1? Is it a numerical problem, a modeling problem, a convergence issue?\\n\\nI would also like the authors to comment on the use of a non-informative Dirichlet distribution within the KL-divergence. The KL divergence measures the deviation between the approximate a posteriori distribution and the true one. 
If one selects the non-informative Dirichlet distribution, this is not only a brutal approximation of the true posterior, but most importantly a distribution that does not depend on x, and that therefore cannot be truly called posterior.\\n\\nIt is also strange to take a decision based on the maximum alpha. On the contrary, the smallest alpha should be taken, since it is the one that concentrates more probability mass to the associated corner in the simplex.\\n\\nRegarding the scaling function, it is difficult to grasp its motivation and effects. It is annouced that the aim of the smoothing function is to smooth the concentration parameters alpha. But in what sense? Why do they need to be smoothed? Is this done to avoid numerical/convergence problems? Is this a key part of the model? The same stands, by the way, for the form of the input perturbation.\\n\\nThe experiments are plenty, and I appreciated the sanity check done after introducing the datasets. However, I did not manage to understand why some methods appear in some tables and not in other (for example \\\"Semantic\\\"). I also have the feeling that the authors could have chosen CIFAR-100 in Table 2, since most of the metrics reported are quite high (the problems are not much challenging).\\n\\nRegarding the parameter eta, I would say that its discussion right after Table 3 is not long enough. Specially, given the high sensitivity of this parameter, as reported in the Table of Figure 4. What is the overall interpretation of this sensitivity?\\n\\nFrom a quantitative perspective, the results are impressive, since the propose methods systematically outperforms the baselines (at least the ones reported). However, since these baselines are not the same in all experiments, CIFAR-100 is not used, and the discussion of the results could be richer, I conclude that the experimental section is not too strong.\\n\\nIn addition to all my comments, I would say that the authors chose to exceed the standard limit of 8 pages. Even if this is allowed, the extra pages should be justified. I am affraid that there are many choices not well motivated, and that the discussion of the results is not informative enough. I am therefore not inclined to accept the paper as it is.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"new method approximating the distribution of classification probability\", \"review\": \"This paper provides a new method that approximates the confidence distribution of classification probability, which is useful for novelty detection. The variational inference with Dirichlet family is a natural choice.\\n\\nThough it is principally insightful to introduce the \\u201chigher-order\\u201d uncertainty, I do see the fundamental difference from the previous research on out-of-distribution detection (Liang, Lee, etc.). They are aimed at the same level of uncertainty. Consider a binary classier, the only possible distribution of output y is Bernoulli- a mixture of Bernoulli is still Bernoulli. \\n\\nIn ODIN paper, the detector contains both the measurement of the extent to which the largest unnormalized output of the neural network deviates from the remaining outputs (U1 in their notation) and another measurement of the extent to which the remaining smaller outputs deviate from each other (U2 in their notation). 
In this paper, the entropy term has the same flavor as U2 part?\\n\\nI am a little bit concerned with the VI approach, which introduces extra uncertainty. I do not understand why there is another balancing factor eta in equation 6, which makes it no longer a valid elbo. Is the ultimate goal to estimate the exact posterior distribution of p(z) through VI, or purely some weak regularization that enforces uniformity? Could you take advantage of some recent development on VI diagnostics and quantify how good the variational approximation is?\\n\\nIn general, the paper is clear and well-motivated, but I find the notation sometimes confusing and inconsistent. For example, the dependency on x and D is included somewhere but depressed somewhere else. alpha_0 appears in equation 4 but it is defined in equation 7. \\n\\nI am impressed by the experiment result that the new method almost always dominates best known methods, previously published in ICLR 2018. But I am not fully convinced why it works theoretically. I would recommend a weak accept.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Thank you for your kindly remind\", \"comment\": \"Thank you so much for your careful and insightful comments. I totally agree with you, the current ablation set up is a bit problematic. Therefore, we add a few experiments to demonstrate the following results:\\n\\nDirichlet + Perturbation:\\nModel OOD FPR Detection Error\\nVGG13 iSUN 16.8 8.5 \\nCIFAR10 LSUN 15.2 8.6\\n Tiny-IM 20.3 9.9\\n\\nDirichlet + Smoothing:\\nModel OOD FPR Detection Error\\nVGG13 iSUN 14.4 7.9 \\nCIFAR10 LSUN 13.8 7.7\\n Tiny-IM 18.9 9.1\\n\\nFrom the above observation, it's probably safer to claim that Dirichlet smoothing strategy is marginally more important than Input perturbation technique. Thank you for your kind remind, we will refine this part in the future revision.\"}", "{\"comment\": \"Hi, thanks for the nice paper.\\n\\nI have a questions on ablation studies in Table 4.\\n\\n>>We also observe that concentration smoothing is playing a more important role than input perturbation.\\n\\nI think it unfair to compare \\\"concentration smoothing\\\" with \\\"input perturbation\\\". Is it necessary to design the Dirichlet+Perturbation experiment for each data sets ?\", \"title\": \"Questions on ablation studies\"}", "{\"title\": \"Thank you for your insightful comments\", \"comment\": \"Hi Andrey,\\n\\nThank you so much for your careful and insightful comments. Before I respond to your comment, I hope to clarify that I wasn't aware of the existence of your paper before our submission, it was only a few days after the deadline that I found your paper through the reference of \\\"https://openreview.net/forum?id=H1gh_sC9tm\\\". After carefully reading your paper, I found it is really interesting and elegant, especially you have conducted comprehensive experiments on different uncertainty measures. I'm sorry that we didn't cite your paper in the current version, but we will definitely put your paper in our citation list in the future revision and compare against your results to gain a deeper understanding.\", \"now_i_would_like_to_answer_your_comments\": \"1. We indeed try to interpret the problem from two different angles, but we end up having the same architecture. 
I think it's probably due to the fact that Dirichlet seems to be the only \\\"appropriate\\\" choice for prior or posterior distribution (it has so many well-studied properties like entropy, variance, mean, etc).\\n2. I think this is one of the main differences between our papers. My method is based on ELBO, which only depends on the in-domain dataset and your method takes the contrastive loss, which depends on both in-domain and out-domain dataset (though the out-domain dataset can be synthesized). \\n3. Yes, clipping and smoothing are the two weapons (tricks) to make our method work without providing any out-domain dataset. Because the model never gets the chance to see the adversarial (out-domain) examples under our setting, it is very inclined to put extremely high confidence in its own beliefs. In order to alleviate such an issue, we are inspired by the distribution calibration technique in the statistical theory (http://scikit-learn.org/stable/modules/calibration.html) to use transform function to re-adjust the model's belief in a more rational range. We experimented with several different smoothing techniques and end up having the simple log(x+1) as our calibration function. \\n4. I really like your theoretical explanation for different uncertainty measures. In our framework, we actually experimented with some intuitive uncertainty measure like low-level entropy (over label) and max value, but their results turn out to be worse than our baseline (ODIN), therefore we just leave them out from the paper due to the page limit. Actually, we did compare against two different uncertainty measures (variational ratio (BNN) in table2, and evidential-Dirichlet uncertainty measure in table4). Anyway, I will list the results of other uncertainty measures in our future revision. \\n\\nAgain, I'm really grateful for your helpful discussion!\\n\\nBest regards.\"}", "{\"comment\": \"Hello!\", \"your_work_is_similar_to_our_work_which_is_due_to_appear_at_nips_2018____https\": \"//arxiv.org/pdf/1802.10501.pdf\\n\\nBoth our works parameterise a distribution over distributions using a DNN in order to derive measures of uncertainty in predictions for detection of out-of-distribution samples.\\n\\nAs far as I understand, the main differences between your work and ours are the following:\\n\\n1. You interpret the the model to have latent variables which capture the distribution over distributions, while we interpret the model to be directly parameterizing a distribution over distributions. Though in the end the model architectures are essentially the same (DNN parameterises Dirichlet).\\n2. You train the model using ELBO using only in-domain data while we train the model using a contrastive KL-Divergence loss using in-domain and out-of-distribution data. \\n3. You use additional heuristics, like clipping and smoothing the alphas, in order to get a well-behaved model.\\n4. You investigate only the Differential Entropy of the Dirichlet as a measure of 'higher level uncertainty' while we investigate a range of uncertainty measures, derivable from distributions over distributions, which capture uncertainty in predictions due to different sources of uncertainty (data/distributional/model uncertainty).\\n\\nPlease correct me if I have misunderstood anything. I find your paper to be very interesting. Specifically, I find it impressive that you are able to achieve very good empirical results without out-of-distribution training data! 
Smoothing the alphas also sounds like a good idea :) .\\n\\nBest Regards,\\nAndrey Malinin\", \"title\": \"Similar work\"}" ] }
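For context on the record above: the uncertainty measure discussed there — the differential entropy of the predicted Dirichlet, computed on log-smoothed concentration parameters so that over-confident outputs such as [0.5, 0.1, 10, 100] are rescaled before scoring — can be written compactly using standard formulas. The sketch below is illustrative only, not the authors' code; the function names and the choice to report the raw entropy as the score are assumptions.

```python
import numpy as np
from scipy.special import gammaln, digamma

def dirichlet_entropy(alpha):
    """Differential entropy of Dirichlet(alpha); alpha has shape (..., K)."""
    alpha = np.asarray(alpha, dtype=np.float64)
    a0 = alpha.sum(axis=-1)
    K = alpha.shape[-1]
    log_beta = gammaln(alpha).sum(axis=-1) - gammaln(a0)
    return (log_beta
            + (a0 - K) * digamma(a0)
            - ((alpha - 1.0) * digamma(alpha)).sum(axis=-1))

def ood_score(alpha, smooth=True):
    """Higher entropy => flatter Dirichlet => more likely out-of-distribution."""
    if smooth:
        # log-smoothing: e.g. [0.5, 0.1, 10, 100] -> ~[0.41, 0.10, 2.40, 4.62]
        alpha = np.log1p(alpha)
    return dirichlet_entropy(alpha)

# toy usage: a confident prediction versus a flat concentration vector
print(ood_score(np.array([0.5, 0.1, 10.0, 100.0])))  # low entropy (confident)
print(ood_score(np.array([1.0, 1.0, 1.0, 1.0])))     # higher entropy (flat)
```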
BJxmXhRcK7
TENSOR RING NETS ADAPTED DEEP MULTI-TASK LEARNING
[ "Xinqi Chen", "Ming Hou", "Guoxu Zhou", "Qibin Zhao" ]
Recent deep multi-task learning (MTL) has witnessed success in alleviating data scarcity for some tasks by utilizing domain-specific knowledge from related tasks. Nonetheless, several major issues of deep MTL, including the effectiveness of sharing mechanisms, the efficiency of model complexity and the flexibility of network architectures, still remain largely unaddressed. To this end, we propose a novel generalized latent-subspace based knowledge-sharing mechanism for linking task-specific models, namely tensor ring multi-task learning (TRMTL). TRMTL has a highly compact representation, and it is very effective in transferring task-invariant knowledge while being highly flexible in learning task-specific features, successfully mitigating the dilemma of both negative transfer in lower layers and under-transfer in higher layers. Under our TRMTL, it is feasible for each task to have heterogeneous input data dimensionality or distinct feature sizes at different hidden layers. Experiments on a variety of datasets demonstrate that our model significantly improves each individual task's performance and is particularly favourable in scenarios where some of the tasks have insufficient data.
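For context on the tensor-ring (TR) representation the abstract relies on, the sketch below reconstructs a tensor from TR cores and builds two task-specific weight tensors that share their first c cores, which is the latent-subspace sharing idea described above. The shapes, TR-rank, and sharing point c are illustrative assumptions, not values from the paper, and the naive full reconstruction is for demonstration only.

```python
import numpy as np

def tr_to_tensor(cores):
    """Reconstruct a full tensor from tensor-ring cores.

    cores[k] has shape (r_k, n_k, r_{k+1}), with the last rank wrapping back to
    the first, and T[i_1, ..., i_d] = trace(G_1[:, i_1, :] @ ... @ G_d[:, i_d, :]).
    Naive O(prod(n_k)) reconstruction, fine for a toy example.
    """
    dims = [c.shape[1] for c in cores]
    out = np.empty(dims)
    for idx in np.ndindex(*dims):
        mat = cores[0][:, idx[0], :]
        for k in range(1, len(cores)):
            mat = mat @ cores[k][:, idx[k], :]
        out[idx] = np.trace(mat)
    return out

# Two tasks share the first `c` cores (common latent subspace) and keep the
# rest private; hypothetical 4th-order weight tensor with uniform TR-rank 3.
rng = np.random.default_rng(0)
shapes, rank, c = [4, 4, 4, 4], 3, 2
shared = [rng.standard_normal((rank, n, rank)) for n in shapes[:c]]
task_a = shared + [rng.standard_normal((rank, n, rank)) for n in shapes[c:]]
task_b = shared + [rng.standard_normal((rank, n, rank)) for n in shapes[c:]]
W_a, W_b = tr_to_tensor(task_a), tr_to_tensor(task_b)  # e.g. reshape to (16, 16)
```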
[ "deep learning", "deep multi-task learning", "tensor factorization", "tensor ring nets" ]
https://openreview.net/pdf?id=BJxmXhRcK7
https://openreview.net/forum?id=BJxmXhRcK7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SkgvVInfgV", "SJg6gJbN1V", "rJxhhaVQk4", "r1x1qqWYCX", "r1eCaY-Y0X", "rkly-fbYAQ", "rJgp8MmuCX", "HJe-_2fOAX", "BJeJCol32X", "H1er7SL92Q", "r1xm4z8cnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544893999368, 1543929588882, 1543880116283, 1543211655360, 1543211461796, 1543209463488, 1543152212625, 1543150696779, 1541307334893, 1541199133461, 1541198379360 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1344/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1344/Authors" ], [ "ICLR.cc/2019/Conference/Paper1344/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1344/Authors" ], [ "ICLR.cc/2019/Conference/Paper1344/Authors" ], [ "ICLR.cc/2019/Conference/Paper1344/Authors" ], [ "ICLR.cc/2019/Conference/Paper1344/Authors" ], [ "ICLR.cc/2019/Conference/Paper1344/Authors" ], [ "ICLR.cc/2019/Conference/Paper1344/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1344/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1344/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"AR1 is concerned about the poor organisation of this paper. AR2 is concerned about the similarity between TRL and TR. The authors show some empirical results to support their intuition, however, no theoretical guarantees are provided regarding TRL superiority. Moreover, experiments for the Taskonomy dataset as well as on RNN have not been demonstrated, thus AR2 did not increase his/her score. AR3 is the most critical and finds the clarity and explanations not ready for publication.\\n\\nAC agrees with the reviewers in that the proposed idea has some merits, e.g. the reduction in the number of parameters seem a good point of this idea. However, reviewer urges the authors to seek non-trivial theoretical analysis for this method. Otherwise, it indeed is just an intelligent application paper and, as such, it cannot be accepted to ICLR.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Numerous concerns.\"}", "{\"title\": \"The new perspective of our contribution and novelty\", \"comment\": \"Thank you for your response. We provide totally different perspective of our contribution and novelty. Please give us one more opportunity by reading the following explanation:\\n\\n1 ''(1) Novelty is somewhat limited and incremental.''\\nThe most significant novelty of our paper is the information sharing mechanisim in deep multitask learning. A major challenging problem in all existing deep multitask learning methods is that they cannot handle heterogeneous dataset and neural network structure (i.e., different dimensions of inputs and all layers) corresponding to the different tasks. Our method solves this problem by using tensor ring representation of model parameters and then information sharing on the latent space. As a consequence, our method can also provide more refined control of how much information to be shared and much reduction of model complexity. Therefore, as compared to the existing methods, our method is more effective, efficient, and flexible. \\n\\n\\n2 ''Experimental validation should be performed on a large-scale dataset ...''\\nThe contribution of this paper is not applicability in large-scale data and reduction of parameters by using TT/TR. 
We discovered another totally different advantages by using TT/TR in deep multi-task learning, which is the refined information sharing control over the latent space rather than the hard-sharing and soft-sharing mechanism. Then we can control how much information is shared between each task in more detailed scales. \\n\\n\\n3 ''Hyper-parameter sensitivity is a crucial issue.''\\nIn hard-sharing method, one hyper-parameter is the number of shared layers. In soft-sharing method, one hyperparameter is the penalty weight of regularization or the rank of task mode factor. As compared to existing deep multi-task learning method, our method also has only one hyperparameter that is how many cores are shared. Therefore, our method does not introduce more hyper-parameters. \\n\\nTT/TR is sensitive to the hyper-parameters, because their methods try to find the optimal balance between compression and performance. However, in this paper, we do not focus on the compression ability. We try to achieve the optimal performance when using information sharing on latent space. Thus, we usually fixed the TT/TR ranks to have sufficient representation ability, for example it can be the upper bound of TT/TR ranks.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for the response.\\n\\nAfter reading other reviews and the responses from authors, I still believe that the paper has some merit. Though the authors tried to justify the novelty (and they did a good job in explaining I must say), I still believe in the following cons:\\n\\n(1) Novelty is somewhat limited and incremental.\\n(2) Experimental validation should be performed on a large-scale dataset. To me, the entire ``\\\" selling pitch\\\" for TT/ TR based methods is the applicability in large-scale data. Reduction in parameter otherwise cannot be much appreciated. \\n(3) Hyperparameter sensitivity is a crucial issue. My limited experience in TT/ TR tells me that it is indeed very sensitive to \\\"tuning of hyper-parameters\\\". On top of that, this method introduces more hyper-parameters. So that's not good.\\n\\nNow that we are almost there to make the final decision, I am sorry but I have to reject the paper as after reading other reviews and some thinking, I can not see this paper in its current form as an ICLR paper. But I highly appreciate the clarification and encourage the authors to do some large-scale experiment and possibly submit to the next ML/ CV venue.\"}", "{\"title\": \"Improvement in writing, motivation and evaluation in the revised paper, to clarify the misunderstandings of the proposed method (feedback part2)\", \"comment\": \"We would like to thank the Reviewer for the insightful and constructive questions and comments.\\n\\n6. \\u201c2.2 Many claims are inaccurate or not adequately backed up \\u2026\\u201d \\nWe agree that the explanation should have been formulated more accurately. However, we like to emphasize that there are four major types of generalizations from DMTRL to our TRMTL. (Please see Section 2). Our TRMTL indeed generalizes or subsumes DMTRL-Tucker in terms of the first three types/aspects, no matter what tensor format (TT or Tucker) DMTRL may use.\\n\\n\\u201cTR-ranks are usually smaller than TT-ranks\\u2019\\u2019 is presented in tensor preliminaries section just as background knowledge. Besides, this property has already been verified in papers [Zhao16,17], which we added the citation. Verifying this property is not the focus and the objective of our work. \\n\\n7. 
''2.3 Sentences are taken verbatim from other papers, plagiarism. For example: \\u201cTR model \\u2026 \\u2018' \\nWe respectfully disagree with the Reviewer that several sentences are taken verbatim from other papers; this is clearly not the case. \\nOnly in the sentence 'TR model is more flexible than \\u2026 ' do we use the original expression, in order to convey exactly the meaning that [Zhao16] intend in their paper. This sentence appears in the tensor preliminaries section and we added [Zhao16] as a citation there. Still, it would have been better if we had used our own words.\\n\\n8. \\u201c3. Hyperparameters: This paper apparently gains some practical benefit \\u2026\\u201d\\nWe respectfully disagree with the Reviewer. Our TRMTL can effectively deal with all major challenges of current deep MTL. The benefits of TRMTL fully compensate for the downside of introducing the parameter \\u2018c\\u2019. Besides, tuning one or two hyper-parameters is quite common in deep learning research. In practice, by using some heuristics, e.g. choosing the right sharing style such as \\u2018bottom-heavy', the search space of \\u2018c' can be greatly reduced. We may also employ a greedy search on \\u2018c\\u2019 layer by layer, which makes tuning \\u2018c\\u2019 much easier.\\n\\n9. \\u201c4. Hyperparameters+Tuning: Hyperparameters Private proportion \\u2026\\\" \\nWe disagree with the Reviewer that the better performance is due to the additional tuning. In our experiment, the i/o dimension of tensorization is pre-fixed, and the location/order in which the cores of each task are arranged is also pre-fixed. We even tried empirically fixing the TR-ranks and still got fairly good performance. Besides the TR-ranks, only the sharing portion \\u2018c\\u2019 is left to tune. If we were allowed to tune all the above hyper-parameters (i.e., if all the potential flexibility of our method were exploited), our performance would be much better than reported in the paper.\\n\\nRegarding \\u201cWe test different ... with the best accuracies\\u201d, the writing should have been more precise: we meant that \\u2018c\\u2019 is tuned, and the best accuracy is reported with respect to this \\u2018c\\u2019 only.\\n\\n10. \\u201c5. Insight & Analysis. All the core selection & public/private core selection \\u2026\\\" \\nWe respectfully disagree that the core selection is treated as black-box optimization. Some insights about the core selection are shown in our experiments. In the MNIST experiment of the previous version, Figure 3 demonstrates that the style of the sharing pattern has a significant impact on performance. Within a given style category, our model is very robust to \\u2018c\\u2019 for the pattern selection. For example, in Table 1, both \\u2018410\\u2019 and \\u2018420\\u2019 obtain similarly good performance, which means a small variation in \\u2018c\\u2019 does not affect performance much, provided the right style category is determined. Also, in the CIFAR experiment in Table 2, both good patterns \\u20184421\\u2019 and \\u20184431\\u2019 belong to the \\u2019bottom-heavy\\u2019 category and achieve similarly good accuracies, but clearly outperform the bad pattern \\u20184444\\u2019, which belongs to the \\u2018balanced\\u2019 style category. 
Some remarks on the strategy for core selection are also provided in the revised version; please see Section 4.3 for details.\"}", "{\"title\": \"Improvement in writing, motivation and evaluation in the revised paper, to clarify the misunderstandings of the proposed method (feedback part1)\", \"comment\": \"We would like to thank the Reviewer for the insightful and constructive questions and comments.\\n\\n1. ''Novelty. Existing studies already established the template of different tensor \\u2026\\u2019' \\nWe respectfully disagree with the Reviewer on this comment. Firstly, to the best of our knowledge, DMTRL is the only framework that integrates tensor factorization (TT, Tucker) into deep MTL, and it does so with a highly restricted soft-sharing mechanism. Secondly, there is no good solution or template for plugging tensor factorization into different kinds of soft-sharing deep MTL. Most importantly, three major challenges posed by deep MTL remain largely unsolved: 1) the ineffectiveness of the knowledge sharing mechanism; 2) the inefficiency in the number of parameters of deep MTL models; 3) the inflexibility of network architectures in handling heterogeneous features and inputs. None of the existing deep MTL models (especially DMTRL) can handle all of the above difficulties well.\", \"our_incorporation_of_tr_into_deep_mtl_serves_for_special_purposes\": \"1) flexible/finer granularity of knowledge sharing (via tensorization); 2) higher compression ratio of deep MTL model sizes; 3) a generalized expressive power (of TT); 4) better performance.\\n\\nThis is why we come up with TRMTL, the main novelty/significance of which is that we propose a totally new generalized, highly flexible, latent-subspace knowledge sharing framework that can effectively address all these challenges. It should be noted that simply combining the DMTRL template (A) and the TR format (B) cannot achieve such goals. \\n\\n2. ''2.1.1 Paper claims the benefit that each task can have its own I/O dimensionality \\u2026\\u2019' \\nWe agree that the explanation should be clearer. The benefit/flexibility of our TRMTL for heterogeneous inputs mainly comes from our proposed architecture and sharing mechanisms, not from the TR format. In fact, our framework can accommodate more generalized tensor network formats, including TT or TR as special cases.\\n\\n3. ''2.1.2 Paper claims favourable ability to use more private cores than TT \\u2026\\u2019' \\nThe Reviewer has a misunderstanding about our framework, especially the relations between DMTRL and our TRMTL (please refer to Section 2 in the revised paper for details). DMTRL has to stack up all the equal-sized weights from different tasks, and hence has the \\u2018task\\u2019 axis. In contrast, our TRMTL decomposes each task's own weight individually, and then any subset of the cores can be shared among tasks. Therefore, it is not correct to say that TRMTL shares one core by default, because our method can even share zero cores, and there is no such \\u2019task\\u2019 axis involved. \\n\\nOur framework is so general that TT or other tensor networks can also be subsumed into our architecture. However, using TR instead of TT does bring benefits which are very much preferred by our framework, e.g., the higher compression ratio of TR (lower overall ranks via tensorization) allows for sharing among more compact (smaller-sized) cores, which has a big impact on the parameter complexity of deep MTL models.\\n\\n4. 
''2.1.3 Statement \\u201cTRMTL generalizes to allow the layer-wise weight \\u2026 \\u2018\\u2019 \\nThe sentence should have been more clear. We meant to say that our TRMTL framework generalizes DMTRL in terms of 4 major aspects. (Please see Section 2 in revised version), and one aspect of generalization is that TRMTL firstly tensorizes the weight into a much higher order weight tensor before factorizing it (In DMTRL paper, they did not employ tensorization). By doing so, the weight can be factorized into more cores than of just 3 cores for MLP (or 5 cores for CNN ) in DMTRL. \\n\\n5. \\u201c2.1.4 Statements like \\u201cTR enjoys the property of circular dimensional permutation \\u2026\\\" \\nWe present the 'circular dimensional permutation invariance\\u2019 property only in tensor preliminaries (Section 3). This property is one advantage of TR over TT, and we introduce this only as a background knowledge, which is not related to our current work.\"}", "{\"title\": \"Summary of the revision\", \"comment\": \"We thank all the reviewers for their insightful and valuable reviews. In the revised version, we have made several major revisions and improved our paper in the following aspects:\\n\\n1. Motivation\\nWe have rewritten and improved the introduction part (Section 1) so as to have a strong and clear motivation of the proposed method. \\n\\n2. Related work \\nWe have revised the related work part (Section 2) to further illustrate the essential difference between our method and DMTRL, so as to help to clarify some misunderstandings about our method. \\n\\n3. Experiment evaluation \\nWe have added the following extra experiments, in order to support the claims on the advantages/properties of our method:\\n1) experiment on tasks with heterogenous input dimensionality \\n2) experiment on tasks from multiple datasets \\n3) report the model complexity\\n\\n4. Paper structure\\nWe have reorganized our paper for a better paper structure. For example, we moved the related work part from Section 4 to Section 2; we optimized structure of the experiment section (Section 5) to better highlight the key properties of the proposed method.\\n \\n5. Paper writing\\nWe have fixed the typos, and improved the overall quality in writing. For example, in tensor preliminaries part (Section 3), we give a clearer presentation of the TR background knowledge, so as not to misunderstand between [Zhao16,17]'s contribution/focus and ours.\"}", "{\"title\": \"TRMTL looks 'simple' but is essentially nontrivial, a fairly nice solution to all three major challenges of deep MTL\", \"comment\": \"We would like to thank the Reviewer for the insightful and constructive suggestions and comments.\\n1. ''As to my knowledge ... it's a very \\\"simple\\u201d extension\\u2019\\u2019\\u00a0\\nWe thank the Reviewer for pointing out our main contribution that we propose a new generalized highly flexible latent-subspace based knowledge sharing mechanism.\", \"there_are_major_challenges_in_deep_mtl_that_remains_largely_unaddressed\": \"1) the ineffectiveness of knowledge sharing mechanism\\u00a0 2) the inefficiency of size of parameters for deep MTL model\\u00a0 3) the inflexibility of network architectures to handle heterogeneous features and inputs.\\nAlthough DMTRL has used TT/Tucker factorization, their model is rather restricted and unable to solve these issues. None of existing deep MTL models can well handle these difficulties (please see the revised paper for details ). 
Our extension looks 'simple' but is essentially nontrivial, because by using the proposed architecture, we can provide a fairly nice solution to all three major challenges.\\n\\n2. ''Though authors called ... is just an indexing scheme so essentially the same idea of TR.\\u2019'\\u00a0\\nWe agree with the Reviewer that TRL is based on and very similar to TR, but we respectfully disagree that TRL is simply an indexing scheme. The reasons are described as follows.\\nTensorization ('indexing scheme') in TRL plays a key role in our proposed sharing mechanism. Due to the tensorization, we can share the cores in a much finer granularity. Moreover, tensorization in TRL can lead to more compact tensor network representation (with lower ranks), and thus a higher compression ratio for the parameters. (please see section 5.1 in the revised version for the comparison of model complexity, where DMTRL without tensoriztion versus ours with tensorization)\\n\\n3. ''I wonder why ... looks very out of the place discussion.\\u2019'\\u00a0\\nWe mention this because this is variant of TRL for CNN setting, since in our experiments, we share the filters/kernels (itself is a 4th order tensor H x W x U x V) in CNN instead of sharing weight matrices in MLP between tasks. The two sharing settings are essentially the same but with slightly different formulation.\\n\\n\\n4. ''I suggest ...make the shareable cores not adjacent in Eq. (4) as they claimed\\u2019\\u2019\\u00a0\\n\\u00a0We thank the Reviewer for the nice suggestion, which is much better to make sharable cores not adjacent, we have updated it in the received version.\\n\\n\\n5. ''The experiments are somewhat ''simplistic\\\" \\u2026\\u2019' \\u00a0\\nWe agree with the Reviewer that more experiments should be done on larger dataset, e.g. Taskonomy data. The Taskonomy data is somewhat huge and we have some difficulty to get access to it during this rebuttal, but we will try to add it in the next version. However, in this revised version, we have conducted more experiments to further validate the merits of our TRMTL. In section 5.3, we have tested on tasks with heterogeneous input dimensions. In section 5.4, we applied our method to multiple datasets settings, where tasks could be loosely related.\\u00a0\\n\\n\\n6. ''Can the authors comment on the number of parameters used?\\u2019'\\u00a0\\nWe reports the number of parameters for different models in section 5.1 to demonstrate the compactness of our model. Overall, STL (6060K) has enormous parameters; MRN (3096K) have huge number of parameters, since they share weights in the original space. DMTRL Tucker (1194K) and TT (1522K) also have large parameters as models does not employ tensorization and just decomposes the stacked weight tensor of original order. In contrast, our TRMTL only uses 13K parameters, which is about 100 times fewer than DMTRL. Our model first tensorizes the weight into a much higher order tensor before factorizing it. By doing so, the weights can be represented into larger number of cores but with much lower ranks via TRL, which yields a highly compact model.\\u00a0\\n\\n\\n7. ''I wonder if the author can show some RNN \\u2026 \\u2018' \\u00a0\\nWe thank the Reviewer for the RNN advice. It would be interesting to see how method works on tasks with sequence data. However, such an experiment is not trial and will take some time,\\u00a0 since all the current compared methods are only designed for MLP/CNN. We will try to add this in future version.\\n\\n\\n8. 
''I believe the authors should comment on \\u2026\\u2019\\u2019\\u00a0\\nIn the revised version, some remarks or discussions are given in section 4.3. In current work, the location of shared cores are arranged in a left-to-right order, since there is natural intuition on the connection between cores and image resolution [Zhao et al 17], in which the first core mainly controls the small patches while the last core affects the large patches. In this experiment, we preferentially share the features from detailed scale to coarse scale (share the cores from left to right). By fixing that, we only use \\u2018c\\u2019 to control the fraction of sharing. In future work, we plan to automatically select shared core pairs with highest similarity between tasks.\"}", "{\"title\": \"improvement with a clear organization and a better motivation\", \"comment\": \"We would like to thank the reviewer for the insightful and constructive suggestions and comments.\\n\\n1. ''This paper discusses ..., while the prior work is in section 4, seems backwards.'' \\nWe agree that the paper is not well organized. In our revised version, we have reorganized our paper by putting the prior work in Section 2. We have revised introduction part in Section 1 with a much clearer motivation for our work. The Section 3 gives a brief tensor background knowledge without mathematics. The proposed method with its mathematical formulation is given in Section 4. \\nPlease note that we have completely rewritten the introduction part (Section 1) in order to give a strong and clear motivation of the proposed method. Please refer to the revised paper for more details.\\n\\n2. ''Please reference the first papers to employ tensor decompositions for imaging\\u2019' \\nThe mentioned papers are the very original and popular work that successfully apply tensor decomposition to imaging analysis and also CV. We have referenced these work in the revised version.\"}", "{\"title\": \"Poorly organized, poorly motivated paper.\", \"review\": \"Summary: The authors propose tensor ring nets for multi-task learning\", \"cons\": \"This is a poorly organized paper and poorly motivated.\\nThis paper discusses relevant mathematics with no motivation in section 2, while the prior work is in section 4. Seems backward.\\n\\nPlease reference the first papers to employ tensor decompositions for imaging.\\n\\nM. A. O. Vasilescu, D. Terzopoulos, \\\"Multilinear Analysis of Image Ensembles: TensorFaces,\\\" Proc. 7th European Conference on Computer Vision (ECCV'02), Copenhagen, Denmark, May, 2002, in Computer Vision -- ECCV 2002, Lecture Notes in Computer Science, Vol. 2350, A. Heyden et al. (Eds.), Springer-Verlag, Berlin, 2002, 447-460. \\n\\n M. A. O. Vasilescu, D. Terzopoulos, \\\"Multilinear Subspace Analysis for Image Ensembles,'' Proc. Computer Vision and Pattern Recognition Conf. (CVPR '03), Vol.2, Madison, WI, June, 2003, 93-99. \\n\\nM.A.O. Vasilescu, \\\"Multilinear Projection for Face Recognition via Canonical Decomposition \\\", In Proc. Face and Gesture Conf. (FG'11), 476-483.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Simple idea with interesting results\", \"review\": \"The novelty and experiments are somewhat limited. 
Thus I am lowering my score.\\n------------------------------------------------------------\\n\\nThe authors proposed a variant of the tensor ring formulation for multi-task learning. They achieved that by sharing some of the TT cores for learning the \\\"common task\\\" while learning individual TT cores for each separate task.\", \"pros\": \"1) Overall a nice but simple extension of the TT/ TR framework\\n2) Nice set of experiments which show improvement over the standard TT/ TR framework for MTL.\\n\\nCons (and suggestions):\\n1) To my knowledge TT/ TR have not been used for MTL before, and I wonder whether, if someone wanted to do this, the proposed method is the only way to achieve it; so in that sense, it's a very \\\"simple\\\" extension.\\n2) Though the authors introduce something called \\\"TRL\\\", I think it is just an indexing scheme, so essentially the same idea as TR.\\n3) I wonder why the authors suddenly mention convolution at the end of Section 3.1; the discussion looks very out of place.\\n4) I suggest that in Section 3.2 the shareable cores be made non-adjacent in Eq. (4), as the authors claimed.\\n5) The experiments are somewhat \\\"simplistic\\\" and I believe the power of this sharing should have been demonstrated on Taskonomy data (https://arxiv.org/pdf/1804.08328.pdf). Right now, the experimental setup is very simplistic, which is one of the main points the authors should address.\\n6) Can the authors comment on the number of parameters used?\\n7) I wonder if the authors can show some RNN/ LSTM experiment, because some of the datasets used, like OMNIGLOT/ MNIST, are too simple to count as an experiment. The challenge will be to see the performance in challenging MTL. \\n8) I believe the authors should comment on the choice of c and the location of the shareable cores.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Tensor-based soft-sharing MTL, upgraded with TR-decomposition. Maybe practically useful to enhance. But not clearly enough written or evaluated.\", \"review\": \"Summary: This paper studies deep multi-task learning. Prior papers have studied various knowledge sharing approaches for deep multi-task learning, including hard and soft sharing schemes, and some soft sharing schemes have used tensor decompositions (including TT and Tucker). This paper fuses this line of work with the recently proposed Tensor-Ring decomposition in order to obtain Tensor Ring (TR)-based soft sharing for multi-task learning. The results show some improvement over prior deep MTL methods based on other tensor factorisation methods.\", \"strengths\": [\"Nice extension of the existing line of work on tensor-factorisation based MTL.\", \"More flexibility for controlling shared/unshared portions of weights compared to DMTRL.\", \"Improves on previous methods' results.\", \"Experiments evaluate how MTL methods compare with various amounts of training data on each task.\"], \"weaknesses\": [\"Novelty/significance is limited.\", \"Writing. Many things are not clearly and intuitively explained. Some claims are not adequately backed up.\", \"Introduces more hyper-parameters to tune.\", \"Results may rely on hyper-parameter tuning.\"], \"comments\": \"1. Novelty. 
Existing studies already established the template of different tensor factorisation methods (TT, Tucker) being possible to plug into deep networks for different kinds of soft-sharing MTL. Meanwhile, TR decomposition is taken off the shelf; (and as it\\u2019s been applied for compression before, this is not the first time TR decomposition has been used in a CNN context either). Therefore this is an A+B paper and a high bar should be met for the additional analysis, insight, or performance improvements that should provided.\\n2. Lots of writing issues:\\n2.1 Many things are not explained transparently enough at best (or major over-claim at worst). For example: \\n2.1.1 Paper claims the benefit that each task can have its own I/O dimensionality. However if TR-decomp is \\u201ccircularly connected\\u201d TT-decomp (Fig 1), then this seems not to happen automatically. So it should be unpacked more clearly how this is achieved. \\n2.1.2 Paper claims favourable ability to use more private cores than TT, where only one core is private. However circular TT would also seem to have one private core by default (the core with a task axis). So I suspect something else is going on, but this is completely unclear and should be explained more transparently. Furthermore it should be justified if whatever modifications do enable these properties are definitely a unique property of TR-decomp, or could also be applied to TT-decomp. \\n2.1.3 Statement \\u201cTRMTL generalizes to allow the layer-wise weight to be represented by a relatively lager number of latent cores\\u201d unclear: generalises what? larger number of cores than what? Than TT? The previous presentation suggests TT and TR should have same number of cores. \\n2.1.4 Statements like \\u201cTR enjoys the property of circular dimensional permutation invariance\\u201d are made without any explanation about what is the implication of this for neural networks and multi-task learning. \\n2.2 Many claims are inaccurate or not adequately backed up by theory or experiment. EG: (i) Paper claims to include DMTRL as a special case. But it only subsumes DMTRL-TT, not DMTRL-Tucker. Because TR-decomp does not include Tucker-decomp as an exact special case. (ii) Sentences \\u201cTR-ranks are usually smaller than TT-ranks\\u201d are assertions without verification. \\n2.3 Sentences are taken verbatim from other papers, plagiarism. For example: \\u201cTR model is more flexible than TT, because TR-ranks can be equally distributed in the cores, but TT-ranks have a relatively fixed pattern\\u201d is verbatim from Zhao\\u201916 TR-decomp paper. \\n3. Hyperparameters: This paper apparently gains some practical benefit due to the notion of shared/unshared cores. However, this also introduces additional hyper parameters (E.g., each layers private proportion \\u201cc\\u201d) to tune besides the ranks. Unlike the rank that can be pre-estimated by reconstruction error, this one seems to require tuning by cross-validation. This is not scalable. \\n4. Hyperparameters+Tuning: Hyperparameters Private proportion, \\u201csharing pattern\\u201d, IO dimension seem to be tuned by accuracy.( \\u201cWe test different sharing patterns and report the ones with the best accuracies\\u201d). This is even less scalable, and additional tuning makes it unsurprising it surpasses other models performance.\\n5. Insight & Analysis. All the core selection & public/private core selection are treated as black box optimisation. 
No insight is given about what turns out to be useful to share or not, and how consistent this is, etc.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HJG7m2AcF7
Context Mover's Distance & Barycenters: Optimal transport of contexts for building representations
[ "Sidak Pal Singh", "Andreas Hug", "Aymeric Dieuleveut", "Martin Jaggi" ]
We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underlying space containing the contexts. Our method captures uncertainty via modelling the entities as distributions and simultaneously provides interpretability with the optimal transport map, hence giving a novel perspective for building rich and powerful feature representations. As a guiding example, we formulate unsupervised representations for text, and demonstrate it on tasks such as sentence similarity and word entailment detection. Empirical results show strong advantages gained through the proposed framework. This approach can potentially be used for any unsupervised or supervised problem (on text or other modalities) with a co-occurrence structure, such as any sequence data. The key tools at the core of this framework are Wasserstein distances and Wasserstein barycenters.
[ "representation learning", "wasserstein distance", "wasserstein barycenter", "entailment" ]
https://openreview.net/pdf?id=HJG7m2AcF7
https://openreview.net/forum?id=HJG7m2AcF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HylY1l3ZlV", "SJl6gJd-g4", "S1eTBbEhy4", "SJeRp4fX14", "S1gWv4zXJ4", "S1eC3VETAm", "HyxRxKIjCQ", "SJlO8eO50Q", "HyeULBwq07", "rJlZ0BUGRX", "B1xjU8XzRX", "SJgglB7fCX", "H1gOKm7fC7", "H1xbR1mfRQ", "S1eNs17z0X", "BkeLx_Gz0m", "BJedqIv3hm", "SJgryzvchX", "rylvwtHK3X" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544826848992, 1544810229483, 1544466756868, 1543869638119, 1543869529048, 1543484597628, 1543362805541, 1543303248222, 1543300429587, 1542772168911, 1542760019045, 1542759656305, 1542759295553, 1542758344725, 1542758300496, 1542756334074, 1541334672390, 1541202397277, 1541130590622 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/Authors" ], [ "ICLR.cc/2019/Conference/Paper1343/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1343/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1343/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Code release (package for hypernymy evaluation)\", \"comment\": \"Dear reviewers and area chair,\\n\\nAs promised earlier, we have released a python package for carrying out Hypernymy evaluations in an easy manner and with all the datasets organized in one place. The link is https://github.com/context-mover/HypEval\", \"we_also_aim_to_release_other_parts_of_the_code_soon_on_the_same_github_profile\": \"https://github.com/context-mover . Thanks for your time and feedback.\"}", "{\"metareview\": \"The paper proposes to build word representation based on a histogram over context word vectors, allowing them to measure distances between words in terms of optimal transport between these histograms. An empirical analysis shows that the proposed approach is competitive with others on semantic textual similarity and hypernym detection tasks. While the idea is definitely interesting, the paper would be streghten by a more extensive empirical analysis.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Specific tasks and mixt results in empirical analysis limit significance\"}", "{\"title\": \"Thanks for sharing, though results seem to based on different corpus\", \"comment\": \"Thank you for pointing us to this fairly recent work which we were not aware of.\\n\\n| it looks like uSIF outperforms CoMB on all the STS tasks\\n\\nWe would request you to please carefully look at the numbers mentioned in uSIF before making this statement. 
It seems that their results are based on GloVe vectors obtained from Common Crawl corpus (840 billion tokens), c.f. https://github.com/kawine/usif/blob/master/STS/sts_glove.py#L56\\nComparing these numbers from uSIF directly would lead to an unfair comparison as we report all the results (for SIF and ComB) based on Toronto Book Corpus (~ 0.98 billion tokens).\\n\\n| I would recommend comparing it with uSIF (unsupervised smooth inverse frequency), which gets much better \\n| results on the sentence similarity tasks. \\n\\nIndeed uSIF seems to be better than SIF, although it still does the principal component removal based on test-set sentences (ref Section 6, subsection: Online Computation). Anyways, we will surely add the results of uSIF based on GloVe vectors from Toronto Book Corpus in our next revision. \\n\\nThanks for your interest.\"}", "{\"title\": \"Feedback on revision and response\", \"comment\": \"Dear Reviewer 1,\\n\\nWe were wondering if you have any additional suggestions/comments on our revision and response? In particular, we've included detailed qualitative analysis for sentence similarity and hypernymy (Sec B & C), results with validation (in Table 2) for hypernymy detection, clarification about STS baselines, and answers to individual questions.\\n\\nWe greatly appreciate the time that you've given to the paper so far and for providing the constructive feedback. Thank you so much!\"}", "{\"title\": \"Feedback on revision and response\", \"comment\": \"Dear Reviewer 3,\\n\\nWe greatly appreciate the time that you've given to the paper so far and for providing the feedback. We were wondering if you have any additional suggestions/comments on our revision and response? In particular, we've included detailed qualitative analysis for sentence similarity and hypernymy (Sec B & C), results with validation (in Table 2) for hypernymy detection, clarification about STS baselines, and answers to individual questions.\\n\\nOverall, please don\\u2019t hesitate to let us know if there are any additional clarifications that we can provide, as we would love to convince you of the merits of this work. Thank you so much!\"}", "{\"title\": \"Thank you for your consideration and we will be working on the suggestions\", \"comment\": \"Also, thank you for spending the time and valuable feedback.\\n\\nIf you look at the weighted averaging part in SIF (ignore the principal component part for a moment), it can be thought of as a special case of taking the Wasserstein barycenters with the histograms as Diracs at the word. In our case we have a histogram over the contexts of a word which inherently contains richer information than just a Dirac, and hence that's why we believe doing Barycenter is better than SIF in several cases. \\n\\nThough your suggestion is also right, in the sense that our current qualitative analysis mostly shows examples to give some understanding/intuition, and we agree with you that further analysis to demonstrate these points can be quite helpful. This unfortunately needed more time than available in the rebuttal session, but we aim to work on this by the camera ready deadline. \\n \\nThe second suggestion is very relevant too. In fact for sentence similarity task as well (like examples where the difference between sentences stems from just the subject/object), we think such a related modification in the ground metric can turn out to be useful. We will carry out experiments on this in the meanwhile and include the obtained results once the revision system is open again. 
\\n\\nWe really appreciate your detailed feedback and suggestions, which has helped in improving the paper. Please let us know if you have further questions/suggestions.\"}", "{\"title\": \"I have moved my position to accept\", \"comment\": \"The paper proposed two novel unsupervised methods, applying Barycenters to measure sentence similarity and applying Wasserstein distance to detect hypernym.\\nThe results show that the proposed method is efficient, outperforms SIF in many cases without accessing global statistics, and also outperforms DIVE + dS, which is the state of the art in many unsupervised hypernym detection tasks. \\nI believe these results are promising and are sufficient to demonstrate the usefulness of this proposed methods.\", \"final_suggestions\": \"1. Your qualitative analysis does not tightly connect to your methods. You only say in many situations, the proposed method might be better than SIF, but you do not explain why using Barycenters will lead to that. Analyzing why could really provide the motivations of your method.\\n2. The results show that the method significantly outperforms DIVE + dS, but the result is still mixed compared with DIVE + C*dS. I encourage authors to multiply your score with some word similarity measurements (like DIVE + W*dS in Chang et al. 2017) and really show that the proposed method outperforms DIVE + W*dS on average.\"}", "{\"title\": \"Thanks for the suggestions, we have added the qualitative analysis and can be found in sections B and C.\", \"comment\": \"Thank you so much for your prompt and helpful advice about carrying the qualitative analysis. As noted in the general comment above, we have added Sections B and C in the appendix about our performed qualitative analysis for the case of sentence similarity as well as hypernymy.\", \"sentence_similarity\": \"The details of our evaluation procedure can be found in section B.1. The analysis is carried out three datasets: STS14 News, STS15 Images and STS 14 WordNet. We also discuss our observations from the listed examples and mention explanations about where and why our proposed method works. For a quick overview, we suggest looking at observations in sections B.2 (from STS14 News) and some conclusions in B.3. We also encourage you to look at similar observations derived from analysing the other two datasets in Section B.5.\\n\\nIn section B.4, we also present some observations about the effect of sentence length on both methods, and in turn comment briefly about the kind of corpora.\", \"hypernymy\": \"We have performed a qualitative analysis here also and results can be found in Section C.2 (in both cases of maximum positive and negative difference in ranks).\\nBtw, we would like to remark that in Figure 1, we had used the entailment ground metric, but considered histograms with just some top contexts for illustration purpose. We have added this information in the Fig. 1 caption. \\n\\nWe believe that the observations made through these qualitative analysis experiments, should clarify questions regarding the motivation. In particular, the observations and conclusions mentioned in these examples should give intuitions about when and why our method would have a better chance to work. We have also noted points about some lessons/ideas for future (c.f. comments in these section, like about ground metric and complementary nature of errors). 
\\n\\nFinally, we would like to emphasize that our code to obtain histograms, CMD & CoMB as well as pre-built histograms would be made available (which would avoid the need to carry out co-occurrence related computation). Plus, a package would be released for making the standard evaluations on Hypernymy in a simple manner.\\n\\nIn the end, thanks a lot for encouraging us to do this analysis. This has not only re-affirmed our past beliefs, but also given us some more practical insights. We hope this improves your stance about the paper and please let us know if you have any other comments or suggestions. Thanks!\"}", "{\"title\": \"Summary of revision: 26th Nov, 2018:\", \"comment\": [\"We again thank the reviewers for their fruitful comments and suggestions. This revision contains the following:\", \"Updated results for Hypernymy when validation is performed on HypNet train set, following the procedure in DIVE (Chang et.al., 2018). The scores are in the same ballpark of what was previously presented.\", \"Added a new Section B in the appendix on qualitative analysis of sentence similarity for 3 datasets, giving some insights about which methods work when.\", \"Included some observations about the effect of sentence length in Section B.4.\", \"Also, added a Section C in appendix, showing the results of a similar qualitative analysis in the case of hypernymy detection.\", \"Overall, we hope that these additional experiments about the qualitative analysis should clarify concerns about the motivations of using context mover's distance & barycenters and where it can be helpful. Thus, we kindly request the reviewers to reassess their evaluations about the same.\", \"Lastly, we are happy to address any other further queries which the reviewers might have. Thank you for your time and feedback!\"]}", "{\"title\": \"The authors' revision is good, but the motivations of Barycenters and hypernym experiments are still not very clear\", \"comment\": \"Thanks for the revision and doing the additional analysis about cluster number k. The modification and clarification move my position to borderline (neither accept nor reject).\\n\\nNevertheless, most of my concerns about the motivation remain unsolved. The paper only tries to justify using Wasserstein distance between two words (e.g., in Figure 1). If the experiments are done in some word similarity benchmarks, the justification is fine. However, the experiment results in this paper are about measuring sentence similarity and detecting hypernyms, but explanations of why and when this could work well in these problems are not sufficient.\\n\\nThe experimental results are mixed and the method is complicated from practical perspectives (i.e., measuring Wasserstein distance based on co-occurrence in an efficient way is not that easy to be implemented). This will prevent other researchers from trying the proposed method unless they have some reasons/intuitions to believe the proposed method would have a high chance to work better in their application of interest. We need some more analysis to tell us which kinds of corpora we should try the proposed method in. One of the simple things authors could do is to list some similar sentence pairs or word pairs with a hypernym relation which are ranked higher using the proposed method than using the baselines and try to explain why the proposed method works better.\\n\\nI understand that some biases might be created during the analysis as you said, but some methods could be used to reduce the biases. 
For example, you could also list some examples where the baselines outperform the proposed method. When the results are mixed, I believe more analysis is unavoidable. Otherwise, people do not even know what are the lessons experiments tell us and what are problems this paper actually solves.\"}", "{\"title\": \"Summary of Revision: 20th Nov, 2018\", \"comment\": [\"We would like to thank the reviewers for all their great suggestions which have definitely have helped shape our paper better. Following which we have:\", \"Added results of experiments analysing the effect of number of clusters on performance. The results and the plot can be found in Section A.9 of the Appendix.\", \"Explained the difference of SIF results in the paper and included the baseline of Skip-thoughts with results taken from [Arora, et.al, 2017].\", \"Changed the title to the more factual \\u201cContext Mover's Distance & Barycenters: Optimal transport of contexts for building representations\\u201d.\", \"Addressed specific questions raised by the reviewers.\", \"Made the writing more clear in the required places and added missing references.\", \"In addition, we have added a sub-section about \\u201cOnline computation\\u201d in the Sentence similarity experiments to highlight a particular aspect of SIF that has remained under the rug.\", \"Specifically, the principal component removal is carried out (in a topic-wise manner) on the sentence embeddings in the test set. This gives them an advantage as it utilizes the inter-sentence information in the test set, but even without utilizing such information our methods perform competitively and leads to an overall gain.\", \"We greatly acknowledge the helpful suggestions from the reviewers. Also, we welcome any other comment/questions that the reviewer or the area chairs or the public might have. Thank you so much!\"]}", "{\"title\": \"Thanks for the feedback. Response to questions/criticisms below (2/2):\", \"comment\": \"(Continued from Part1)\\n\\n| Experiments\\n\\n| results for the SIF baseline are much lower\\n\\nThe SIF scores in Arora, et. al., are based on vector embeddings trained on CommonCrawl (840 Billion tokens), while the results in our paper are based on embeddings trained on the Toronto Book Corpus (~ 0.98 Billion tokens) and thus the difference. The CommonCrawl vectors are publicly available but the co-occurrence information isn\\u2019t, which is required for our method. Hence, we use Toronto book corpus for a fair and accurate comparison of SIF and our method.\\n\\n| there should be a comparison with the current state of the art\\n\\nWe would like to remark that our emphasis is on unsupervised sentence representation methods. Including other standard methods such as InferSent (Conneau, et.al, 2017) or USE (Cer, et.al, 2018) would lead to an unfair comparison as they are supervised or based on conversational data respectively. As a reference, we have included scores for Skip-Thought, the unsupervised method from (Kiros et al., 2015), in our revision. Overall, our focus is to compare unsupervised methods which can just build up on the word-embedding information (i.e., training-free methods).\\n\\n| on the hypernymy task, what validation data was used\\n\\nIn the hypernymy experiments, only exploring the limited space of hyper-parameters (< 10 configurations) listed in Table:4 were sufficient to obtain these results. Hence, you are correct to notice that the parameters are not set based on performance on validation set and rather the benchmarks themselves. 
As a matter of fact, we are currently experimenting on a much larger space of hyper-parameters with a validation set in place and we will be updating our revision soon. \\n\\n| writing and language style\\n\\nThanks for the suggestions about the writing style, we have taken them into account.\\n\\nBy addressing these criticisms, we hope that it changes your viewpoint of the paper, and that you re-evaluate the score for the review. Lastly, please let us know if you have any other comments or questions in mind.\"}", "{\"title\": \"Thanks for the feedback. Response to questions/criticisms below (1/2):\", \"comment\": \"Thank you so much for your detailed feedback and taking the time. We are happy to know that the reviewer acknowledges the elegance of the approach and that it doesn\\u2019t require additional training on top of existing approaches. Please find the answer to the specific questions as follows:\\n\\n| only really allows a way of computing distances between pairs of word representations\\n\\nWe respectfully disagree with this statement. Representing words as distribution over contexts and then using Context Mover's Distance (CMD) is just half of the picture. \\n\\n- In addition, our method importantly provides a principled manner to extend this to form representation of sentences via Wasserstein barycenters. \\n\\n- Since this obtained representation is also a distribution over contexts, we can again employ CMD to compare the semantics of two sentences.\\n\\n- Further, as the Context Mover's Distance and Barycenters are parameterized by the ground metric, this offers the flexibility to compare words and sentences with respect to different cost such as that of entailment. \\n\\n| question on use-cases\\n\\nWe agree that just computing distance between word representations may not have been as useful in translation, QA. But, since our contribution is beyond just word representations, consider the following cases:\\n\\n- Using CMD for downstream tasks like QA or NLI: Currently, we rely on existing approaches like GloVe to form the ground metric between words. But, for a supervised task like QA/NLI, it would be more effective to learn this ground metric. For example, one can learn a linear (or non-linear) projection M that takes in the GloVe embeddings and maps it to a space that better measures the cost of transport between two points as required for the downstream task. Thus the objective/loss can be designed so that the hypothesis and the true premise are closer in CMD, while the hypothesis and the false promised are far away with respect to CMD. \\n\\n- Somewhat similar to the above point, Tay et.al, 2017 [1] have explored the idea of using the Hyperbolic representation space for QA and this seems to have a strong performance [2]. Of course this doesn't imply that doing it in the Wasserstein space would be better or worse, but points to the potential of utilizing different representation spaces for tasks like QA.\\n\\n- For low-resource languages, Qi et.al, 2018 [3], demonstrate the effectiveness of having pre-trained word representations for translation, and thus having improved representations of words for instance can be useful in such a scenario.\\n\\n[1] Tay et.al, 2017: \\\"Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering\\\", https://arxiv.org/pdf/1707.07847.pdf\\n[2] https://aclweb.org/aclwiki/Question_Answering_(State_of_the_art)\\n[3] Qi et. 
al, 2018: \\\"When and Why are Pre-trained Word Embeddings Useful for Neural Machine Translation?\\\", NAACL, http://www.aclweb.org/anthology/N18-2084\\n\\n| point estimates easy to work with\\n\\nWe agree with you that point estimates are easier to work. The focus on distributional estimates is comparatively recent, mainly originating from Vilnis & McCallum, 2014. This doesn't imply that we should stop focussing on such distributional estimates, but rather strengthen our relatively weak toolkit for dealing with them, given the potential offered by representing entities as distributions. \\n\\nMotivated by the results of our proposed approach, an exciting future exploration would be to try searching for an embedding space where euclidean distance mimics Wasserstein distance (Courty, et.al, 2017) for the particular formulation of Context mover's distance.\\n\\n| Motivate : why the method can be useful\\n\\nThe motivation for representing words and sentences as distributions over the contexts is that it inherently allows the ability to capture the various senses under which they can appear. For instance, the Table 3 in Appendix A.6 tries to qualitatively probe the sentence representation produced with Wasserstein barycenter, by looking at the nearest neighbor in the vocabulary (with respect to CMD). Here, the method seems to capture a varied range of contexts, like for the ambiguous sentence \\u201che lives in europe for\\u201d, the obtained closest neighbors include: \\u2018decades\\u2019, \\u2018masters\\u2019, \\u2018majority\\u2019, \\u2018commerce'. More such examples can be found in this section A.6, and we hope these examples address your questions about motivation.\"}", "{\"title\": \"Thanks a lot for the detailed feedback and suggestions. Please find our response to all comments: (2/2)\", \"comment\": \"(continued from Part1 )\\n\\n| presentation suggestions\\n\\n| Title\\n\\nWe agree with your statement in general. But, we would first like to clarify that through our previous title, we were not claiming that Wasserstein is or isn\\u2019t all you need. Rather our motivation was to show how the tools of Wasserstein distance and barycenter which are at the heart of our framework can hold significance in problems with a co-occurrence structure (like your remark that Wasserstein could be \\u201chelpful for measuring co-occurrence-based similarity\\u201d).\\n\\nNatural language is one such domain with an inherent co-occurrence structure and we show that our framework is competitive (and can also beat state-of-the-art) on sentence representation/similarity and hypernymy, which are quite independent in their nature. All this without requiring additional training.\\n\\nYou are completely right about how such naming practice isn\\u2019t useful for the community. Taking this into account: we have changed the title to \\u201cContext Mover\\u2019s Distance & Barycenter: Optimal transport of contexts for building representations\\u201d, in order to make it more factual and reflective about our method.\\n\\n| Last point in contribution: \\n\\nYes, thanks for this tip. We have written it as a future work direction. \\n\\n| basically find the representative word which is most likely to co-occur with every word in the sentence \\n\\nYes, this is right to an extent. In the qualitative analysis, we wanted to understand the nature of resulting wasserstein barycenter of a sentence and hence looked at the nearest words in the vocabulary. 
But while computing similarity between sentences, we do it by directly measuring the CMD between the produced barycenters of sentences and not between the representative word. \\n\\nMost probably looking at the nearest neighbors in the space of sentences would give a more finer picture, but such a space is huge and looking amidst just a few sentences would be biased. \\n\\n| Minor writing suggestions \\n\\nThanks you so much for pointing these out. We admit that the reference section was a bit untidy and have organized it better. All other suggestions have been taken into account as well and can be seen in the revision. \\n\\nHope this addresses all of your questions about the experiments. In light of this response, it would be great if you can reconsider the score for your review. And, please feel free to ask about any questions or parts which you might need clarification.\"}", "{\"title\": \"Thanks a lot for the detailed feedback and suggestions. Please find our response to all comments: (1/2)\", \"comment\": \"These suggestions have been quite helpful and certainly helped us further improve the paper. Thank you.\\n\\n| Related Work & Novelty: \\n\\nThanks for sharing these articles which are indeed interesting applications of Wasserstein distance in NLP. The Word Mover\\u2019s distance paper from Kusner, et.al. (2015) has already been discussed more than once. We have included [Rolet, et. al, 2016] and the concurrent work by [Xu, et. al. 2018] in our revised related work as it might be fruitful for the readers to know. \\n\\nBut, since the focus of all these works is on transporting the words (as points) directly to form a metric over documents, an important issue still remains. This is the inability to define a suitable metric for comparing words, phrases or sentences which lie below documents in the levels of grammatical hierarchy. When we define the transport over contexts, it not only handles these lower levels of hierarchy, but also offers the ability to go up the hierarchy with the Wasserstein barycenter. \\n\\nThus, we are confident that our formulation of Context Mover's Distance & Barycenters is novel and opens the floor to further possibilities.\\n\\n| Questions about experiments:\\n\\n| Difference in SIF scores \\n\\nThe SIF scores in [Arora, et. al.] and in [4] are based on vector embeddings trained on CommonCrawl (840 Billion tokens), while the results in our paper are based on embeddings trained on the Toronto Book Corpus (~ 0.98 Billion tokens) and thus the difference. The CommonCrawl vectors are publicly available but the co-occurrence information isn\\u2019t, which is required for our method. Hence, we use Toronto book corpus for a fair and accurate comparison of SIF and our method.\\n\\n| if you multiply the scores from CMD with the word similarity\\n\\nThanks for pointing this, and your guess seems probable to help in further improving the performance. Note that our focus over here was to compare CMD between the words (using Henderson embeddings for the ground metric) versus just using these embeddings alone. There are probably several nice tricks (as you suggested) that can be further used along with CMD to improve the performance. This although wouldn\\u2019t have helped us much to validate our hypothesis of using distributional estimates over just point embeddings. \\n\\n| Could you provide some performance comparison with different numbers of K\\n\\nYes, you are right that it would be useful for the readers to know about the trade-off. 
We have taken this into account in our revision, where in Section A.8 of the Appendix, we plot the performance versus number of clusters for the three best variants in Table 1 as well as for the average across these variants. \\n\\nWe observe (c.f. Fig 3) that on average increasing the number of clusters until around K=300, the performance significantly improves. Beyond that it mostly plateaus (\\u00b1 0.5) and isn't worth the increased computation time. Please check out section A.8 for more details.\"}", "{\"title\": \"Thanks for the very encouraging and helpful feedback. Response to all the questions/suggestions below:\", \"comment\": \"Also, we appreciate that you recognize the importance of topic and find the paper to be well structured and clear.\\n\\n| First, the method does not learn the representations. Instead, augments a given one and computes the context mover distance on top \\n\\nThis is a great point. Under the proposed framework, our first aim was to investigate the out-of-box performance, i.e., by just using off-the-shelf embeddings like GloVe to form the ground metric. We realized through the empirical experiments that this already served as a decent starting point, where we were able to perform competitively on sentence similarity and beat state-of-the-art methods in hypernymy on several datasets.\\n\\n| maybe representations are better to be \\\"learned\\\"\\n\\nWe totally agree with your remark, and an excellent direction to pursue in the future would be to learn the representations based on the Context Mover's Distance and Barycenter. An example of such a learning procedure could be like CBOW with negative sampling. Another would be to learn the ground metric based on supervised tasks. \\n\\nOverall, our paper takes the first step and shows that the framework holds promise via 'augmenting' the representations, and provides for an interesting direction to explore with the learning aspect. \\n\\n| whether an object is represented as a single point or as a distribution \\n\\nA given object (e.g. 'cat') which we seek to represent is considered to be as a distribution/histogram over contexts (e.g. {'milk', 'drinking', 'cute', \\u2026}). So, over here the distribution is over all possible contexts with which the object can co-occur. Hence, it suffices to denote each individual context (like 'cute') as a single point here. \\n\\n| Other issues:\\n\\n| the exact value of p \\n\\nThis is an important question as (Agueh and Carlier, 2011) have shown that when the underlying space is Euclidean and p=2, there exists a unique minimizer to the Wasserstein barycenter problem. Since we are anyways solving the regularized Wasserstein barycenter (Cuturi & Doucet, 2014) over here instead of the exact one, the particular value of p is less of an issue. Empirically in the sentence similarity experiments, we observed p=1 to perform better than p=2 (by ~ 2-3 points). We have included this in our revision.\\n\\n| Standard embedding methods are missing from the list.\\n\\nWe would like to remark that our emphasis is on unsupervised sentence representation methods. Including other standard methods such as InferSent (Conneau, et.al, 2017) or USE (Cer, et.al, 2018) would lead to an unfair comparison as they are supervised or based on conversational data respectively. As a reference, we have included scores for Skip-Thought, the unsupervised method from (Kiros et al., 2015), in our revision. 
Overall, our focus is to compare unsupervised methods which can just build up on the word-embedding information (i.e., training-free methods).\\n\\n| Authors raise a question in the title\\n\\nAt the heart of our method are the core tools from Wasserstein geometry, which are then demonstrated on two important tasks in NLP: sentence representation & similarity, and hypernymy detection. Hence, our intention is to emphasize how helpful these tools can be and thus we raise the question. In fact, we have decided to change the title to \\u201cContext Mover\\u2019s Distance & Barycenters: Optimal transport of contexts for building representations\\u201d, in order to make it more factual and reflective.\\n\\n| It is not clear why the \\\"context\\\" of hyponym is expected to be a subset of the context of the hypernym.\\n\\nThanks for raising this point. It is indeed true that the context of the hyponym may not always be a subset of the context of the hypernym. The essential idea originates from the Distributional Inclusion Hypothesis (Geffet & Dagan, 2005), which states that a word \\u2018v\\u2019 entails another word \\u2018w\\u2019, if \\u201cthe characteristic contexts of v are expected to be included within all w's contexts (but not necessarily amongst the most characteristic ones for w).\\u201d\\n\\nHowever, in contrast to the above distributional inclusion hypothesis, we see our method as a relaxation of this strict condition, by having an entailment based ground cost between the contexts while using CMD. \\n\\n| the impression that parameter might not be set based on performance on validation set\\n\\n In the hypernymy experiments, only exploring the limited space of hyper-parameters (< 10 configurations) listed in Table:4 were sufficient to obtain these results. Hence, you are correct to notice that the parameters are not set based on performance on validation set and rather the benchmarks themselves. As a matter of fact, we are currently experimenting on a much larger space of hyper-parameters with a validation set in place and we will be updating our revision soon. \\n\\n| Minor\\n\\nThanks for pointing them, have been corrected.\\n\\nWe hope this clarifies the questions you had about the shortcomings and in general. Please feel free to post any other comments that you might have! :)\"}", "{\"title\": \"Interesting method, but needs more to show it is useful\", \"review\": \"The submission explores a new form of word representation based on a histogram over context word vectors, allowing them to measure distances between words in terms of optimal transport between these histograms. The authors speculate that this may allow better representations of polysemous words. The approach is mathematically elegant, and requires no additional training on top of existing approaches like Glove. To improve efficiency, they use clustering on context vectors. They present results on various semantic textual similarity and hypernym detection tasks, outperforming some baselines.\\n\\nThe paper presents itself as an alternative to word embeddings as a way of representing words. As far as I can tell, their method only really allows a way of computing distances between pairs of word representations, which hasn't been a useful concept for the vast majority of cases that word embeddings have been used for (translation, QA, etc.). Point estimates are at least very convenient to work with. 
That doesn't mean the proposed approach is useless, but the paper needs to give a much stronger motivation for when and why measuring distances between words may be helpful.\n\nThe experiments are a bit underwhelming. STS and hypernymy detection are somewhat unimpressive tasks to work on - I'm not aware of any results on these tasks that have generalized to more realistic applications like translation or question answering. I think for publication with just these tasks, the method would need to show a dramatic breakthrough, which the submission definitely does not. The STS baselines are very simple bag-of-words approaches, and even then the results for the SIF baseline are much lower than those reported by Arora et al. (2017). At the least, there should be a comparison with the current state of the art. On the hypernymy task, what validation data was used? Unfortunately I'm not able to suggest better experiments, because I can't think of cases where their method would be useful.\n\nThe paper is significantly weakened by frequently making very strong claims based on rather limited experimental results (for one example, \\\"we illustrate how our framework can be of significant benefit for a wide variety of important tasks\\\" feels like quite a stretch). It would be much improved if some of the language was toned down. \n\nOverall, the paper introduces a mathematically elegant method for representing words as distributions over contexts, and for computing distances between these words. For acceptance, I think the paper needs to better motivate why the method could be useful, and back that up with more convincing experiments.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting approach, proposes representation augmentation as opposed to representation learning and the proposed distance not used for training.\", \"review\": \"The paper proposes a method to augment the representation of an entity (such as a word) from the standard \\\"point in a vector space\\\" to a histogram with bins located at some points in that vector space. In this model, the bins correspond to the context objects, the locations of which are the standard point embeddings of those objects, and the histogram weights correspond to the strength of the contextual association. The distance between two representations is then measured with the Context Mover Distance, based on the theory of optimal transport, which is suitable for computing the discrepancy between distributions.\nThe representation of a sentence is proposed to be computed as the barycenter of the representations of the words inside.\nAn empirical study evaluates the method on a number of semantic textual similarity and hypernymy detection tasks. \n\nThe topic is important. The paper is well written and well structured and clear. The method could be interesting for the community. However, there are a number of conceptual issues that make the design a little surprising. First, the method does not learn the representations. Instead, it augments a given one and computes the context mover distance on top of that. 
But, if the proposed context mover distance is an effective distance, maybe representations are better to be \\\"learned\\\" based on the same distance rather than being received as inputs.\\nAlso, whether an object is represented as a single point or as a distribution seems to be an orthogonal matter to whether the context predicts the entity or vice versa. This two topics are kind of mixed up in the discussions in this paper.\", \"other_issues\": \"- One important technicality which seems to be missing is the exact value of p in Wp which is used. This becomes important for barycenters computations and the uniqueness of barycenters. \\n- Competitors in Table 1 are limited. Standard embedding methods are missing from the list.\\n- Authors raise a question in the title of the paper, but the content of the paper is not much in the direction of trying to answer the question. \\n- It is not clear why the \\\"context\\\" of hyponym is expected to be a subset of the context of the hypernym. This should not always be true.\\n- Table 4 gives the impression that parameter might not be set based on performance on validation set, but instead based on the performance on the test set.\\n\\n- Minor:\\nof of\\ndata ,\\nby\\nbyMuzellec\\nCITE\\n\\nOverall, comparing strengths and shortcomings of the paper, I vote for the paper to be marginally accepted.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The method is not very novel or not very well-motivated. The experiment results are interesting but mixed.\", \"review\": \"\", \"pros\": \"I also study some related tasks and suspect that Wasserstein is helpful for measuring co-occurrence-based similarity. It is nice to see the effort in this direction.\", \"cons\": \"The methods are either not very novel or not very well-motivated. The experiment results are interesting but mixed. If the doubts about the experiments are clarified and the methods are motivated better (or the strengths/weaknesses are better analyzed), I will vote for acceptance.\", \"related_work\": \"In addition to the work in the related work section, some other work also studied the NLP applications of Wasserstein, especially the ones (such as [1,2,3]) which are related to similarity measurement. The authors should include them in the related work section.\", \"question_about_experiments\": \"1. Why are the SIF scores reported in Table 1 much lower than the results reported in Arora et al., 2017 and in [4]?\\n2. If we compare CMD with DIVE + C * delta S, the proposed method wins in EVALution and Weeds, loses in Baroni, Kotlerman, BLESS, and Levy. If you compare DIVE + delta S (Chang et al. 2017) with DIVE + C * delta S, delta S also wins in EVALution and Weeds, loses in Baroni, BLESS, Kotlerman, and Levy (although CMD seems to be better than DIVE + delta S). \\nBased on the fact that your method has a high correlation with DIVE + delta S (Chang et al. 2017), I guess that CMD does not work very well when the dataset contains random negative samples, but work well when all the negative samples are similar to the target words. If my guess is right, the performance should be improved on average if you multiply the scores from CMD with the word similarity measurement.\\n3. To make it efficient, CMD seems to sacrifice some resolutions by using the K representative context. Does this step hurt the performance? 
Could you provide some performance comparison with different numbers of K to let readers know whether there is a tradeoff between accuracy and efficiency?\\n4. Since the results are mixed, I suppose readers would like to know when this method will perform better and the reasons for having worse results sometimes.\\n\\nWriting and presentation suggestions/questions:\\n1. If the proposed method is a breakthrough, I am fine with the title but I think the experiment results tell us that Wasserstein is not all you need. I understand that everyone wants to have an eye-catching title for their paper. The title of this paper indeed serves this purpose. Since the strategy is effective, more and more people might start to write papers with a title like this. However, having lots of paper called \\\"XXX is all you need?\\\" or \\\"Is XXX all you need?\\\" is definitely not good for the whole community. Please use a more specific title such as Context Mover Distance or something like that.\\n2. The last point in the contribution is not supported by experiments. I suggest that the authors move this point to the future work section.\\n3. It is good to see some negative results like Baroni in Table 2. Results on other datasets should not be put into Table 4 in Appendix.\\n4. Using Wasserstein barycenter to measure sentence similarity seems to be novel, but the motivation is not very clear. Based on A.6, we could see that for each sentence, authors basically find the representative word which is most likely to co-occur with every word in the sentence (has the highest average relatedness rather than similarity) and measure the Wasserstein distance between the co-occurrence probability distribution. I suppose sometimes relatedness is a better metric when measuring sentence similarity, but I think authors should provide some motivative sentence pairs to explain when that is the case.\\nUsing Wasserstein to detect hypernym seems to also be novel, but the motivation is also not clear. Again, a good example would be very helpful.\\nThis point is also related to the last question for experiments.\", \"minor_writing_suggestions\": \"1. In section 3, present the full name of CITE\\n2. If you put some important equations to the appendix (e.g., the definition of SPPMI_{alpha,gamma}), remember to point readers to the appendix. \\n3. In the second paragraph of section 7, Nickel & Kiela, 2017 is a method supervised by a hierarchical structure like WordNet rather than a count-based or word embedding based methods. \\n4. In Chang et al., the training dataset is not Wikipedia dump from 2015. This difference of evaluation setup should be mentioned somewhere (e.g., in the caption of Table 2).\\n5. The reference section is not very organized. For example, the first name of Benotto is missing for the PhD thesis \\\"Distributional Models for Semantic Relations: A Study on Hyponymy and Antonymy\\\". The arXiv papers are cited using different formats. Only some papers have URL. The venue's names are sometimes not capitalized. Gaussian embedding is cited twice, etc.\\n\\n\\n[1] Kusner, M., Sun, Y., Kolkin, N., & Weinberger, K. (2015). From word embeddings to document distances. In International Conference on Machine Learning (pp. 957-966).\\n[2] Xu, H., Wang, W., Liu, W., & Carin, L. (2018). Distilled Wasserstein Learning for Word Embedding and Topic Modeling. NIPS \\n[3] Rolet, A., Cuturi, M., & Peyr\\u00e9, G. (2016, May). Fast dictionary learning with a smoothed wasserstein loss. In Artificial Intelligence and Statistics (pp. 
630-638).\\n[4] Perone, C. S., Silveira, R., & Paula, T. S. (2018). Evaluation of sentence embeddings in downstream and linguistic probing tasks. arXiv preprint arXiv:1806.06259.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
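To make the transport computation discussed in the exchange above concrete, here is a minimal, self-contained sketch (not the authors' released code) of an entropic-regularized optimal-transport cost between two words represented as histograms over K representative contexts, with a Euclidean ground cost between context embeddings. The variable names, the regularization value, and the toy numbers are assumptions made for the example; a sentence representation in the same spirit would be the regularized Wasserstein barycenter of its word histograms.

import numpy as np

def sinkhorn_cost(a, b, M, reg=0.1, n_iter=200):
    # Entropic-regularized optimal transport cost between histograms a and b
    # (each of shape [K]) under the ground cost matrix M (shape [K, K]).
    K = np.exp(-M / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):              # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # transport plan
    return float(np.sum(P * M))          # <P, M>: the transport cost

# Toy setup: 4 representative contexts embedded in R^2 (all values hypothetical).
contexts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
M = np.linalg.norm(contexts[:, None, :] - contexts[None, :, :], axis=-1)
word_a = np.array([0.7, 0.1, 0.1, 0.1])   # word A's histogram over contexts
word_b = np.array([0.1, 0.2, 0.2, 0.5])   # word B's histogram over contexts
print(sinkhorn_cost(word_a, word_b, M))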
BJemQ209FQ
Learning to Navigate the Web
[ "Izzeddin Gur", "Ulrich Rueckert", "Aleksandra Faust", "Dilek Hakkani-Tur" ]
Learning in environments with large state and action spaces, and sparse rewards, can hinder a Reinforcement Learning (RL) agent’s learning through trial-and-error. For instance, following natural language instructions on the Web (such as booking a flight ticket) leads to RL settings where the input vocabulary and the number of actionable elements on a page can grow very large. Even though recent approaches improve the success rate on relatively simple environments with the help of human demonstrations to guide the exploration, they still fail in environments where the set of possible instructions can reach millions. We approach the aforementioned problems from a different perspective and propose guided RL approaches that can generate an unbounded amount of experience for an agent to learn from. Instead of learning from a complicated instruction with a large vocabulary, we decompose it into multiple sub-instructions and schedule a curriculum in which an agent is tasked with a gradually increasing subset of these relatively easier sub-instructions. In addition, when the expert demonstrations are not available, we propose a novel meta-learning framework that generates new instruction following tasks and trains the agent more effectively. We train a DQN, a deep reinforcement learning agent, with the Q-value function approximated by a novel QWeb neural network architecture on these smaller, synthetic instructions. We evaluate the ability of our agent to generalize to new instructions on the World of Bits benchmark, on forms with up to 100 elements, supporting 14 million possible instructions. The QWeb agent outperforms the baseline without using any human demonstration, achieving a 100% success rate on several difficult environments.
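A schematic sketch of the curriculum idea described in the abstract, in which the agent is trained on a gradually growing subset of sub-instructions; the environment interface, field names, and stage sizes below are hypothetical and only illustrate the scheduling, not the paper's implementation.

import random

def run_curriculum(instruction, num_stages, episodes_per_stage, train_episode):
    # Stage k trains the agent on sub-instructions containing at most k fields,
    # gradually approaching the full instruction.
    fields = list(instruction.items())
    for stage in range(1, num_stages + 1):
        k = min(stage, len(fields))
        for _ in range(episodes_per_stage):
            sub_instruction = dict(random.sample(fields, k))
            train_episode(sub_instruction)  # one DQN episode conditioned on the easier task

# Hypothetical flight-booking instruction with three fields.
full_instruction = {"from": "SFO", "to": "LAX", "date": "12/05/2018"}
run_curriculum(full_instruction, num_stages=3, episodes_per_stage=1000,
               train_episode=lambda sub: None)  # placeholder for the actual DQN update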
[ "navigating web pages", "reinforcement learning", "q learning", "curriculum learning", "meta training" ]
https://openreview.net/pdf?id=BJemQ209FQ
https://openreview.net/forum?id=BJemQ209FQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1lYxf1bl4", "Bklaz-qtRm", "SJeSplqYR7", "HJlCjlqKAX", "BJexNkcF0Q", "ryejZrK9h7", "SylnSTaghX", "HkxIVs6PsQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544774129286, 1543246100935, 1543246012555, 1543245990485, 1543245607955, 1541211395108, 1540574532323, 1539984173896 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1342/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1342/Authors" ], [ "ICLR.cc/2019/Conference/Paper1342/Authors" ], [ "ICLR.cc/2019/Conference/Paper1342/Authors" ], [ "ICLR.cc/2019/Conference/Paper1342/Authors" ], [ "ICLR.cc/2019/Conference/Paper1342/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1342/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1342/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"All reviewers (including those with substantial expertise in RL) were solid in their praise for this paper that is also tackling an interesting application that is much less well studied but deserves attention.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Important topic, solid contribution\"}", "{\"title\": \"Thank you for the comments and questions.\", \"comment\": \"We thank the reviewer for the comments and questions. Below are our responses.\\n\\n> \\u201cIn the first set of experiments it is clear the improved performance of QWeb over Shi17 and Liu18, however, it is not clear why QWeb is not able to learn in the social-media-all problem. The authors tested only one of the possible variants (AR) of the proposed approach with good performance.\\u201d \\n\\nThe main reason is that in social-media-all environment, the size of the vocabulary is more than 7000 and task length is 12 which are both considerably larger compared to other environments. Another reason is that the QWeb can not learn the correct action by focusing on a single node; it needs to incorporate siblings of a node in the DOM tree to generate the correct action. Without adding shallow encoding (SE) and one of the proposed approaches (such as AR), QWeb is not able to train purely from trial-and-error as the number of successful episodes is very small. \\n\\nWe updated the Section 6.1 of the paper with these explanations and we plan to conduct more experiments in future work.\\n\\n> \\u201cIt is not clear in the book-flight-form environment, why the QWeb+SE+AR obtained 100% success while the MetaQWeb, which includes one of main components in this paper, has a lower performance.\\u201d\\n\\nThe main reason of the performance difference between the QWeb+SE+AR and the MetaQWeb can be explained by the difference between the generated experience that these models learn from. In training QWeb+SE+AR, we use the original and clean instructions that the environment sets at the beginning of an episode. MetaQWeb, however, is trained with the instructions that instructor agent generates. These instructions are sometimes incorrect (as indicated by the error rate of INET : $4\\\\%$) and might not reflect the goal accurately. These noisy experiences hurt the performance of MetaQWeb slightly and causes the $1\\\\%$ drop in performance. \\n\\nWe updated the Section 6.2 with this explanation.\\n\\n> \\u201cThe proposed method uses a large number of components/methods, but it is not clear the relevance of each of them. 
The papers reads like, \\\"I have a very complex problem to solve so I try all the methods that I think will be useful\\\". The paper will benefit from an individual assessment of the different components.\\u201d\\n\\nThank you for the comment. We have revised the Introduction, and Sections 4 and 5 to clarify the differences between the methods and contributions. Below is the summary that hopefully brings more clarity to the reasoning before the approaches.\\n\\nWe aim to solve the web navigation tasks in two situations, when the expert demonstrations are available and when they are not. When the expert demonstrations are available, we need to make several improvements to the training to outperform the baselines. These improvements are: better neural network architecture (QWeb), and more dense rewards. We get the more dense rewards by using the reward potentials and setting up a curriculum over the given demonstrations. \\n\\nIn the second case, when the expert demonstrations are not available. In that situation, we use the meta-trainer to generate new demonstrations. \\n\\n> \\u201dThe authors should include a section of conclusions and future work.\\u201d\\nThank you for point it out. The section is added to the paper.\"}", "{\"title\": \"Thank you for the insightful comments (2/2).\", \"comment\": \"> \\u201cIn Section 5.1, are there two RL agents, an instructor and a learner with different reward functions? If so, isn\\u2019t this becoming game theoretic and is this likely to converge in most scenarios?\\u201d\", \"there_are_two_different_rl_agents\": \"instructor agent (INET) and navigator agent (QWeb). These are trained in two phases : (i) we first train INET (a DQN agent with Q value function defined at the end of Section 5.1) using the instruction generation environment that we described in Section 5.1, (ii) next, parameters of the INET agent is fixed and we train QWeb using the instruction and goal pairs that the meta-trainer generates by running INET at the beginning of each episode. Hence, we avoid the problems that could have arised by jointly training two different RL agents with different objectives.\\n\\n> \\u201cWhat does Q_D^I actually represent? Why is maximizing these values a good thing?\\u201d\\nQ_D^I is the Q value function that we used to train instructor agent (INET) as we described in Section 5.1.\\n\\n> \\u201cThere are a few grammatical mistakes in the paper including.\\u201d\\nThank you for pointing it out. We updated in the paper, and will make another pass for the final version if accepted.\"}", "{\"title\": \"Thank you for the insightful comments (1/2).\", \"comment\": \"We thank the reviewer for the insightful comments. Below are our responses.\\n\\n> \\u201cHowever, I see many references in the paper are not from peer reviewed conferences or journals. Unless absolutely necessary, such papers should not be cited because they have not been properly peer reviewed.\\u201d\\n\\nThank you for pointing that out. We updated the references where the archival versions became available, and will do so again before camera-ready if accepted. At the same time, we also wanted to kindly point out that ICLR reviewer guidelines consider publication on Arxiv as prior work that should be properly cited: https://iclr.cc/Conferences/2019/Reviewer_Guidelines\\n\\n\\n> \\u201cThe only thing I would have liked to seen beyond these results are actual learning curves showing, after X iterations, what percentage of the tasks could be completed. 
I suspect that in many domains the baseline LfD techniques are learning much faster since learning from teachers tends to be more targeted and sample efficient. Learning curves would show us whether or not this is the case. \\u201c\\n\\nWe collected the number of steps (k=1000) needed to reach the top performance :\\n___________________________________________________\\n| environment \\\\ method | QWeb | LIU18 | \\n--------------------------------------------------------------\\n| click-pie | 175k | 13k |\\n| login-user | 96k | < 1k |\\n| click-dialog | 5k | < 1k |\\n| enter-password | 3k | < 1k |\\n--------------------------------------------------------------\\n\\nThese numbers reflect the reviewer\\u2019s intuition that LfD techniques learn faster, however, with a drop in success rate for some environments. We updated the experimental results in Section 6.1 with these results.\\n\\n> \\u201dThe weakest part of the paper was the description of the instructor network and the Meta-training in general. This portion seemed ill-described and largely speculative, despite the promising results in Figure 7. In particular, Section 5 is very unclear on how exactly the Meta-Learning works. Pseudocode is definitely needed in this portion well beyond the quick descriptions in Figure 4 and 5, which I could not understand, despite multiple readings. I suggest eliminating those figures and providing concrete pseudo\\u2014code describing the meta learning and also addressing the following open questions in the text:\\u201d\\n\\nThank you so much for the suggestion. We added the Algorithms 1, 2, 3 for the curriculum learning, DQN training, and meta learning. We removed Figure 5, and put Figures 3 and 4 side-by-side, since they both depict neural network architecture. We have also rewrote the Section 5. We are hoping that the changes are improving the clarity.\\n\\n> \\u201cWhy is a rule based randomized policy good to learn from? How is this different from learning from demonstration in the baselines?\\u201d\\nWhen the expert demonstrations are not available, we can use any policy (random or rule-based) and pretend that the policy is following some, to us known, instruction. The instructor agent learns to recover that hidden instructions, in effect creating new demonstrations. Once the instructor is trained to recover the instructions for a given policy, we generate new instruction / goal paths so that we can train QWeb. The choice of policy is arbitrary, and it was a design choice to select a simple, rule-based policy that visits each DOM element in web navigation environments.\", \"our_meta_training_approach_has_two_main_advantages_over_learning_from_demonstrations\": \"By learning to generate new instruction following tasks, we can generate unbounded amount of episodes for any environment where collecting large amount of episodes for each environment is costly.\\nSimilar to our curriculum generation with simulated goals approach, generated goal states are allowed to be incomplete. For example, if we constrain our rule based policy to run only a small number of steps, generated goal state could be incomplete and some DOM elements in the web page could be unvisited. 
In this case, QWeb can still leverage these experiences while also learning from the original instructions and sparse rewards that the environment generates.\\n\\nPaper\\u2019s introduction, and Section 5 are updated to clarify the role and selection of the rule-based policy, and advantages over the baselines.\\n\\n> \\u201cHow is a \\u201cfine grained signal\\u201d generated? What does that mean? Is it a reward?\\u201d\\nThank you for pointing it out. Yes, it is a dense reward. We updated the paper to use more commonly used term: dense reward.\"}", "{\"title\": \"Thank you for the kind words and questions that help us improve the paper.\", \"comment\": \"We thank the reviewer for the kind words and questions that help us improve the paper. We detail our responses below.\\n\\n> \\u201cThere are a few notations used without definition, for example DOM tree, Potential (in equation (4))\\u201d\\n\\nWe updated in the paper.\\nOn Page 3, line 3: \\u201cthe Document Object Model (DOM) trees, a hierarchy of web elements in a web page.\\u201d\\n\\nSection 4.2:\\u201cwe define a potential function ($Potential(s, g)$) that counts the number of matching DOM elements between a given state (s) and the goal state (g); normalized by the number of DOM elements in the goal state. Potential based reward is then computed as the scaled difference between two potentials for the next state and current state\\u201d\\n\\n> \\u201cSome justification regarding the the Q value function specified in (1) might be helpful, otherwise it looks very adhoc\\u201d.\\n\\nOur Q value function in Eq. (1) is motivated by the design of our composite actions (click(e) and type(e, y)) and the nature of web pages in general. A DOM element (e) in a web page mostly identifies which composite action to select, e.g., a text box such as destination airport is typed with the name of an airport code while a date picker is clicked. This motivates the dependency graph that we sketched in Figure 2. We define our Q value function for each composite action based on this dependency graph via a separate value function to model each node in the graph given its dependencies. We also also added this motivation to Section 3.\\n\\n\\n> \\u201cAlthough using both shallow encoding and augmented reward lead to good empirical results, it might be useful to give more insights, for example, sample size limit cause overfitting for deep models?\\u201d\\n\\nWe would like to give more insights into overfitting of deep models without and with augmented rewards. Without augmented rewards, the Q function overfits very early to the minimum Q value possible since the majority of the episodes are unsuccessful and the reward is highly unbalanced towards negative. Escaping this bad minima via purely random exploration is difficult especially in environments that require longer episodes. We observe that in majority of these cases the policy converges to terminating the episode as early as possible to get the least step penalty. With augmented rewards, Q function recovers from these cases very quickly and gradually learns from more successful episodes. We also added these insights into Section 6.1.\\n\\n\\n> \\u201cWhat are the sizes of action state and action spaces?\\u201d\\n\\nOur action and state spaces are mainly defined by the number of DOM elements in web pages and number of fields in the instructions. 
For example, in flight-booking-form environment, the number of DOM elements is capped at 100, number of fields is 3, and there are two types of actions (click or type). Hence, the number of possible actions would reach 600 and number of possible variables in a state reach 300. These numbers, however, do not reflect the possible \\u201crealization\\u201d of a DOM element or a field; they just reflect a sketch. For example, \\u201cfrom\\u201d field can take a value from 700 possible airports or \\u201cdestination\\u201d input DOM element can be repetitively typed with any value from the instruction. These greatly increase the space of both states and actions. We added this description into the Section 6.1.\\n\\n\\n> \\u201cThe conclusion part is missing.\\u201d\\nThank you for pointing that out. We added the conclusion in the paper.\"}", "{\"title\": \"a good RL application paper for dealing with large action and state spaces\", \"review\": \"This paper developed a curriculum learning method for training an RL agent to navigate a web. It is based on the idea of decomposing an instruction in to multiple sub-instructions, which is equivalent to decompose the original task into multiple easy to solve sub-tasks. The paper is well motivated and easily accessible. The problem tackled in this work is an interesting application of RL dealing with large action and state spaces. It also demonstrates superior performance over the state of the art methods on the same domains\", \"here_are_the_comments_for_improving_this_manuscript\": \"There are a few notations used without definition, for example DOM tree, Potential (in equation (4))\\n\\nSome justification regarding the the Q value function specified in (1) might be helpful, otherwise it looks very adhoc.\\n\\nAlthough using both shallow encoding and augmented reward lead to good empirical results, it might be useful to give more insights, for example, sample size limit cause overfitting for deep models?\\n\\nWhat are the sizes of action state and action spaces?\\n\\nThe conclusion part is missing.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"solid experiments, needs clarity improvement\", \"review\": \"Overall, the problem the paper considers is important and their results seem significant. The authors have derived a novel architecture and are the first to tackle the problem of filling in web forms at this scale with an autonomous learning agent rather than one that is taught mostly by demonstration. \\n\\nThe related work section is very well written with topical references to recent results and solid differentiations to the new algorithm. However, I see many references in the paper are not from peer reviewed conferences or journals. Unless absolutely necessary, such papers should not be cited because they have not been properly peer reviewed. If the papers cited have actually been in a conference or journal, please add the correct attribution.\\n\\nThe experiments seem well conducted. I liked that each new addition to the algorithm was tested incrementally in Figure 7 to give a realistic view of the gains introduced by each change. I also thought the earlier comparisons to the baselines were well done and I liked that they were done against modern cutting-edge LfD demonstrations. The only thing I would have liked to seen beyond these results are actual learning curves showing, after X iterations, what percentage of the tasks could be completed. 
I suspect that in many domains the baseline LfD techniques are learning much faster since learning from teachers tends to be more targeted and sample efficient. Learning curves would show us whether or not this is the case. \\n\\nThe weakest part of the paper was the description of the instructor network and the Meta-training in general. This portion seemed ill-described and largely speculative, despite the promising results in Figure 7. In particular, Section 5 is very unclear on how exactly the Meta-Learning works. Pseudocode is definitely needed in this portion well beyond the quick descriptions in Figure 4 and 5, which I could not understand, despite multiple readings. I suggest eliminating those figures and providing concrete pseudo\\u2014code describing the meta learning and also addressing the following open questions in the text:\\n\\u2022\\tWhy is a rule based randomized policy good to learn from? How is this different from learning from demonstration in the baselines?\\n\\u2022\\tHow is a \\u201cfine grained signal\\u201d generated? What does that mean? Is it a reward?\\n\\u2022\\tIn Section 5.1, are there two RL agents, an instructor and a learner with different reward functions? If so, isn\\u2019t this becoming game theoretic and is this likely to converge in most scenarios?\\n\\u2022\\tWhat does Q_D^I actually represent? Why is maximizing these values a good thing?\", \"summary\": \"The paper proposes a deep reinforcement learning approach to filling out web forms, called QWeb. In addition to both deep and shallow embeddings of the states, the authors evaluate various methods for improving the learning system, including reward shaping, introducing subgoals, and even a meta-learning algorithm that is used as an instructor. These variations are tested in several environments and basic QWeb is shown to outperform the baselines and many of the adaptations perform even better than that in more complex domains.\", \"there_are_a_few_grammatical_mistakes_in_the_paper_including\": \"Abstract \\u2013 simpler environments -> simple environments\\nAbstract- with gradually increasing -> with a gradually increasing\\nPage 2 \\u2013 generate unbounded -> generate an unbounded\\nPage 7 \\u2013 correct value -> correct values\\nPage 9 \\u2013 episode length -> episode lengths\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A novel proposal addressing a complex problem with a large number of components but without a clear analysis of their relevance\", \"review\": \"The paper propose a framework to deal with large state and action\\nspaces with sparse rewards in reinforcement learning. In particular,\\nthey propose to use a meta-learner to generate experience to the agent\\nand to decompose the learning task into simpler sub-tasks. The authors\\ntrain a DQN with a novel architecture to navigate the Web.\", \"in_addition_the_authors_propose_to_use_several_strategies\": \"shallow\\nencoding (SE), reward shaping (AR) and curriculum learning (CI/CG). \\nIt is shown how the proposed method outperforms state-of-the-art\\nsystems on several tasks.\\n\\nIn the first set of experiments it is clear the improved performance\\nof QWeb over Shi17 and Liu18, however, it is not clear why QWeb is not\\nable to learn in the social-media-all problem. The authors tested only\\none of the possible variants (AR) of the proposed approach with good\\nperformance. 
\\n\\nIt is not clear in the book-flight-form environment, why the\\nQWeb+SE+AR obtained 100% success while the MetaQWeb, which includes\\none of main components in this paper, has a lower performance.\\n\\nThe proposed method uses a large number of components/methods, but it\\nis not clear the relevance of each of them. The papers reads like, \\\"I\\nhave a very complex problem to solve so I try all the methods that I\\nthink will be useful\\\". The paper will benefit from an individual\\nassessment of the different components.\\n\\nThe authors should include a section of conclusions and future work.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
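To make the potential-based dense reward described in the author responses above concrete (the potential counts goal DOM elements already matched in the current state, normalized by the goal size, and the reward is the scaled difference of consecutive potentials), here is a small sketch; the data structures and element names are assumptions, not the paper's code.

def potential(state_dom, goal_dom):
    # Fraction of goal DOM elements whose value is already correct in the current state.
    matches = sum(1 for element, value in goal_dom.items() if state_dom.get(element) == value)
    return matches / max(len(goal_dom), 1)

def potential_based_reward(current_dom, next_dom, goal_dom, scale=1.0):
    # Dense reward: scaled difference between the potentials of the next and current states.
    return scale * (potential(next_dom, goal_dom) - potential(current_dom, goal_dom))

# Toy flight-booking goal (element names are assumptions).
goal = {"from": "SFO", "to": "LAX", "date": "12/05/2018"}
before = {"from": "SFO", "to": "", "date": ""}
after = {"from": "SFO", "to": "LAX", "date": ""}
print(potential_based_reward(before, after, goal))  # 0.333...: one more goal element matched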
r1xQQhAqKX
Modeling Uncertainty with Hedged Instance Embeddings
[ "Seong Joon Oh", "Kevin P. Murphy", "Jiyan Pan", "Joseph Roth", "Florian Schroff", "Andrew C. Gallagher" ]
Instance embeddings are an efficient and versatile image representation that facilitates applications like recognition, verification, retrieval, and clustering. Many metric learning methods represent the input as a single point in the embedding space. Often the distance between points is used as a proxy for match confidence. However, this can fail to represent uncertainty which can arise when the input is ambiguous, e.g., due to occlusion or blurriness. This work addresses this issue and explicitly models the uncertainty by “hedging” the location of each input in the embedding space. We introduce the hedged instance embedding (HIB) in which embeddings are modeled as random variables and the model is trained under the variational information bottleneck principle (Alemi et al., 2016; Achille & Soatto, 2018). Empirical results on our new N-digit MNIST dataset show that our method leads to the desired behavior of “hedging its bets” across the embedding space upon encountering ambiguous inputs. This results in improved performance for image matching and classification tasks, more structure in the learned embedding space, and an ability to compute a per-exemplar uncertainty measure which is correlated with downstream performance.
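A minimal sketch of the Monte-Carlo match probability and the self-similarity uncertainty measure that the abstract and the responses below describe: each input is embedded as a Gaussian, samples are drawn from each, and the match probability averages a sigmoid of the negative scaled Euclidean distance over sample pairs. The scale and offset parameters a and b are learned in the paper; the values and sample count used here are placeholders.

import numpy as np

def match_probability(mu1, sigma1, mu2, sigma2, a=1.0, b=0.0, num_samples=8, rng=None):
    # Monte-Carlo estimate of the match probability between two diagonal-Gaussian
    # embeddings N(mu, diag(sigma^2)): average of sigmoid(-a * ||z1 - z2|| + b).
    rng = np.random.default_rng(0) if rng is None else rng
    z1 = mu1 + sigma1 * rng.standard_normal((num_samples, mu1.shape[-1]))
    z2 = mu2 + sigma2 * rng.standard_normal((num_samples, mu2.shape[-1]))
    d = np.linalg.norm(z1[:, None, :] - z2[None, :, :], axis=-1)  # all sample pairs
    return float(np.mean(1.0 / (1.0 + np.exp(a * d - b))))

def self_similarity(mu, sigma, **kwargs):
    # Per-exemplar uncertainty proxy: the input matched against itself;
    # a low value indicates an embedding that "hedges its bets".
    return match_probability(mu, sigma, mu, sigma, **kwargs)

mu, sigma = np.zeros(2), 0.1 * np.ones(2)  # a confident, low-variance embedding
print(self_similarity(mu, sigma))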
[ "uncertainty", "instance embedding", "metric learning", "probabilistic embedding" ]
https://openreview.net/pdf?id=r1xQQhAqKX
https://openreview.net/forum?id=r1xQQhAqKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryeWLWXDJ4", "B1x9ZzYmCX", "SygJLmYLpX", "S1e2VQtLTQ", "Byx_z7K86m", "rkgPgQYLam", "r1gHkXK86X", "rJlt3zKLpm", "HylIjDWunQ", "SJl9pTb827", "ByegqBDioX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544134985036, 1542849026343, 1541997383176, 1541997364369, 1541997328316, 1541997294578, 1541997276584, 1541997233155, 1541048222338, 1540918721923, 1540220296147 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1341/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1341/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1341/Authors" ], [ "ICLR.cc/2019/Conference/Paper1341/Authors" ], [ "ICLR.cc/2019/Conference/Paper1341/Authors" ], [ "ICLR.cc/2019/Conference/Paper1341/Authors" ], [ "ICLR.cc/2019/Conference/Paper1341/Authors" ], [ "ICLR.cc/2019/Conference/Paper1341/Authors" ], [ "ICLR.cc/2019/Conference/Paper1341/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1341/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1341/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This work presents a method to model embeddings as distributions, instead of points, to better quantify uncertainty. Evaluations are carried out on a new dataset created from mixtures of MNIST digits, including noise (certain probability of occlusions), that introduce ambiguity, using a small \\\"toy\\\" neural network that is incapable of perfectly fitting the data, because authors mention that performance difference lessens when the network is complex enough to almost perfectly fit the data.\\n\\nReviewer assessment is unanimously accept, with the following points:\", \"pros\": [\"\\\"The topic of injecting uncertainty in neural networks should be of broad interest to the ICLR community.\\\"\", \"\\\"The paper is generally clear.\\\"\", \"\\\"The qualitative evaluation provides intuitive results.\\\"\"], \"cons\": [\"Requirement of drawing samples may add complexity. Authors reply that alternatives should be studied in future work.\", \"No comparison to other uncertainty methods, such as dropout. Authors reply that dropout represents model uncertainty and not data uncertainty, but do not carry out an experiment to compare (i.e. sample from model leaving dropout activated during evaluation).\", \"No evaluation in larger scale/dimensionality datasets. Authors mention method scales linearly, but how practical or effective this method is to use on, say, face recognition datasets, is unclear.\", \"As the general reviewer consensus is accept, Area Chair is recommending Accept; However, Area Chair has strong reservations because the method is evaluated on a very limited dataset, with a toy model designed to exaggerate differences between techniques. Essentially, the toy evaluation was designed to get the results the authors were looking for. A more thorough investigation would use more realistic sized network models on true datasets.\"], \"confidence\": \"2: The area chair is not sure\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Point embeddings changed to distribution embeddings to model uncertainty.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"= While we agree it would be interesting to see if collecting additional views of an ambiguous input reduces its uncertainty, this is not always an option in practice (e.g., face recognition from a single image). 
Therefore, we focus on the task of assessing when a model should say \\\"I don't know\\\" given a single input.\\n\\nWell, you can imagine a reinforcement learning kind of scenario, or bandit scenario, where you are accomplishing some task where multiple samples increases certainty, but you have a budget on how many samples you can take. So, using your measure of uncertainty, you can then come up with an \\\"optimal sampling schedule\\\", which could really improve the performance of such a task. \\n\\n\\n=The aim of Figure 5 is not to measure the quality of embedding (which is measured in Section 4.3); it is to measure the quality of our uncertainty (i.e., higher uncertainty for low-performance inputs). Figure 5 is confirming that our measure of uncertainty is indeed a good predictor of downstream task performances. If this misses the point, please further clarify your concern.\\n\\nI still think such a thing should be measured against a task that did not use the embedding; you can still compare uncertainty and task performance in the same way. It is a bit nitpicky, but I think it would make the evaluation results more meaningful.\"}", "{\"title\": \"Author Response 2/2\", \"comment\": \"= R2: Is MoG implemented \\u201cby having 2C output branches generating C means and C standard deviation vectors\\u201d?\\n\\nYes, your description is correct. We have added the detail in Section 2.2, \\u201cMoG embedding\\u201d paragraph in the revision.\\n\\n\\n= R2: \\u201cIt would be useful to report results without the VIB regularization.\\u201d\\n\\nWe have added the results in the new appendix section E, \\u201cKL Divergence Regularization\\u201d. We confirm that the KL term improves generalisation for the main tasks (verification and recognition) and better calibrates the uncertainty measure.\\n\\n\\n= R2: Doubt \\u201cpractical usefulness\\u201d in a higher-dimensional case than D=2 or 3 in the paper.\\n\\nThe space and time complexity of increasing D scale only linearly with D. We focus on D=2 and 3 because these compact embeddings stress the network's ability to discriminate well across many (100 or 1000) classes. We have successfully trained HIB with larger dimensional embeddings (D=6) and in that case HIB also exhibits good correlations between uncertainty and task performance. However, the task accuracy with N-digit-MNIST begins to saturate, making it difficult to further explore the relationship between the uncertainty measure and task performance.\\n\\n\\n= R3: Evaluation should confirm that the \\u201cuncertainty measure actually affects the downstream task in a known manner\\u201d for example by showing it helps certain \\u201cactive learning framework\\u201d.\\n\\nWhile we agree it would be interesting to see if collecting additional views of an ambiguous input reduces its uncertainty, this is not always an option in practice (e.g., face recognition from a single image). Therefore, we focus on the task of assessing when a model should say \\\"I don't know\\\" given a single input.\\n\\n\\n= R3: In Figure 5, correlation between the embedding uncertainty and KNN will be high regardless of the quality of embedding.\\n\\nThe aim of Figure 5 is not to measure the quality of embedding (which is measured in Section 4.3); it is to measure the quality of our uncertainty (i.e., higher uncertainty for low-performance inputs). Figure 5 is confirming that our measure of uncertainty is indeed a good predictor of downstream task performances. 
If this misses the point, please further clarify your concern.\"}", "{\"title\": \"Author Response 2/2\", \"comment\": \"= R2: Is MoG implemented \\u201cby having 2C output branches generating C means and C standard deviation vectors\\u201d?\\n\\nYes, your description is correct. We have added the detail in Section 2.2, \\u201cMoG embedding\\u201d paragraph in the revision.\\n\\n\\n= R2: \\u201cIt would be useful to report results without the VIB regularization.\\u201d\\n\\nWe have added the results in the new appendix section E, \\u201cKL Divergence Regularization\\u201d. We confirm that the KL term improves generalisation for the main tasks (verification and recognition) and better calibrates the uncertainty measure.\\n\\n\\n= R2: Doubt \\u201cpractical usefulness\\u201d in a higher-dimensional case than D=2 or 3 in the paper.\\n\\nThe space and time complexity of increasing D scale only linearly with D. We focus on D=2 and 3 because these compact embeddings stress the network's ability to discriminate well across many (100 or 1000) classes. We have successfully trained HIB with larger dimensional embeddings (D=6) and in that case HIB also exhibits good correlations between uncertainty and task performance. However, the task accuracy with N-digit-MNIST begins to saturate, making it difficult to further explore the relationship between the uncertainty measure and task performance.\\n\\n\\n= R3: Evaluation should confirm that the \\u201cuncertainty measure actually affects the downstream task in a known manner\\u201d for example by showing it helps certain \\u201cactive learning framework\\u201d.\\n\\nWhile we agree it would be interesting to see if collecting additional views of an ambiguous input reduces its uncertainty, this is not always an option in practice (e.g., face recognition from a single image). Therefore, we focus on the task of assessing when a model should say \\\"I don't know\\\" given a single input.\\n\\n\\n= R3: In Figure 5, correlation between the embedding uncertainty and KNN will be high regardless of the quality of embedding.\\n\\nThe aim of Figure 5 is not to measure the quality of embedding (which is measured in Section 4.3); it is to measure the quality of our uncertainty (i.e., higher uncertainty for low-performance inputs). Figure 5 is confirming that our measure of uncertainty is indeed a good predictor of downstream task performances. If this misses the point, please further clarify your concern.\"}", "{\"title\": \"Author Response 2/2\", \"comment\": \"= R2: Is MoG implemented \\u201cby having 2C output branches generating C means and C standard deviation vectors\\u201d?\\n\\nYes, your description is correct. We have added the detail in Section 2.2, \\u201cMoG embedding\\u201d paragraph in the revision.\\n\\n\\n= R2: \\u201cIt would be useful to report results without the VIB regularization.\\u201d\\n\\nWe have added the results in the new appendix section E, \\u201cKL Divergence Regularization\\u201d. We confirm that the KL term improves generalisation for the main tasks (verification and recognition) and better calibrates the uncertainty measure.\\n\\n\\n= R2: Doubt \\u201cpractical usefulness\\u201d in a higher-dimensional case than D=2 or 3 in the paper.\\n\\nThe space and time complexity of increasing D scale only linearly with D. We focus on D=2 and 3 because these compact embeddings stress the network's ability to discriminate well across many (100 or 1000) classes. 
We have successfully trained HIB with larger dimensional embeddings (D=6) and in that case HIB also exhibits good correlations between uncertainty and task performance. However, the task accuracy with N-digit-MNIST begins to saturate, making it difficult to further explore the relationship between the uncertainty measure and task performance.\\n\\n\\n= R3: Evaluation should confirm that the \\u201cuncertainty measure actually affects the downstream task in a known manner\\u201d for example by showing it helps certain \\u201cactive learning framework\\u201d.\\n\\nWhile we agree it would be interesting to see if collecting additional views of an ambiguous input reduces its uncertainty, this is not always an option in practice (e.g., face recognition from a single image). Therefore, we focus on the task of assessing when a model should say \\\"I don't know\\\" given a single input.\\n\\n\\n= R3: In Figure 5, correlation between the embedding uncertainty and KNN will be high regardless of the quality of embedding.\\n\\nThe aim of Figure 5 is not to measure the quality of embedding (which is measured in Section 4.3); it is to measure the quality of our uncertainty (i.e., higher uncertainty for low-performance inputs). Figure 5 is confirming that our measure of uncertainty is indeed a good predictor of downstream task performances. If this misses the point, please further clarify your concern.\"}", "{\"title\": \"Author Response 1/2\", \"comment\": \"We thank the reviewers for recognising the importance of the problem (R2) and finding the paper well-written (R2, R3). We have revised the submission according to reviewers\\u2019 suggestions and proposals (see summary of updates below). We respond to each reviewer\\u2019s comments below.\\n\\n\\n= Summary of updates in revision:\\n\\n- Discussion of conceptual inapplicability of MC dropout for probabilistic embedding in Section 3 (R1).\\n- Discussion of the intuition behind self-similarity for uncertainty measure and its conceptual advantage over the trace of covariance matrix (R2).\\n- Description of network architecture for MoG embedding in Section 2.2 (R2).\\n- New appendix section E for qualitative and quantitative analysis of the impact of KL divergence regularization term (R2).\\n- Added (N=2, D=3) columns in tables 1 and 2.\\n- Typos (R2,3).\\n\\n\\n= R1: Comparison against \\u201cexisting uncertainty methods like dropout\\u201d.\\n\\nRandomness in MC dropout is independent of input. It is designed to measure model uncertainty (epistemic uncertainty). On the other hand, our model is designed to measure input uncertainty (aleatoric uncertainty). They are conceptually distinct methods. We have added this discussion in Section 3, \\u201cProbabilistic DNNs\\u201d paragraph of the revision.\\n\\n\\n= R1: Unlike MC dropout, \\u201chyperparameters [such as number of components for MoG] are a pain point\\u201d.\\n\\nWe have found results to be fairly insensitive to number of mixture components in the MoG. Note that MC dropout also has parameters to tune!\\n\\n\\n= R1: How should I choose the number of components given that there are ten possibilities for each digit?\\n\\nDo cross-validation if performance is critical. We have shown, however, that both a single Gaussian and two-component MoG perform well in our setup (Section 4.3).\\n\\n\\n= R2: Sampling based similarity computation is \\u201ccomplicated\\u201d. Why not compute analytic distances for Gaussians like Expected Likelihood Kernel or Hellinger Kernel? 
\\n\\nOur similarity computation based on distance samples is motivated by the contrastive loss metric learning objective (Section 2.1), where Euclidean distances between embeddings encode similarity of inputs. Compared to divergence (KL or JS), inner product (ELK), or Hellinger kernel based distances, HIB is designed to more directly represent distributions of Euclidean distances on the embedding space (through the match probability, Equation 2). However, we agree that it would be interesting to explore no-sampling alternatives in future work.\\n\\n\\n= R2: What is the intuition behind self-similarity for uncertainty? Why not use the trace of the covariance matrix for uncertainty?\\n\\nThe self-similarity uncertainty measure starts from the intuition that for ambiguous inputs, their embeddings will span diverse semantic classes (as in Figure 1b). To quantify this, we have defined self-similarity as the chance that given an input x and two independent samples (z1, z2) from its embedding p(z|x), they belong to the same semantic cluster (i.e., their match probability).\\n We do not use volumetric uncertainty measures like trace or determinant of covariance matrix because it does not make sense for multi-modal distributions like MoG. We have updated Section 2.4 with this discussion.\"}", "{\"title\": \"Author Response 1/2\", \"comment\": \"We thank the reviewers for recognising the importance of the problem (R2) and finding the paper well-written (R2, R3). We have revised the submission according to reviewers\\u2019 suggestions and proposals (see summary of updates below). We respond to each reviewer\\u2019s comments below.\\n\\n\\n= Summary of updates in revision:\\n\\n- Discussion of conceptual inapplicability of MC dropout for probabilistic embedding in Section 3 (R1).\\n- Discussion of the intuition behind self-similarity for uncertainty measure and its conceptual advantage over the trace of covariance matrix (R2).\\n- Description of network architecture for MoG embedding in Section 2.2 (R2).\\n- New appendix section E for qualitative and quantitative analysis of the impact of KL divergence regularization term (R2).\\n- Added (N=2, D=3) columns in tables 1 and 2.\\n- Typos (R2,3).\\n\\n\\n= R1: Comparison against \\u201cexisting uncertainty methods like dropout\\u201d.\\n\\nRandomness in MC dropout is independent of input. It is designed to measure model uncertainty (epistemic uncertainty). On the other hand, our model is designed to measure input uncertainty (aleatoric uncertainty). They are conceptually distinct methods. We have added this discussion in Section 3, \\u201cProbabilistic DNNs\\u201d paragraph of the revision.\\n\\n\\n= R1: Unlike MC dropout, \\u201chyperparameters [such as number of components for MoG] are a pain point\\u201d.\\n\\nWe have found results to be fairly insensitive to number of mixture components in the MoG. Note that MC dropout also has parameters to tune!\\n\\n\\n= R1: How should I choose the number of components given that there are ten possibilities for each digit?\\n\\nDo cross-validation if performance is critical. We have shown, however, that both a single Gaussian and two-component MoG perform well in our setup (Section 4.3).\\n\\n\\n= R2: Sampling based similarity computation is \\u201ccomplicated\\u201d. Why not compute analytic distances for Gaussians like Expected Likelihood Kernel or Hellinger Kernel? 
\\n\\nOur similarity computation based on distance samples is motivated by the contrastive loss metric learning objective (Section 2.1), where Euclidean distances between embeddings encode similarity of inputs. Compared to divergence (KL or JS), inner product (ELK), or Hellinger kernel based distances, HIB is designed to more directly represent distributions of Euclidean distances on the embedding space (through the match probability, Equation 2). However, we agree that it would be interesting to explore no-sampling alternatives in future work.\\n\\n\\n= R2: What is the intuition behind self-similarity for uncertainty? Why not use the trace of the covariance matrix for uncertainty?\\n\\nThe self-similarity uncertainty measure starts from the intuition that for ambiguous inputs, their embeddings will span diverse semantic classes (as in Figure 1b). To quantify this, we have defined self-similarity as the chance that given an input x and two independent samples (z1, z2) from its embedding p(z|x), they belong to the same semantic cluster (i.e., their match probability).\\n We do not use volumetric uncertainty measures like trace or determinant of covariance matrix because it does not make sense for multi-modal distributions like MoG. We have updated Section 2.4 with this discussion.\"}", "{\"title\": \"Author Response 1/2\", \"comment\": \"We thank the reviewers for recognising the importance of the problem (R2) and finding the paper well-written (R2, R3). We have revised the submission according to reviewers\\u2019 suggestions and proposals (see summary of updates below). We respond to each reviewer\\u2019s comments below.\\n\\n\\n= Summary of updates in revision:\\n\\n- Discussion of conceptual inapplicability of MC dropout for probabilistic embedding in Section 3 (R1).\\n- Discussion of the intuition behind self-similarity for uncertainty measure and its conceptual advantage over the trace of covariance matrix (R2).\\n- Description of network architecture for MoG embedding in Section 2.2 (R2).\\n- New appendix section E for qualitative and quantitative analysis of the impact of KL divergence regularization term (R2).\\n- Added (N=2, D=3) columns in tables 1 and 2.\\n- Typos (R2,3).\\n\\n\\n= R1: Comparison against \\u201cexisting uncertainty methods like dropout\\u201d.\\n\\nRandomness in MC dropout is independent of input. It is designed to measure model uncertainty (epistemic uncertainty). On the other hand, our model is designed to measure input uncertainty (aleatoric uncertainty). They are conceptually distinct methods. We have added this discussion in Section 3, \\u201cProbabilistic DNNs\\u201d paragraph of the revision.\\n\\n\\n= R1: Unlike MC dropout, \\u201chyperparameters [such as number of components for MoG] are a pain point\\u201d.\\n\\nWe have found results to be fairly insensitive to number of mixture components in the MoG. Note that MC dropout also has parameters to tune!\\n\\n\\n= R1: How should I choose the number of components given that there are ten possibilities for each digit?\\n\\nDo cross-validation if performance is critical. We have shown, however, that both a single Gaussian and two-component MoG perform well in our setup (Section 4.3).\\n\\n\\n= R2: Sampling based similarity computation is \\u201ccomplicated\\u201d. Why not compute analytic distances for Gaussians like Expected Likelihood Kernel or Hellinger Kernel? 
\\n\\nOur similarity computation based on distance samples is motivated by the contrastive loss metric learning objective (Section 2.1), where Euclidean distances between embeddings encode similarity of inputs. Compared to divergence (KL or JS), inner product (ELK), or Hellinger kernel based distances, HIB is designed to more directly represent distributions of Euclidean distances on the embedding space (through the match probability, Equation 2). However, we agree that it would be interesting to explore no-sampling alternatives in future work.\\n\\n\\n= R2: What is the intuition behind self-similarity for uncertainty? Why not use the trace of the covariance matrix for uncertainty?\\n\\nThe self-similarity uncertainty measure starts from the intuition that for ambiguous inputs, their embeddings will span diverse semantic classes (as in Figure 1b). To quantify this, we have defined self-similarity as the chance that given an input x and two independent samples (z1, z2) from its embedding p(z|x), they belong to the same semantic cluster (i.e., their match probability).\\n We do not use volumetric uncertainty measures like trace or determinant of covariance matrix because it does not make sense for multi-modal distributions like MoG. We have updated Section 2.4 with this discussion.\"}", "{\"title\": \"Modelling Uncertainty with Hedged Instance Embeddings\", \"review\": \"# Summary\\nPaper proposes an alternative to current point embedding and a technique to train them. Point embedding are conventional embedding where an input x is deterministically mapped to a vector in embedding space.\\n\\ni.e f(x) = z where f may be a parametric function or trained Neural network.\\n\\nNote that this point embedding means that every x is assigned a unique z, this might be an issue in cases where x is confusing for example if x is an image in computer vision pipeline then x may be occluded etc. In such cases paper argues that assigning a single point as embedding is not a great option.\\n\\nPaper says that instead of assigning a single point it's better to assign smear of points (collection of points coming from some distributions like Gaussian and mixture of Gaussian etc) \\n\\nThey provide a technique based on variational inference to train the network to produce such embeddings. They also propose a new dataset made out of MNIST to test this concept.\\n\\n# Concerns\\n\\nAlthough they have results to back up their claim on their proposed dataset and problem. They have not compared with many existing uncertainty methods like dropout. (But I\\u2019m not sure if such a comparison is relevant here)\\nUnlike Kendall method or dropout method, hyperparameters here are a pain point for me, i.e how many Gaussians should I consider in my mixture of Gaussian to create the embeddings (results will depend upon that)\\nI.e consider the following scenario\\nThe first digit is occluded and can be anything 1,2,3,4,5,6,7,8,9,0 should I use only one Gaussian to create my embeddings like they have shown in the paper for this example, or should I choose 10 gaussian each centered about one of the digits, which might help in boosting the performance?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Great paper! Could use more uncertainty-measuring application / experiments\", \"review\": \"pros: The paper is well-written and well-motivated. 
It seems like uncertain-embeddings will be a valuable tool as we continue to extend deep learning to Bayesian applications, and the model proposed here seems to work well, qualitatively. Additionally the paper is well-written, in that every step used to construct the loss function and training seem well motivated and generally intuitive, and the simplistic CNN and evaluations give confidence that this is not a random result.\", \"cons\": \"I think the quantitative results are not as impressive as I would have expected, and I think it is because the wrong thing is being evaluated. It would make the results more impressive to try to use these embeddings in some active learning framework, to see if proper understanding of uncertainty helps in a task where a good uncertainty measure actually affects the downstream task in a known manner. Additionally, I don't think Fig 5 makes sense, since you are using the embeddings for the KNN task, then measuring correlation between the embedding uncertainty and KNN, which might be a high correlation without the embedding being good.\", \"minor_comments\": [\"Typo above (5) on page 3.\", \"Appendix line under (12), I think dz1 and dz2 should be after the KL terms.\"], \"reviewer_uncertainty\": \"I am not familiar enough with the recent literature on this topic to judge novelty.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Review of \\\"Modeling Uncertainty with Hedged Instance Embeddings\\\"\", \"review\": \"While most works consider embedding as the problem of mapping an input into a point in an embedding space, paper 1341 considers the problem of mapping an input into a distribution in an embedding space. Computing the matching score of two inputs (e.g. two images) involves the following steps: (i) assuming a Gaussian distribution in the embedding space, computing the mean and standard deviation for each input, (ii) drawing a set of samples from each distribution, (3) computing the normalized distances between the samples and (iv) averaging to obtain a global score.\\n\\nThe proposed approach is validated on a new benchmark built on MNIST.\", \"on_the_positive_side\": [\"The topic of injecting uncertainty in neural networks should be of broad interest to the ICLR community.\", \"The paper is generally clear.\", \"The qualitative evaluation provides intuitive results.\"], \"on_the_negative_side\": [\"The whole idea of drawing samples to compute the distance between two Gaussian distributions seems unnecessarily complicated. Why not computing directly a distance between distributions? There exist kernels between distributions, such as the Probability Product Kernel (PPK). See Jebara, Kondor, Howard \\u201cProbability product kernels\\u201d, JMLR\\u201904. The PPK between two distributions p(x) and q(x) writes as: \\\\int_x p^a(x) q^a(x) dx, where a is a parameter. When a=1, it is known as the Expected Likelihood Kernel (ELK). When a=1/2, this is known as the Hellinger or Bhattacharyya kernel (BK). In p and q are Gaussian distributions, then the PPK can be computed in closed form. If p and q are mixtures of Gaussians, then the ELK can be computed in closed form.\", \"The Mixture of Gaussians embedding extension is lacking in details. How does the network generate C Gaussian distributions? 
By having 2C output branches generating C means and C standard deviation vectors?\", \"It might be useful to provide more details about why the self-similarity measure makes sense as an uncertainty measure. In its current state, the paper does not provide much intuition and it took me some time to understand (I actually understood when I made the connection with the ELK). Also, why not using a simpler measure of uncertainty such as the trace of the covariance matrix?\", \"The experiments are lacking in some respects:\", \"o\\tIt would be useful to report results without the VIB regularization.\", \"o\\tThe focus on the cases D=2 and D=3 (embedding in a 2D or 3D space) shades some doubt on the practical usefulness of this framework in a higher-dimensional case.\"], \"miscellaneous\": \"-\\tIt seems there is a typo between equations (4) and (5). It should write z_1^{(k_1)} \\\\sim p(z_1|x_1)\\n\\n--- \\n\\nIn their rebuttal, the authors satisfyingly addressed my concerns. Hence, I am upgrading my overall rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
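The HIB exchange above describes its similarity computation only in prose: K samples are drawn from each Gaussian embedding, a match probability is averaged over the K*K sample pairs, and the self-similarity of an input with itself serves as the uncertainty measure. A minimal sketch of that Monte-Carlo computation is given below, assuming diagonal-Gaussian embeddings and a sigmoid-of-distance match head with scale and offset parameters `a` and `b`; these choices stand in for the paper's "Equation 2" and are assumptions, not details taken from the record.

```python
import numpy as np

def match_probability(mu1, sigma1, mu2, sigma2, K=8, a=1.0, b=0.0, rng=None):
    """Monte-Carlo match probability between two diagonal-Gaussian embeddings:
    draw K samples from each and average sigmoid(-a * ||z1 - z2|| + b) over
    all K*K sample pairs. The sigmoid-of-distance form and (a, b) are
    placeholder assumptions for the paper's learned match-probability head."""
    rng = np.random.default_rng() if rng is None else rng
    mu1, sigma1 = np.asarray(mu1, float), np.asarray(sigma1, float)
    mu2, sigma2 = np.asarray(mu2, float), np.asarray(sigma2, float)
    z1 = mu1 + sigma1 * rng.standard_normal((K, mu1.shape[-1]))
    z2 = mu2 + sigma2 * rng.standard_normal((K, mu2.shape[-1]))
    d = np.linalg.norm(z1[:, None, :] - z2[None, :, :], axis=-1)  # (K, K) pairwise distances
    return float(np.mean(1.0 / (1.0 + np.exp(a * d - b))))

def self_similarity(mu, sigma, **kw):
    """Uncertainty measure discussed in the responses: the match probability of
    an input with itself, computed from two independent sample sets drawn from
    p(z|x). Wider embeddings give larger typical distances and hence lower
    self-similarity, flagging ambiguous inputs."""
    return match_probability(mu, sigma, mu, sigma, **kw)
```

A mixture-of-Gaussians embedding would add one step, sampling a component index per draw before sampling from that component; the network-output layout the reviewer asks about (2C branches for C means and C standard deviations) is one plausible parameterization, but the record defers the actual architecture to the paper's Section 2.2.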
B1ffQnRcKX
Automatically Composing Representation Transformations as a Means for Generalization
[ "Michael Chang", "Abhishek Gupta", "Sergey Levine", "Thomas L. Griffiths" ]
A generally intelligent learner should generalize to more complex tasks than it has previously encountered, but the two common paradigms in machine learning -- either training a separate learner per task or training a single learner for all tasks -- both have difficulty with such generalization because they do not leverage the compositional structure of the task distribution. This paper introduces the compositional problem graph as a broadly applicable formalism to relate tasks of different complexity in terms of problems with shared subproblems. We propose the compositional generalization problem for measuring how readily old knowledge can be reused and hence built upon. As a first step for tackling compositional generalization, we introduce the compositional recursive learner, a domain-general framework for learning algorithmic procedures for composing representation transformations, producing a learner that reasons about what computation to execute by making analogies to previously seen problems. We show on a symbolic and a high-dimensional domain that our compositional approach can generalize to more complex problems than the learner has previously encountered, whereas baselines that are not explicitly compositional do not.
[ "compositionality", "deep learning", "metareasoning" ]
https://openreview.net/pdf?id=B1ffQnRcKX
https://openreview.net/forum?id=B1ffQnRcKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJggy6LTy4", "SygCi7pr14", "BJeiNE7TaQ", "H1ei6m7TaX", "Byg3F7Qap7", "r1gbHlap3X", "BJxemWyanX", "r1x-kZyT2X", "Hkx25xk6hQ", "BJl1SmLj3m", "SJg_qTBs3X", "B1eifeX9n7", "HJlzO5a1nm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "comment", "official_review", "comment" ], "note_created": [ 1544543448322, 1544045478250, 1542431794992, 1542431682960, 1542431620461, 1541423161088, 1541366040057, 1541365977065, 1541365908094, 1541264182944, 1541262735839, 1541185554949, 1540508265541 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1340/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1340/Authors" ], [ "ICLR.cc/2019/Conference/Paper1340/Authors" ], [ "ICLR.cc/2019/Conference/Paper1340/Authors" ], [ "ICLR.cc/2019/Conference/Paper1340/Authors" ], [ "ICLR.cc/2019/Conference/Paper1340/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1340/Authors" ], [ "ICLR.cc/2019/Conference/Paper1340/Authors" ], [ "ICLR.cc/2019/Conference/Paper1340/Authors" ], [ "ICLR.cc/2019/Conference/Paper1340/AnonReviewer1" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1340/AnonReviewer2" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"\", \"pros\": [\"the paper is well-written and presents a nice framing of the composition problem\", \"good comparison to prior work\", \"very important research direction\"], \"cons\": \"- from an architectural standpoint the paper is somewhat incremental over Routing Networks [Rosenbaum et al]\\n- as Reviewers 2 and 3 point out, the experiments are a bit weak, relying on heuristics such as a window over 3 symbols in the multi-lingual arithmetic case, and a pre-determined set of operations (scaling, translation, rotation, identity) in the MNIST case.\\n\\nAs the authors state, there are three core ideas in this paper (my paraphrase):\\n\\n(1) training on a set of compositional problems (with the right architecture/training procedure) can encourage the model to learn modules which can be composed to solve new problems, enabling better generalization. \\n(2) treating the problem of selecting functions for composition as a sequential decision-making problem in an MDP\\n(3) jointly learning the parameters of the functions and the (meta-level) composition policy.\\n\\nAs discussed during the review period, these three ideas are already present in the Routing Networks (RN) architecture of Rosenbaum et al. However CRL offers insights and improvements over RN algorithmically in a several ways:\\n\\n(1) CRL uses a curriculum learning strategy. This seems to be key in achieving good results and makes a lot of sense for naturally compositional problems.\\n(2) The focus in RN was on using the architecture to solve multi-task problems in object recognition. The solutions learned in image domains while \\\"compositional\\\" are less clearly interpretable. In this paper (CRL) the focus is more squarely on interpretable compositional tasks like arithmetic and explores extrapolation.\\n(3) The RN architecture does support recursion (and there are some experiments in this mode) but it was not the main focus. In this paper (CRL) recursion is given a clear, prominent role.\\n\\nI appreciate that the authors' engagement in the discussion period. 
My feeling is that the paper offers nice improvements, a useful framing of the problem, a clear recursive formulation, and a more central focus on naturally compositional problems. I am recommending the paper for acceptance but suggest that the authors remove or revise their contributions (3) and (4) on pg. 2 in light of the discussion on routing nets.\\n\\nRouting Networks, Adaptive Selection of Non-Linear Functions for Multi-task Learning, ICLR 2018\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Nice framing of the problem; architecturally somewhat incremental over routing nets\"}", "{\"title\": \"Quantitive Evaluation of MNIST transformations\", \"comment\": \"Below is a quantitative evaluation of how CRL compares with a CNN baseline.\\n\\nThe dataset contains MNIST digits that have been scaled (S), rotated (R), and translated (T). There are two types of scaling: large and small. There are two types of rotation: left and right. There are four types of translation: left, right, up, and down. The set of depth-2 compositions (20 total) we considered are scale->translate (2*4 possible), rotate->translate (2*4 possible), scale->rotate (2*2 possible). \\u201cscale->translate\\u201d means that the image was first scaled, then translated. The set of depth-3 compositions we considered are scale->rotate->translate (2*2*4 possible). \\n\\nThe training set is 16 out of the 20 depth-2 compositions, the first hold-out set is the remaining 4 out of the 20 depth-2 compositions, and the second hold-out set is the set of depth-3 compositions. The first hold-out set tests extrapolation to a disjoint set of transformation combinations of the same depth as training; the second hold-out set tests extrapolation to a set of transformation combinations of longer depth than in training.\\n\\nThe CNN baseline was pre-trained to classify canonical MNIST digits, and it continued training on transformed MNIST digits.\\nCRL used the same pre-trained MNIST classifier as a decoder (whose weights are frozen), and learned a set of Spatial Transformer Networks (STN) constrained to rotate, scale, or translate.\\nWe noticed instability in training the STNs to model drastic translations (where the digit was translated more than 15% the width of the images). A potential reason for this is that because the weights of CRL\\u2019s decoder (pre-trained MNIST classifier) are frozen, the classifier acts as a more complex loss functions for the upstream STNs. We addressed this challenge by defining a curriculum for the translated data, where initially the digit was translated by a small amount, and at the end of the curriculum, the digit is translated to the far edge of the image. We applied this curriculum to both CRL and the baseline.\\n\\nThe results are as follows (over 5 random seeds):\", \"training_set_accuracy\": \"\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\u2014\\nCNN\", \"median\": \"0.69\\n10% quantile: 0.60\\n90% quantile: 0.71\\n\\nWe notice that CRL performs a bit worse on the training set because it is constrained to go through the bottleneck of only using Spatial Transformation Networks, whereas the CNN is free to fit the training set without such constraints. In the hold-out sets, it is clear that the CNN overfits to the training set and is unable to classify MNIST digits that have been transformed by a set of transformation combinations it has not seen before. 
CRL, on the other hand, generalizes significantly better because it re-uses the primitive spatial transformations it had learned during training to re-represent the image into a canonical MNIST digit.\"}", "{\"title\": \"Comparison with Routing Networks\", \"comment\": \"Based on OP\\u2019s suggestions, we have included a paragraph in Section 3.4 (\\u201cDiscussion of Design Choices\\u201d) that features a discussion that compares CRL with Routing Networks.\\n\\nTo avoid misrepresenting Routing Networks, we have revised the wording of the experiment of Appendix D.2 to compare with a mixture-of-expert- inspired baseline, rather than Routing Networks, because as OP points out, 1) RN does not necessarily have a separate controller per time step and 2) RN does not necessarily use a different set of functions per computation step. The purpose of this experiment is to show the benefits of reusing modules across computation steps and to show the benefit of allowing a flexible computation horizon.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for their constructive review, which helped us improve the paper in the following aspects. We would be happy to incorporate any other suggestions Reviewer 2 may have for the paper.\\n\\n1. We have revised Section 3.1 and the introductory paragraph of Section 3 to be more precise about the domain-specific assumptions CRL makes about the problem distribution. In particular, we included a discussion about restricting the representational vocabulary and the functional form of the modules as a way to incorporate as an inductive bias domain-specific knowledge of the problem distribution. \\n\\n2. We agree with Reviewer 2 that the \\u201crecursive\\u201d/\\u201dtranslational\\u201d terminology should be clearer. Therefore, we have revised the \\u201cProblems\\u201d and \\u201cThe goal\\u201d paragraphs in Section 2 to remove the discussion on translational problems and only focus on recursive problems, where the input and output representations are drawn from the same vocabulary.\\n\\n3. Further, we agree with and appreciate Reviewer 2\\u2019s analysis that our paper is only a first step towards the full general problem of discovering subproblem decomposition. Accordingly we have revised the end of Section 6 (Discussion) to acknowledge this. We also revised \\u201cThe challenge\\u201d paragraph in Section 2 to be more precise that we are not solving the general subproblem decomposition problem, but rather solving the problem of learning to compose partial solutions to subproblems when the general form of the subproblem decomposition of a task distribution is known.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank Reviewer 3 for their constructive review, which helped us improve the paper in various aspects. We would be happy to incorporate any other suggestions Reviewer 3 may have for the paper. We would like to make the following clarifications:\\n\\n1. We have clarified in Section 4.1 that arithmetic problems are modulo-10.\\n\\n2. With regards to how CRL compares to the RNN on test data with the same length as the training data, Figure 2b shows that there is a substantial difference between CRL (red curve) and RNN (purple curve). It is only with 10x more data does the RNN (yellow curve) reach comparable performance with CRL.\\n\\n3. Reviewer 3 noted that in the right half of Figure 4, the top-two examples showed that CRL performs transformation twice, when in fact this can be achieved by only translation. 
This is true. For simplicity, we had fixed the number of transformations to two transformations. That CRL finds alternate ways of achieving the same end representation (using two translations instead of one) illustrates a core feature of the CRL framework: that it is possible to solve a problem (e.g. a large translation) by composing together partial solutions (two small translation).\\n\\n4. We will have the baseline experiments Reviewer 3 requested in time for the final, and will endeavor to add these in to the paper during the discussion period.\"}", "{\"title\": \"Well-written paper; second experiment could be made stronger.\", \"review\": \"Summary: This paper is about trying to learn a function from typed input-output data so that it can generalize to test data with an input-output type that it hasn't seen during training. It should be able to use \\\"analogy\\\" (if we want to translate from French to Spanish but don't know how to do so directly, we should translate from French to English and English to Spanish). It should also be able to generalize better by learning useful \\\"subfunctions\\\" that can be composed together by an RL agent. We set up the solution as having a finite number of subfunctions, including \\\"HALT\\\" which signifies the end of computation. At each timestep an RL agent chooses a subfunction to apply to the current representation until \\\"HALT\\\" is chosen. The main idea is we parameterize these subfunctions and the RL agent as neural networks which are learned based on input -output data. RL agent is also penalized for using many subfunctions. The algorithm is called compositional recursive learner (CRL). Both analogy and meaningful subfunctions should arise purely because of this design.\\n\\nMultilingual arithmetic experiment. I found this experiment interesting although it would be helpful to specify that it is about mod-10 arithmetic. I was very confused for some time since the arithmetic expressions didn't seem to be evaluated correctly. It also seems that it is actually the curriculum learning that helps the most (vanilla CRL doesn't seem to perform very well) although authors do note that such curriculum learning doesn't help the RNN baseline. It also seems that CRL with curriculum doesn't outperform the RNN baseline that much on test data with the same length as training data. The difference is larger when tested on longer sequences. However here, the CRL learning curve seems to be very noisy, presumably due to the RL element. The qualitative analysis illustrates well how the subfunctions specialize to particular tasks (e.g. translation or evaluating a three symbol expression) and how the RL agent successively picks these subfunctions in order to solve the full task.\\n\\nImage transformations experiment. This experiment feels a bit more artificial although the data is more complicated than in the previous experiment. Also, in some of the examples in Figure 2, the algorithms seems to perform translation (action 2) twice in a row while it seems like this could be achieved by only one translation. How does this perform experimentally in comparison to an RNN (or other baseline)?\\n\\nI found this paper to be well-written. Perhaps it could be stronger if the \\\"image transformations\\\" experiment quantitatively compared to a baseline. I'm not an expert in this area and don't know in detail how this relates to existing work (e.g. 
by Rosenbaum et al; 2018).\", \"edit\": \"change score to 7 in light of revisions and new experiment.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Response to AreaChair1\", \"comment\": \"We apologize for the delay. We have now posted a detailed response.\"}", "{\"title\": \"Response to \\\"Update to concerns above\\\"\", \"comment\": \"Question 1\\n\\nAlthough we have acknowledged the similarities in \\\"Response to Relation of the Compositional Recursive Learner to Routing Networks\\\", we respectfully disagree with OP that \\u201cCRL is in effect a Routing Network.\\u201d To make such a statement would be to mischaracterize the difference between the generative nature of CRL and the routing-based nature of RN and to ignore the respective problem domains that CRL and RN tackles.\\n\\nWe focus on the extrapolation problem (Sec 2 and 3), for which learning on multiple tasks is a means to this end, whereas Rosenbaum et al. focus on task interference, for which multi-task learning is the end itself (see abstract of\\u00a0Rosenbaum et al.). Because our focus is on subproblem decomposition, CRL restricts the representation space such that harder problems can be expressed in the same vocabulary as easier problems. RN do not focus on subproblem decomposition, so it is not clear whether their modules learn any interpretable atomic functionality or whether their representations capture semantic boundaries between subproblems that comprise a larger problem. Therefore, RN does not have the inductive bias for extrapolation problems that require the learner to re-represent the new problem in terms of problems the learner has seen during training.\\n\\nThe key methodological difference between CRL and RN lies in the generative nature of CRL and the routing-based nature of RN. RN and other work such as PathNet (Fernando et al. 2017) route input-dependent paths through a large fixed architecture. In contrast, the extrapolation problem necessitates CRL be generative, meaning that it incrementally builds module on top of module without a fixed computational horizon. This is necessary for the problem domain we consider, in which we want to train and extrapolate to different problems that require various computation depths. Therefore, variable-length computation horizon, the restrictions on the representational vocabulary, and the emergent semantic functionality of its submodules as solutions to subproblems within a larger problem (see Figure 3) are crucial design considerations for the capability of CRL that RN does not incorporate in their approach.\\n\\nQuestion 2\\n\\nBased on the crucial difference between the generative nature of CRL and the routing-based nature of RN, the variable computation horizon is a crucial feature of CRL, not a minor difference, as we discussed above and in the Related Work. Because of the variable computation horizon, it is not possible to have a separate controller at each timestep/depth because the number of time steps of computation unknown; therefore this is also not a minor difference.\\u00a0\\n\\nWe agree with OP that the particular RL algorithm (PPO vs MARL-WPL) is not particularly relevant to the central focus of our paper, which is extrapolation in compositionally structured problems, and we indeed did not claim so. 
Nevertheless, our work represents an algorithmic improvement that does make the single controller architecture more effective (above 90% extrapolation accuracy for multilingual arithmetic) than Rosenbaum et al.\\u2019s architecture (Figure 4 and Figure 5 of Rosenbaum et al. shows < 50% accuracy, whereas their best method achieves around 60%).\\n\\nCRL\\u2019s focus on capturing interpretable atomic functionality in its modules and using representations capture semantic boundaries between subproblems that comprise a larger problem are important ingredients for CRL\\u2019s analogical reasoning: literally re-representing a problem in terms of problems it has already seen. This is another key difference between RN and CRL, because the architectural design of RN do not have the inductive bias (restrictions on the modules and representations) that encourage it re-represent problems in literally terms of previously-seen problems.\", \"question_3\": \"Novelty\\n\\nThe novelty of our work (with respect to RN) lies in the generative nature of CRL because we reframe of the extrapolation problem as a problem of learning algorithmic procedures over transformations between representations, as discussed in the abstract, intro, and discussion. CRL generates function composition, in contrast to how RN routes through function paths. As shown in the experiments section, the transformations CRL learns have interpretable, atomic functionality and the representations capture semantic boundaries between subproblems that comprise a larger problem. These features of the CRL architecture crucially differentiate it from other routing-based architectures, including RN and PathNet.\"}", "{\"title\": \"Response to \\\"Relation of the Compositional Recursive Learner to Routing Networks\\\"\", \"comment\": \"We are grateful to the Anonymous Commenter (OP) for their detailed and insightful comment.\\n\\nIt is true, as OP points out, that there is a close connection to Routing Networks (RN), an important and interesting paper that seeks to mitigate task interference in multi-task learning by routing through the modules of a convolutional neural network. Like RN, a feature of our work is that the learner creates and executes a different computation graph for different inputs, where this computation graph consists of a series of functions applied according to a controller. Therefore, it is possible to see CRL as taking a step beyond the single-controller (which they refer to as \\u201csingle-agent\\u201d in Rosenbaum et al.) version of RN by incorporating several algorithmic improvements that make the single controller version not only effective for solving the task (c.f. Figure 4 and Figure 5 of Rosenbaum et al.) but also effective for extrapolation, a problem domain that Rosenbaum et al. does not consider.\\u00a0\\n\\nWe will follow OP\\u2019s recommendation and make the comparison with RN more salient in the experiments and related work section. However, we would like to emphasize that the problem that RN tackles (mitigating task interference in multi-task learning) is not the central focus of the paper. That CRL and RN started from significantly different motivations and problem domains but converged to a similar architecture design serves as encouraging evidence in support of an old idea that exploiting modularity and encapsulation yield help more efficiently capture the modalities of a task distribution, and we are excited that both we and Rosenbaum et al. 
are actively pushing this front.\\n\\nWe thank OP for pointing out it is indeed true that 1) RN does not necessarily have a separate controller per time step and 2) RN does not necessarily use a different set of functions per computation step; we will follow OP\\u2019s recommendation and clarify this in the next version of the paper to avoid potential misunderstanding. One source for our misunderstanding is that the exposition of RN in section 3 of Rosenbaum et al. (e.g. \\u201cIf the number of function blocks differs from layer to layer in the original network, then the router may accommodate this by, for example, maintaining a separate decision function for each depth\\u201d (page 4, Rosenbaum et al.) and \\u201cThe approximator representation can consist of either one MLP that is passed the depth (represented in 1-hot), or a vector of d MLPs, one for each decision/depth\\u201d (page 5, Rosenbaum et al.)) seems to heavily suggest the two assumptions we made on page 15 of our manuscript, so we thought that the single-controller or shared function cases were included in Rosenbaum et al. mostly for the sake of comparison. The reason that our submission discussed points (1) and (2) was not intended to misrepresent RN. Rather it was because we interpreted Figure 4, Figure 5, Table 3, Table 4 of Rosenbaum et al. as claiming the routing-all-fc (one-agent-per-task, separate controller per depth, different functions-per-layer) as the flag bearer of their results. To make the comparison that most fairly represents RN\\u2019s claims, we had conducted our comparison based on the best version of RN reported in Rosenbaum et al. (routing-all-fc), which uses a separate controller per depth and a different set of functions per depth (according to Table 3 and 4 in Rosenbaum et al.).\"}", "{\"title\": \"Trying to learn composition\", \"review\": \"This is a good review paper. I am not sure how much it adds to the open question of how to learn representation with high structure.\\n\\nI would like to see more detail on what is communicated between the controller and the evaluator. Is it a single function selected or a probability distribution that is sent? How does the controller know how many function the evaluator has created? Or visa versa. \\n\\nThere is a penalty for the complexity of the program, is there a penalty for the number of functions generated? \\n\\nHaving just read Hudson and Manning's paper using a separate controller and action/answer generator they make strong use of attention. It is not clear if you use attention? Maybe in that you can operate on a portion of X. What role does attention play in your work?\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"Now that the review period is officially over, I was hoping to get a response to the issues raised above. I ask the authors to address the following questions in particular:\\n1. Do the authors agree with the assessment that the CRL is in effect a Routing Network? (I might point out that the authors even hint at that in the arxiv version of this paper)\\n2. Do the authors agree that the only two minor differences (apart from the training schedule) are (1) that the CRL has infinite horizon recurrence, while RNs only have limited horizon recurrence, and (2) the RL algorithm chosen? (this implies a mischaracterization of RNs on the authors part)\\u00a0\\n3. 
In light of the previous two points, why do the authors claim that their architecture is novel? (this critique does not extend to the other parts of their paper)\", \"title\": \"Update to concerns above\"}", "{\"title\": \"Interesting approach to compositionally\", \"review\": \"==== Summary ====\\n\\nThis paper proposes a model for learning problems that exhibit compositional and recursive structure, called Compositional Recursive Learner (CRL). The paper approaches the subject by first defining a problem as a transformation of an input representation x from a source domain t_x to a target domain t_y. If t_x = t_y then it is called a recursive problem, and otherwise a translational problem. A composite problem is the composition of such transformations. The key observation of the paper is that many real-world problems can be solved iteratively by either recursively transforming an instance of a problem to a simpler instance, or by translating it to a similar problem which we already know how to solve (e.g., translating a sentence from English to French through Spanish). The CRL model is essentially composed of two parts, a set of differential functions and a controller (policy) for selecting functions. At each step i, the controller observes the last intermediate computation x_i and the target domain t_y, and then selects a function and the subset of x_i to operate on. For each instance, the resulting compositional function is trained via back-propagation, and the controller is trained via policy gradient. Finally, the paper presents experiments on two synthetic datasets, translating an arithmetic expression written in one language to its outcome written in another language, and classifying MNIST digits that were distorted by an unknown random sequence of affine transformations. CRL is compared to RNN on the arithmetic task and shown to be able to generalize both to longer sequences and to unseen language pairs when trained on few examples, while RNN can achieve similar performance only using many more examples. On MNIST, it is qualitatively shown that CRL can usually (but not always) find the sequence of transformations to restore the digit to its canonical form.\\n\\n==== Detailed Review ====\\n\\nI generally like this article, as it contains a neat solution to a common problem that builds on and extends prior work. Specifically, the proposed CRL model is a natural evolution of previous attempts at solving problems via compositionally, e.g. Neural Programmer [1] that learns a policy for composing predefined commands, and Neural Module Networks [2] that learns the parameters of shared differential modules connected via deterministically defined structure (found via simple parse tree). The paper contains a careful review of the related works and highlights the similarities and differences from prior approaches. Though the experiments are mostly synthetic, the underlying method seems to be readily applicable to many real-world problems.\\n\\nHowever, the true contributions of the paper are somewhat muddied by presenting CRL as more general than what is actually supported by the experiments. More specifically, the paper presents CRL as a general method for learning compositional problems by decomposing them into simpler sub-problems that are automatically discovered, but in practice, a far more limited version of CRL is used in the experiments, and the suggested translational capabilities of CRL, which are important for abstract sub-problem discovery, are not properly validated:\\n\\n1. 
In both experiments, the building-block functions are hand-crafted to fit to the prior knowledge on the compositionally of the problem. For the arithmetic task, the functions are limited to operate each step just on a single window of encompassing 3 symbols (e.g., <number> <op> <number>, <op> <number> <op>) and return a distribution over the possible symbols, which heavily forces the functions to represent simple evaluators for simple expressions of the form <number> <op> <number>. For the distorted MNIST task, the functions are limited to neural networks which choose the parameter of predetermined transformations (scaling, translation, or rotation) of the input. In both cases, CRL did not *found* sub-problems for reducing the complexity of the original instance but just had to *fine tune* loosely predefined sub-problems. Incorporating expert knowledge into the model like so is actually an elegant and useful trick for solving real problems, and it should be emphasized far clearly in the article. The story of \\u201cdiscovering subproblems\\u201d should be left for the discussion / future research section, because though it might be a small step towards that goal, it is not quite there yet.\\n2. The experiments very neatly show how recursive transformations offer a nice framework for simplifying an instance of a problem. However, the translation capabilities of the model are barely tested by the presented experiments, and it can be argued that all transformations used by the model are recursive in both experiments. First, only the arithmetic task has a translation aspect to it, i.e., the task is to read an expression in one language and then output the answer in a different language. Second, this problem is only weakly related to translation because it is possible to translate the symbols independently, word by word, as opposed to written language that has complex dependencies between words. Third, the authors report that in practice proper translation was only used in the very last operation for translating the computed value of the input expression to the requested language, and not as a method to translate one instance that we cannot solve into another that we can. Finally, all functions operate and return on all symbols and not ones limited to a specific language, and so by the paper\\u2019s own definition, these are all recursive problems and not translational ones.\\n\\nIn conclusion, I believe this paper should be accepted even with the above issues, mostly because the core method is novel, clearly explained, and appears to be very useful in practice. Nevertheless, I strongly suggest to the authors to revise their article to focus on the core qualities of their method that can be backed by their current experiments, and correctly frame the discussion on possible future capabilities as such.\\n\\n[1] Reed et al. Neural Programmer-Interpreters. ICLR 2016.\\n[2] Andreas et al. Neural Module Networks. CVPR 2016.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"comment\": \"I have read the paper \\\"Automatically Composing Representation Transformations as a Means for Generalization\\\" with great pleasure. I particularly enjoyed how the paper tries to link compositionality to analogical reasoning. 
I think an architecture for compositional reasoning that can solve even complex tasks elegantly is of great value.\\nI do though have some concerns about the relationship between the \\\"Compositional Recursive Learner\\\" (CRL) and \\\"Routing Networks\\\" (RN). Specifically, it seems to me that the CRL is an example of a single agent recursive routing network, as described in (Rosenbaum et al, ICLR 2018). In particular, the design of a compositional computation and learning framework that combines trainable function blocks with a reinforcement learning meta learner (as described in section 3.2 and 3.3) is highly similar (section 3.2) or nearly identical (section 3.3) to the formulation in the routing networks paper.\\nThe main difference is that while (Rosenbaum et al) focused on a limited-horizon recurrence (see pages 1, 3, 4, 7, and particularly 14 in the appendix), CRL uses an infinite-horizon recurrence.\\nSurprisingly, this relationship is not discussed in the paper in any detail. Routing Networks are more closely examined in the appendix only. Additionally, there are two stated assumptions (on p. 15) on routing networks that I do not think are true: (1) Routing Networks necessarily have a separate controller per computation step and (2) Routing Networks necessarily use a different set of functions per computation step. The idea of an RN with a single controller applied across computation steps is discussed on page 5 of (Rosenbaum et al). The idea of re-using function blocks across computation steps is discussed on pages 1, 3, 4, 7 and 14.\\n\\nGiven the obviously close relationship between these two works, I feel that the connection should be more emphasized and the comparison more central to the paper. And indeed, the results shown for routing networks are somewhat hard to believe (at least for smaller problems as routing networks are not expected to scale to inputs of the same size). Is the routing networks implementation compared to actually also recurrent? Does the routing network receive the same curriculum learning strategy training?\", \"the_link_to_rosenbaum_et_al_in_iclr_2018\": \"https://openreview.net/forum?id=ry8dvM-R-\", \"title\": \"Relation of the Compositional Recursive Learner to Routing Networks\"}" ] }
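The "Quantitive Evaluation of MNIST transformations" comment in the record above specifies the transformation splits only in prose: 20 depth-2 compositions built from 2 scalings, 2 rotations, and 4 translations, of which 16 are used for training and 4 are held out, plus all 16 depth-3 compositions as a second, harder hold-out set. The sketch below simply enumerates those splits to make the counts concrete; the transformation names and the particular 4 held-out depth-2 combinations are placeholders, since the comment does not say which ones were withheld.

```python
from itertools import product

# Primitive transformations described in the comment; magnitudes are not given,
# so only symbolic names are enumerated here.
SCALES = ["scale_large", "scale_small"]
ROTATIONS = ["rotate_left", "rotate_right"]
TRANSLATIONS = ["translate_left", "translate_right", "translate_up", "translate_down"]

# Depth-2 compositions: scale->translate (2*4), rotate->translate (2*4), scale->rotate (2*2).
depth2 = ([(s, t) for s, t in product(SCALES, TRANSLATIONS)]
          + [(r, t) for r, t in product(ROTATIONS, TRANSLATIONS)]
          + [(s, r) for s, r in product(SCALES, ROTATIONS)])
assert len(depth2) == 20

# Depth-3 compositions: scale->rotate->translate (2*2*4), all held out for extrapolation.
depth3 = [(s, r, t) for s, r, t in product(SCALES, ROTATIONS, TRANSLATIONS)]
assert len(depth3) == 16

# 16 depth-2 compositions for training, 4 held out; which 4 were withheld is not
# stated in the comment, so this particular split is arbitrary.
train_depth2, holdout_depth2 = depth2[:16], depth2[16:]
```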
ryfz73C9KQ
Neural Predictive Belief Representations
[ "Zhaohan Daniel Guo", "Mohammad Gheshlaghi Azar", "Bilal Piot", "Bernardo Avila Pires", "Rémi Munos" ]
Unsupervised representation learning has succeeded with excellent results in many applications. It is an especially powerful tool to learn a good representation of environments with partial or noisy observations. In partially observable domains it is important for the representation to encode a belief state---a sufficient statistic of the observations seen so far. In this paper, we investigate whether it is possible to learn such a belief representation using modern neural architectures. Specifically, we focus on one-step frame prediction and two variants of contrastive predictive coding (CPC) as the objective functions to learn the representations. To evaluate these learned representations, we test how well they can predict various pieces of information about the underlying state of the environment, e.g., position of the agent in a 3D maze. We show that all three methods are able to learn belief representations of the environment---they encode not only the state information, but also its uncertainty, a crucial aspect of belief states. We also find that for CPC multi-step predictions and action-conditioning are critical for accurate belief representations in visually complex environments. The ability of neural representations to capture the belief information has the potential to spur new advances for learning and planning in partially observable domains, where leveraging uncertainty is essential for optimal decision making.
[ "belief states", "representation learning", "contrastive predictive coding", "reinforcement learning", "predictive state representations", "deep reinforcement learning" ]
https://openreview.net/pdf?id=ryfz73C9KQ
https://openreview.net/forum?id=ryfz73C9KQ
ICLR.cc/2019/Conference
2019
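The reviews that follow refer repeatedly to the NCE/CPC objectives named in the abstract above, and one reviewer notes that CPC itself is never reviewed in the paper. For orientation only, a generic InfoNCE-style contrastive loss is sketched below; the paper's actual scoring network, the action-conditioning used by CPC|Action, and the multi-step prediction setup are not reproduced here and would differ in detail.

```python
import numpy as np

def infonce_loss(scores):
    """Generic contrastive (NCE / CPC-style) loss for a single prediction step.
    `scores` are unnormalized compatibility scores between a context (belief)
    vector and candidate future observations, with scores[0] the positive
    (true future) and scores[1:] negatives drawn from other times or episodes.
    Returns the negative log softmax probability assigned to the positive."""
    scores = np.asarray(scores, dtype=float)
    return float(np.logaddexp.reduce(scores) - scores[0])

# Example: a well-separated positive yields a small loss,
# e.g. infonce_loss([5.0, 0.0, 0.0, 0.0]) is roughly 0.02.
```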
{ "note_id": [ "Sylhc8PgeE", "S1gS823KCQ", "BJg0Jn3t07", "rklIio2tCX", "SylDwo2F0X", "HJgKV__a2X", "Hkgl4dA32Q", "BJxAFigP27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544742548127, 1543257165503, 1543257061940, 1543256990267, 1543256927065, 1541404721349, 1541363751987, 1540979589703 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1339/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1339/Authors" ], [ "ICLR.cc/2019/Conference/Paper1339/Authors" ], [ "ICLR.cc/2019/Conference/Paper1339/Authors" ], [ "ICLR.cc/2019/Conference/Paper1339/Authors" ], [ "ICLR.cc/2019/Conference/Paper1339/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1339/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1339/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposed an unsupervised learning algorithm for predictive modeling. The key idea of using NCE/CPC for predictive modeling is interesting. However, major concerns were raised by reviewers on the experimental design/empirical comparisons and paper writing. Overall, this paper cannot be published in its current form, but I think it may be dramatically improved for a future publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Concerns about the experiments and paper clarity.\"}", "{\"title\": \"General comments and change list.\", \"comment\": \"We would like to thank the reviewers for their comments. We address those comments by the following changes in the paper. In addition, we provide a specific reply for each reviewer.\\n\\n1.We have consolidated our review of CPC in Sec. 2.2. This addresses a concern raised by AnonReviewer2.\\n\\n2. We have corrected the definition of b_t---it is the output of a recurrent neural network. What we show experimentally is that b_t contains enough information to parameterize accurate beliefs over state variables. This addresses a concern raised by AnonReviewer2.\\n\\n3. We adjusted the description of the architecture so that it is clear and precise by moving the algorithm description to the main text and adding more description, as much as space allowed us. This addresses a concern raised by AnonReviewer2.\\n\\n4. We referred to a more detailed description of the ground-truth MLP predictor in the main text. More implementation details can be found in the appendix. This addresses a concern raised by AnonReviewer2.\\n\\n5. We have corrected the statement about the inputs of the ground truth MLP in the appendix. Only the discretized initial position and orientation was being fed to the MLP, and only for the levels 'fixed', 'room', 'maze', and 'terrain'. This addresses a concern raised by AnonReviewer1. We have also changed the main text to answer the reviewer's question (cf. our response to AnonReviewer1).\\n\\n6. We added the citations requested by AnonReviewer3 in the related work section.\"}", "{\"title\": \"Thanks for the review.\", \"comment\": \"We would like to thank the reviewer for their review and comments.\\n\\n1. \\\"The authors mention that other works have already shown that learned representations can improve agent performance.\\\"\\n\\nFor this work, we have chosen to understand neural belief representations. 
In this sense, the problem we are studying is representation learning, and whether the neural representations we propose to learn are effective belief representations. To motivate why one should care about learning effective belief representations, we presented examples of previous work where better representations led to improved performance in different tasks. However, one cannot say that these works have used predictive belief representations, and the degree to which each studies the learned representations varies.\\n\\n2. \\\"I think it could be interesting to check that the accuracy in belief prediction is correlated with an improvement in agent performance or in transfer learning tasks.\\\"\\n\\n There are three different problems being conflated here. First, solving a specific task. For this, task performance is the clear success criterion. Second, using neural belief representations for solving a specific task. Here, too, task performance is the criterion of interest, but it is only indirectly informative about how good the neural belief representation is, as a belief representation. Third, learning strong neural belief representations. Here, looking at the quality of the learned beliefs is a more sensible criterion than performance in any tasks, although looking at performance across tasks could be reassuring.\\n\\n3. If the initial position is given to the MLP, how can there be any uncertainty?\\n\\nWe have made an imprecise statement in that regard, which we will fix: Only the discretized initial position and orientation was being fed, and the agent may require several actions to move from one square in the grid to another, or to change its discretized orientation. There are some additional comments to be made about the reviewer's remark.\\n\\nThe agent's initial discretized position and orientation were only fed to the MLP (alongside the RNN output) for the levels 'fixed', 'room', 'maze', and 'terrain'. Thus the uncertainty on the agent's position in 'two hallways' is natural (and for 'teleport' and 'non-teleport' it can be easily eliminated because the environment is fixed).\\n\\nWe can speculate some additional reasons for the uncertainty in 'fixed', 'room', 'maze', and 'terrain'. In principle, with the full sequence of actions as well as the initial position, on a fixed environment, the agent's position and orientation could be maintained. This kind of simulation is unlikely to be happening though, as we can see from Figures 4 and 5, by looking at the history predictions: The MLP cannot decode the full history out of the representation.\\n\\nMoreover, in the case of 'terrain' , the terrain affects the agent's position, so the simulation could not be used for accurate position/orientation prediction. (This is also aggravated by the fact that 'terrain' is not fixed, but the terrain is randomly drawn at each episode from a fixed set of instances.)\"}", "{\"title\": \"Thanks for the review.\", \"comment\": \"We would like to thank the reviewer for their review and comments. The reviewer has positive comments concerning the empirical evaluation and the new algorithm presented. Its major concern is with respect to the lack of theoretical analysis and raises also some minor concerns regarding the different architectures. We will try to address those comments in the remaining.\", \"concerning_the_lack_of_theoretical_analysis\": \"This works is an empirical evaluation via a glass-box approach of three different algorithms that shape the representation of a belief state. 
This approach is new to our knowledge. The evaluation clearly illustrates that the representations learnt with the different architectures are able to encode information related to the belief state of the agent such as its position and orientation.\", \"concerning_the_missing_citations\": \"We agree with the reviewer. We will add those citations and a discussion in the related work section. The main difference between our work and those works is that we are not trying to learn an explicit belief probability vector, and we go more in-depth with our glass-box evaluation in quantifying what sorts of information is being learned and why.\", \"concerning_the_questions\": \"We agree with the reviewers on the 2 first points. You could indeed learn a distribution for the frame prediction and encoding the objects in the representation of the belief need a more clever choice of negative examples. To expand on that, the choice of negative examples is critical to what information is encoded, and indeed we can add more prior knowledge in shaping this choice; but there is always a trade-off in how much we can affect the design. Using a simulator that can remove objects from the background is only feasible when one has complete control over the environment, and so is very task-specific. We are more generally interested in broader approaches.\\n\\nFor the third point, CPC(|Action) should have no problem with sensor noise, as long as the noise is not overwhelming to the point where positive and negative examples are indistinguishable. The result will be simply that they learn a noisier estimation of all encoded information.\"}", "{\"title\": \"Thanks for the review.\", \"comment\": \"Thank you for reading and commenting on our work. We address the following main concerns.\\n\\nThe reviewer has raised the issue that the paper is hard to follow and recommends a list of changes to improve the quality of the manuscript. In the revised manuscript we will have addressed these issues. (Items 1-4 the change list in general response.) \\n\\n1. Little reviewing of CPC, CPC|Action: We will add a more detailed review to address this issue.\\n\\n2. The probability interpretation for b_t is stretched: We have been imprecise here and we will fix it. Indeed, b_t is the output of a recurrent neural network. We want to clarify that the output of the neural network is itself not a probability vector, but just the output of a neural network; however the information contained in this output is quite rich, and our experiments show how this output is encoding various pieces of information about the history, approximating an encoded sufficient statistic of the history.\\n\\n3. The architecture description should be precise: We will fix the presentation accordingly.\\n\\n4. The MLP to predict the ground truth is not sufficiently described in the main text: We present it as an implementation detail in the appendix, but if after all the other changes there is space left, we will move it to the main text. In any case we will make sure that the main text clearly points to the implementation details.\\n\\n5. The reviewer has also raised the concern that our results only provide evidence for the hypothesis \\u201cthat CPC|Action performs better than CPC and FP in a set of well controlled toy environments' and \\u2018it falls short from evaluating in more challenging environments\\u2019.\\n\\n Our goal in this work is to investigate the effectiveness of various unsupervised training schemes to learn belief representations. 
This paper fits in the literature as a \\\"proof-of-concept\\\" work where we do something non-trivial that has not been done before, and we take a careful look at we have developed to give the community a good understanding of it.\\n\\n Hence, our main purpose was not to find out whether CPC|Action would perform better than CPC/FP, but whether it would be at all possible to learn representations that can encode meaningful beliefs about the environment state. Part of understanding these belief representations requires measuring performance and comparing different approaches (CPC|Action, CPC and FP in our case), but the message is broader than the outcome of testing whether one method performs better than another one.\\n\\n Our contributions are of broad interest because the task of learning belief representations using unsupervised prediction tasks is more challenging and less demanding than doing so with supervision. It is worth pointing out that there is no evidence that learn belief representations with supervision on the state is easy, so what we have demonstrated was by no means obvious or easy.\\n\\n6. More comparisons, e.g., particle filtering and MDN-RNNs.\\n\\n While we recognize that adding these comparisons would strengthen the experimental results, we believe that their absence is not detrimental to the contributions of the paper, taking our goal into consideration. There are some remarks to be made about each of the suggested approaches, and we will add these to the main paper, so that we clarify the role of the experimental study in the paper and in the literature.\\n\\n Particle filtering would presumably give better beliefs, but for a price: It would require more human intervention in the training than the kind of learning regime we are interested in. We wish learning to be as close to end-to-end as possible. Nevertheless, the performance of PF would be a good \\\"ceiling line\\\" for the performance of the different end-to-end approaches.\", \"we_see_mixture_density_networks_being_used_in_three_places\": \"In the probes (belief decoders), in the CPC variants, or in the frame prediction. For the first two possibilities, we have already represented the distribution explicitly, which sidesteps the need for networks that can encode probabilities. For the frame prediction, especially multistep, MDNs and other networks (e.g., VAEs or IQNs) might make a significant difference, especially if we were to consider frame prediction with action-conditioning. We are considering exploring these in future work, to know how far CPC|Action can go, and whether it can be used to train representations that are as rich as those one could conceivably train with networks that predict distributions.\"}", "{\"title\": \"Review for \\\"Neural Belief Representations\\\"\", \"review\": \"# Review for \\\"Neural Belief Representations\\\"\\n\\n\\n\\nThe authors argue in the favor of belief representations for partial observable Markov decision processes. The central argument is that uncertainty needs to be represented to make optimal decision making. For that aim, three belief representations based on sufficient statistics of the future are evaluated and compared in a set of disective studies. Those studies find that predicting the future results in uncertainty being represented in the state representations, although they differ in quality.\\n\\nI found the paper hard to follow for various reasons. \\n\\n- NCE is reviewd, while CPC is not. 
I would have found a review of CPC as well to help my understanding, especially to draw the line between CPC and CPC|Action.\\n- In 2.1., $b_t$ is defined as a probability, while it is the output of a neural network later. This is formally incompatible, and I found the connection not well explained. From my understanding, $b_t$ is a vector that represents the sufficient statistics if learning works. The probability interpretation is thus stretched.\\n- The architecture description (starting from the second paragraph on page 4) feels cluttered. It was clearly written as a caption to Figure 1 and hence should be placed as such. Still, stand alone texts are important and in my humble opinion should be augmented with equations instead of drawings. While the latter can help understanding, it lacks precision and makes reproduction hard.\\n- The MLP to predict the ground truth is not sufficiently described in the main text. I think it needs to go there, as it is quite central to the evaluation.\\n\\nSince the manuscript is half a page under the limit, such improvements would have been feasible.\\n\\nApart from the quality of the manuscipt, I like the fact that a disective study was done in such a way. \\n\\nHowever, I would have liked to see more comparisons, e.g. in $(x, y, \\\\theta)$\\u00a0environments it is also possible to obtain quite good approximations of the true posterior via particle filtering. Also, other more straightforward approaches such as MDN-RNNs can represent multiple maxima in the probability landscape; this would have enabled to examine the benefit of conditioning on actions in a different context.\\n\\nRight now, it is unclear what the paper is about. On the one hand, it does a focused disective study with well controlled experiments, which would be a good fit if many different models were considered. On the other hand, it advertsises CPC|Action; but then it falls short in evaluating the method in more challenging environments.\\n\\nTo sum it up, I feel that the paper needs to be clearer in writing and in experimental structure. The currently tested hypothesis, \\\"does CPC|Action perform better than CPC and FP in a set of well controlled toy environments\\\" is, imho, not of broad enough interest.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good paper, some details missing or unclear\", \"review\": \"** Summary **\\nThe authors evaluate three different representation learning algorithms for partially observable environments. In particular, they investigate how well the learned representation encodes the true belief distribution, including its uncertainty. \\nThey propose an extension to a previous algorithm and evaluate all three algorithms on a range of tasks.\\n\\n** Clarity **\\nThe paper is well written and overall easy to follow.\\n\\n** Quality **\\nThe paper evaluates the described algorithms on a sufficiently large set of tasks. There is no theoretical analysis.\\n\\n** Originality & Significance **\\nWhile the authors propose a novel extension to an existing algorithm, I believe the value of this work lies in the detailed empirical analysis.\\n\\n** Missing Citations **\\n\\nI believe two recent papers (this year's ICML) should be mentioned in the related work section as they propose two representation learning algorithms for POMDPs that, as far as I can tell, are not yet mentioned in the paper but quite relevant to the discussed topic. 
[1] Because it also uses PSRs and [2] because it explicitly learns a belief state. It would be interesting to see how [2] compares in terms of performance to FP and CPC(|Action).\n\n[1] Hefny, A., Marinho, Z., Sun, W., Srinivasa, S. & Gordon, G. (2018). Recurrent Predictive State Policy Networks. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:1949-1958\n\n[2] Igl, M., Zintgraf, L., Le, T.A., Wood, F. & Whiteson, S. (2018). Deep Variational Reinforcement Learning for POMDPs. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:2117-2126\n\n** Question **\n\nI have several questions where I'm not sure I understand the paper correctly:\n\n1.) Why can FP only predict the mean? For example, one could use a PixelCNN as decoder, which would allow learning an entire distribution, not just the mean over images.\n2.) The problem that CPC and CPC|Action are unable to predict objects if they don't influence the future trajectory doesn't seem surprising to me because whether an image is a positive or negative example can usually be determined by the background, the object is not necessary to do so. In other words, this is a problem of how the negative samples are chosen: If they were constructed using a simulator that shows the same background but without the objects, the belief would need to start encoding the presence of objects. Is this correct or am I missing something?\n3.) Am I correct in thinking that CPC(|Action) would not be applicable to properly estimate the belief distribution in the presence of noise, i.e. for example when estimating the exact location based on sensors with Gaussian noise?\n\n** Overall **\n\n* Pros:\n- Extensive, interesting evaluation\n- Novel CPC|Action algorithm\n\n* Cons:\n- No theoretical analysis/justification for claims\n- There are several subtleties that I am not sure are sufficiently discussed in the paper (see my questions above)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting ideas, excellent experiments but a big question mark and significance hard to assess\", \"review\": \"This paper learns a deep model encoding a representation of the state in a POMDP using one-step frame prediction or a contrastive predictive coding loss function. They evaluate the learned representation and show it can be used to construct a belief of the state of the agent.\n\nUsing deep networks in POMDPs is not new, as the authors pointed out in their related work section. Thus I believe the originality of the paper lies in the type of loss used and the evaluation of the learned representation through the construction of a belief over current and previous states. I think this method to evaluate the hidden state has the potential to be useful should one wish to evaluate the quality of the hidden representation by itself. In addition, I found the experimental evaluation of this method to be rather extensive and well conducted, as the authors experimented on 3 different (toy) environments and use it to quantify and discuss the performance of the three model architectures they develop.\n\nOn the other hand, the authors mention that other works have already shown that learned representations can improve agent performance, can be learned by supervised learning (= predicting future observations) or can be useful for transfer learning. 
So in that context, I am not sure the contributions of this paper are highly significant as they are presented. To better highlight the strength the evaluation method, I think it could be interesting to check that the accuracy in belief prediction is correlated with an improvement in agent performance or in transfer learning tasks. To better highlight the interest of the CPC loss, I think it could be interesting to compare it to similar approaches, for example the one by Dosovitskiy & Koltun (2017).\\n\\nI found the paper reasonably clear. However, the following sentence in the appendix puzzled me. \\\"The input to the [evaluation] MLP is the concatenation of b_t and a one-hot of the agent\\u2019s initial discretised position and orientation.\\\" I may have missed something, but I do not understand how the model can contain the uncertainty shown in the experimental results if the agent's initial position is provided to the current or past position predictor.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
BkgGmh09FQ
Understanding Opportunities for Efficiency in Single-image Super Resolution Networks
[ "Royson Lee", "Nic Lane", "Marko Stankovic", "Sourav Bhattacharya" ]
A successful application of convolutional architectures is to increase the resolution of single low-resolution images -- an image restoration task called super-resolution (SR). Naturally, SR is of value to resource-constrained devices like mobile phones, electronic photograph frames and televisions to enhance image quality. However, SR demands perhaps the most extreme amounts of memory and compute operations of any mainstream vision task known today, preventing SR from being deployed to the devices that need it. In this paper, we perform an early systematic study of system resource efficiency for SR, within the context of a variety of architectural and low-precision approaches originally developed for discriminative neural networks. We present a rich set of insights, representative SR architectures, and efficiency trade-offs; for example, the prioritization of ways to compress models to reach a specific memory and computation target and techniques to compact SR models so that they are suitable for DSPs and FPGAs. As a result of doing so, we manage to achieve performance that is better than or comparable to that of previous models in the existing literature, highlighting the practicality of using existing efficiency techniques in SR tasks. Collectively, we believe these results provide the foundation for further research into the little-explored area of resource efficiency for SR.
[ "Super-Resolution", "Resource-Efficiency" ]
https://openreview.net/pdf?id=BkgGmh09FQ
https://openreview.net/forum?id=BkgGmh09FQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryxbNPPEeV", "BJl1kj0FCQ", "HklazuAY0X", "H1lUmvRFRX", "ryeByNe1aX", "HJlpdIWA37", "BJliGqDS2Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545004841078, 1543264982785, 1543264277116, 1543264029647, 1541501916543, 1541441141138, 1540876819357 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1338/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1338/Authors" ], [ "ICLR.cc/2019/Conference/Paper1338/Authors" ], [ "ICLR.cc/2019/Conference/Paper1338/Authors" ], [ "ICLR.cc/2019/Conference/Paper1338/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1338/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1338/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper targets improving the computation efficiency of super resolution task. Reviewers have a consensus that this paper lacks technical contribution, therefore not recommend acceptance.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"lack technical contributions\"}", "{\"title\": \"Revised paper to underline new insights proposed and add more visual comparisons. Techniques are agnostic to any task. Better & comparable results.\", \"comment\": \"Thanks for identifying the missing gaps from the paper. We have revised the paper to include more visual comparisons and make our objectives and writing clearer.\\n\\nThe focus of the paper is to understand the empirical effects of applying and comparing existing techniques that are popular in image discriminative tasks. \\n\\n> All techniques considered in this paper have been investigated in previous works\\n\\nAll techniques, apart from group convolutions which were investigated in [A], considered in this paper have not been investigated in super resolution networks. Due to the up-sampling structure of SR models, these efficiency methods may therefore produce potentially stronger side-effects to image distortion.\\n\\n[A] Ahn, N., Kang, B., & Sohn, K. A. (2018). Fast, Accurate, and, Lightweight Super-Resolution with Cascading Residual Network. arXiv preprint arXiv:1803.08664.\\n\\n> Thus no new idea is proposed in this work\\n\\nAlthough we do not propose a new way to perform compression, we show that these techniques differ greatly in trade-offs between efficiency and performance in different vision tasks, such as image classification and super-resolution. We derive a list of novel best practices from our results that can be used to efficiently construct or reduce any SR model.\\n\\n> Not clear why these improvement is particular suitable for the task for super resolution\\n\\nLow rank factorization is agnostic and is not specifically designed for any particular task. As long as the reconstruction error is small, the decomposition should follow the performance of the original model. Our results show that these techniques can also be practical and effective in super-resolution and can help existing practitioners construct or reduce their models though a list of recommendations that work better in terms of trade-off between image distortion (PSNR/SSIM) and size/operations. 
\\n\\n> these techniques actually can be used to improve a variety of network architectures in both high-level and low-level vision tasks\\n\\nYes, but the extent of improvement differs in both high-level and low-level vision tasks and the trade-offs when applied to SR are unclear prior to this study. For quantization, we obtain similar trade-offs in performance and efficiency. For convolutional approximations, we show that this is not the case for different vision tasks. For instance, unlike in image classification tasks, the use of low rank tensor decomposition, which we called bottleneck reduction, has better trade-offs than the use of grouped convolutions and/or channel shuffling in super-resolution tasks. Additionally, we also show that as more layers are compressed, the worse the trade-offs, an observation which is unlike previous observations in image classification tasks.\\n\\n> experimental results are weak\\n\\nWe managed to achieve better or comparable results with the models in recent existing literature. We are not aware of any model in the literature that is better in all aspects (performance, memory, and compute). To the best of our knowledge, there is always some trade-off made; if the model performs better, it is usually less efficient and vice versa. Additionally, our proposed best practices are complementary to any model in the existing literature.\\n\\nOnce again, we would like to thank you for the time and valuable comments.\"}", "{\"title\": \"both distortion and perceptual SR have their own advantages / revised paper to show our focus in SR literature\", \"comment\": \"Thank you for your valuable comments and suggestions. We revised the paper to follow your suggestions and make our points clearer.\\n\\n> metric that are known to not be well correlated with perceptual quality / these models are the current state-of-the-art in terms of perceptual quality\\n\\nWe disagree. The recent PIRM 2018 Challenge [A] provided the insight that structured images look perceptually better using models that were trained to reduce image distortion (PSNR/SSIM) and unstructured details were more visually pleasing using models that were trained to improve the perception metrics which you mentioned. Therefore, we believe that both lines of work have their own advantages. Furthermore, images that are better in terms of perceptual quality performed worse than images that are better in terms of distortion quality when used as inputs for image classification [B]. Therefore, we believe that both lines of work have their own advantages. \\n\\n[A] Blau, Y., Mechrez, R., Timofte, R., Michaeli, T., & Zelnik-Manor, L. (2018). 2018 PIRM Challenge on Perceptual Image Super-resolution. arXiv preprint arXiv:1809.07517.\\n\\n[B] Jaffe, L., Sundram, S., & Martinez-Nieves, C. (2017). Super-resolution to improve classification accuracy of low-resolution images. Tech. Rep. 19, Stanford University.\\n\\n>this perceptual line of work needs to be cited\\n\\nWe are aware of the perceptual track that you mentioned and only focus on the image distortion metrics. Hence, we previously kept the paper short and concise and did not cite the perceptual line of work as we did not use ideas such as the use of perpetual, contextual, adversarial losses etc. Following your advice, we have included a \\u2018Related work\\u2019 section to cover this and highlight the scope of our work and where it lies in the literature.\\n\\n> Not obvious to me that the insights obtained in this work would translate to the other case. 
/ would the conclusions drawn on this work transfer to that setting? paper needs to provide a detailed justification on why models using these losses are not considered\\n\\nAs mentioned, we believe that both lines of work are important. Intuitively, as the models in our experiments are not trained to improve perception metrics and the compressed super-resolution images are less visually pleasing as compared to those produced by RCAN [C], our work would not improve the score based on perceptual tests. Regardless, you made a good suggestion to use perceptual tests and we agree that it would be interesting to perform these techniques on models that are trained to improve the perception metrics or both distortion and perception metrics to look at the trade-offs. Unfortunately, doing so would involve another huge set of systematic large-scale experiments due to the large variability of how these models can be trained, a change that would be significantly different from the original scope of the paper, which focuses on the trade-offs between efficiency and the image distortion metrics.\\n\\n[C] Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., & Fu, Y. (2018). Image super-resolution using very deep residual channel attention networks. arXiv preprint arXiv:1807.02758.\\n\\n> training details\\n\\nWe used the pretrained x2 scaling model as a starting point to train the x3 and x4 scaling models. This has been previous shown [D] to converge the model faster without affecting performance.\\n\\n[D] Lim, B., Son, S., Kim, H., Nah, S., & Lee, K. M. (2017, July). Enhanced deep residual networks for single image super-resolution. In The IEEE conference on computer vision and pattern recognition (CVPR) workshops (Vol. 1, No. 2, p. 4).\\n\\n> distillation techniques\\n\\nWe speculate that distillation will further reduce the performance as shown in other image restoration tasks such as image enhancement [E]. However, we agree that it will be interesting as a future work to experiment and compare it with the conclusions that are proposed in our paper.\\n\\n[E] Hui, Z., Wang, X., Deng, L., Gao, X.: Perception-preserving convolutional networks for image enhancement on smartphones. In: European Conference on Computer Vision Workshops (2018)\\n\\n> try scaling factors larger than x4\\n\\nWe did not try scaling factors larger than x4. However, as the trade-offs are consistent for x2, x3, and x4 scaling factors, we strongly speculate that the same conclusions hold for high scaling factors. \\n\\n> simpler methods can achieve quite competitive results (such as simple interpolation methods)\\n\\nTo the best of our knowledge, we are not aware of any simple interpolation methods that are comparable to the use of neural networks for single image super resolution.\\n\\n> overall writing could be improved. Citation style is not used properly.\\n\\nWe carefully proofread and made the appropriate modifications to the paper based on your feedback. Thank you once again for the detailed review.\"}", "{\"title\": \"known ideas' outcomes is unclear on SR prior to study / new observations / better & comparable results than other\", \"comment\": \"Thank you for your review and your positive comment. Our primary objective is to understand how compression techniques, that previously worked in image discriminative tasks, will work in a previously unstudied task for model compression: Super Resolution (SR). SR architectures differ significantly from those designed for image classification due to the up-sampling structure of SR models. 
Prior to our empirical study, it was unclear which methods that promote efficiency would perform best. Moreover, the magnitudes of gains were unknown without the extensive empirical analysis that we performed.\\n\\n> methods that do a comparable or better job in the same range\\n\\nWe are not aware of any model, including those that you pointed out, that beats our best model in both efficiency (memory and compute) and the image distortion metrics (PSNR/SSIM); there is always some trade-off made. Can you give some examples on such models?\\n\\n> does not lead to or bring new insights or ideas. does not reveal new operating points.\\n\\nOur results reveal a list of new insights and operating points in terms of trade-offs between operations/size and performance accuracy that are not previously found in the SR literature:\\n\\n1. In image discriminative tasks, the proposed architecture changes are comparable in terms of efficiency and accuracy trade-offs. In our work, we show varying effectiveness among these techniques. In particular, the use of low rank tensor decomposition/bottleneck reduction architectures provide the best trade-offs, followed by the use of grouped convolutions, and the use of channel shuffling & splitting. In other words, any usage of grouped convolutions increases image distortion quite significantly and any usage of channel shuffle drastically increase it even further.\\n\\n2.[A, B] have shown that it is possible to maintain a similar or slight drop in performance by decomposing tensors of known models in image classification. In contrast, we show that as more tensors are decomposed in the model, the worse the trade-offs are in SR. \\n\\n3. The use of ternary-weighted quantization in SR tasks results in trade-offs similar to that in image discriminative tasks. We are not aware of any other SR papers that try binary/ternary weighted architectures.\\n\\n[A] Bhattacharya, S., & Lane, N. D. (2016, November). Sparsification and separation of deep learning layers for constrained resource inference on wearables. In Proceedings of the 14th ACM Conference on Embedded Network Sensor Systems CD-ROM (pp. 176-189). ACM.\\n\\n[B] Kim, Y. D., Park, E., Yoo, S., Choi, T., Yang, L., & Shin, D. (2015). Compression of deep convolutional neural networks for fast and low power mobile applications. arXiv preprint arXiv:1511.06530.\\n\\n> expand their study, make some novel observations, propose some design that stand out\\n\\nAlthough we did not propose a novel design, we have made some novel observations and recommend a list of best practices for practitioners to construct or reduce any SR model in the literature.\\n\\n> recent papers\\n\\nAs far as we know, the models proposed in the recent PIRM mobile challenge did not use any of the techniques that we tried and are therefore complementary to our work. 
Moreover, although the smaller models are more efficient, they perform much worse in terms of the image distortion metrics (PSNR/SSIM) and therefore, not a fair comparison with our derived models.\\n\\nWe have addressed your points extensively in our revised paper attached.\"}", "{\"title\": \"summary of known ideas / no new ideas / no better results than other from the SR literature\", \"review\": \"The authors target single-image super-resolution (SR) task and study the efficiency (runtime, memory) of the current neural networks.\\n\\nOn the positive side, the paper is a good effort of bringing together works and insights related to efficient designs and efficient SR solutions.\\n\\nIf we report to the baseline architecture (RCAN) then the proposed efficient variants achieves large reductions in number of parameters or multiplications-additions, at the cost of lower accuracy. However, when the newly proposed trade-offs are compared with the existing literature, we see other methods that do a comparable or better job in the same range.\\n\\nOn the negative side, from my point of view, the study is far from being thorough and does not lead to or bring new insights or ideas. The experimental results does not reveal new operating points (trade-off between complexity/operations and performance accuracy).\\n\\nI would suggest to the authors to expand their study, to make some novel observations, and to propose some designs that can stand out in the literature.\", \"i_am_pointing_out_also_to_some_recent_papers_that_are_related_to_the_topic_and_can_be_or_are_applied_to_sr\": \"Gu et al, \\\"Multi-bin Trainable Linear Unit for Fast Image Restoration Networks\\\", arxiv 2018\\nIgnatov et al, \\\"Pirm challenge on perceptual image enhancement on smartphones: Report\\\", arxiv 2018\", \"and_some_works_proposed_for_that_challenge\": \"Vu et al, \\\"Fast and efficient image quality enhancement via desubpixel convolutional neural networks\\\", ECCVW 2018\\nLi et al, \\\"CARN: Convolutional Anchored Regression Network for Fast and Accurate Single Image Super-Resolution\\\", ECCVW 2018\\nPengfei et al, \\\"Range scaling global u-net for perceptual image enhancement on mobile devices\\\", ECCVW 2018\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Official review\", \"review\": \"The paper proposes a detailed empirical evaluation of the trade-offs achieved by various convolutional neural networks on the super resolution problem. The paper provides an extensive evaluation of different architectural changes and the trade-off between savings in terms of memory and computational cost and performance, measured in terms of PSNR and SSIM.\\n\\nThis is an empirical paper, thus it does not provide technical contributions. I do think that the insights obtained from such an empirical evaluation could be of interest for practitioners and researchers working on the problem. My main concern is the method only evaluates the trade-offs between model efficiency (in terms of memory and/or computation) and performance measured using metrics that are known to not be well correlated with perceptual quality. Thus it is not obvious to me that the insights obtained in this work would translate to the other case.\\n\\nIt is well known that PSNR favors blurry solutions over perceptually more appealing solutions. 
This comes from the fact that there is no information in the low resolution image to produce the missing high resolution details. Filling up plausible details in a way that is different from the original image would lead to high PSNR. Models that treat the super resolution problem as a regression task using similarity in pixel space, tend to produce blurry solutions and require very large models to improve the score. \\n\\nIn recent years, many works have been studying the use of perceptual losses to mitigate this issue or simply treating the super resolution problem as conditional generative modeling. For instance, models using L2 losses in a perceptually more relevant (or learned) feature spaces [A, B], or including GAN losses [C, D] (to list a few). To my knowledge, these models are the current state of the art in terms of perceptual quality. This has been evaluated empirically via perceptual tests [D]. \\n\\nThis line of work needs to be cited. In my view, the paper needs to provide a detailed justification on why models using these losses are not considered. Would the conclusions drawn on this work transfer to that setting? Furthermore, it would be good to perform perceptual tests to perform this evaluation. It would be good to provide some canonical examples in the appendix.\\n\\nThe overall writing of the paper could be improved. Several sentences are difficult to read, due to typos or the construction of the sentences. The paper evaluates many architectural modifications proposed by other works. It would be good to add an appendix with a small description of what these are. This would make the paper self-contained an easier to read (I had too look up a few of them).\\n\\nThe authors mentioned that they first train models for scaling factor of x2 and then use them for training settings higher magnification. How is this exactly done? Please provide details.\\n\\nI am curious of weather using some for of distillation techniques would be useful here.\\n\\nDid you try scaling factors larger than x4? Scaling factors of x2 does not seem very relevant, as simpler methods can achieve already quite competitive results (such as simple interpolation methods)\\n\\nThe authors seem to be citing Zhang et al (2018) as a reference to attention mechanisms. To my knowledge the paper that proposed these mechanisms is [E].\\n\\nThe citation style is not used properly throughout the manuscript. As an example:\\n\\n\\u201c\\u2026 proposed in StrassenNets Tschannen et al (2017).\\u201d Should be \\u201c\\u2026 proposed in StrassenNets (Tschannen et al, 2017).\\u201d Or \\u201c\\u2026 proposed in StrassenNets proposed by Tschannen et al (2017).\\u201d\\n\\n[A] Johnson, J. et al. \\\"Perceptual losses for real-time style transfer and super-resolution.\\\"\\u00a0ECCV, 2016.\\n[B] Bruna, J. et al \\\"Super-resolution with deep convolutional sufficient statistics.\\\"\\u00a0ICLR 2016.\\n[C] Ledig, C. et al. \\\"Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.\\\"\\u00a0CVPR. Vol. 2. No. 3. 2017.\\n[D] S\\u00f8nderby, C. K., et al. \\\"Amortised map inference for image super-resolution.\\\"\\u00a0arXiv preprint arXiv:1610.04490(2016).\\n[E] Bahdanau, D. et al \\\"Neural machine translation by jointly learning to align and translate.\\\"\\u00a0arXiv (2014).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"No new insight is proposed. 
The techniques are not specifically designed for the super-resolution task. The experimental results are also weak.\", \"review\": \"This paper proposes to improve the system resource efficiency of super-resolution networks.\n\nFirst, I am afraid all the techniques considered in this paper have been investigated in previous works. Thus no new idea is proposed in this work. It is also not clear why these improvements are particularly suitable for the task of super-resolution. In my viewpoint, these techniques can actually be used to improve a variety of network architectures in both high-level and low-level vision tasks.\n\nSecond, the experimental results are also weak. As this work aims to address super-resolution tasks, at least visual comparisons between the proposed methods and other state-of-the-art approaches should be included in the experimental part. But unfortunately, no such qualitative results are presented in the manuscript. \n\nFinally, the presentation of the paper should also be carefully proofread and revised.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
ryxMX2R9YQ
CGNF: Conditional Graph Neural Fields
[ "Tengfei Ma", "Cao Xiao", "Junyuan Shang", "Jimeng Sun" ]
Graph convolutional networks have achieved tremendous success in the task of graph node classification. These models can learn better node representations by encoding the graph structure and node features. However, the correlation between the node labels is not considered. In this paper, we propose a novel architecture for graph node classification, named conditional graph neural fields (CGNF). By integrating conditional random fields (CRF) into graph convolutional networks, we explicitly model a joint probability of the entire set of node labels, thus taking advantage of neighborhood label information in the node label prediction task. Our model has both the representation capacity of graph neural networks and the predictive power of CRFs. Experiments on several graph datasets demonstrate the effectiveness of CGNF.
[ "graph neural networks", "energy models", "conditional random fields", "label correlation" ]
https://openreview.net/pdf?id=ryxMX2R9YQ
https://openreview.net/forum?id=ryxMX2R9YQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rklyOZaVx4", "SJxmaRYzJ4", "S1eg0jkc0X", "rygYcsk5Cm", "B1x5Ls1qR7", "Syx6o21R2Q", "HylhCmRp2Q", "SyeJdNZYhm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545027942800, 1543835323405, 1543269319598, 1543269264813, 1543269201684, 1541434532746, 1541428180308, 1541112934967 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1337/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1337/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1337/Authors" ], [ "ICLR.cc/2019/Conference/Paper1337/Authors" ], [ "ICLR.cc/2019/Conference/Paper1337/Authors" ], [ "ICLR.cc/2019/Conference/Paper1337/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1337/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1337/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper introduces conditional graph neural fields, an approach that combines label compatibility scoring of conditional random fields with deep neural representations of nodes provided by graph convolutional networks. The intuition behind the proposed work is promising, and the results are strong.\", \"the_reviewers_and_the_ac_note_the_following_as_the_primary_concerns_of_the_paper\": \"(1) The novelty of this work is limited, since a number of approaches have recently combined CRFs and neural networks, and it is unclear whether the application of those ideas to GCNs is sufficiently interesting, (2) the losses, especially EBM, and the use of greedy/beam-search inference was found to be quite simple, especially given these have been studied extensively in the literature, and (3) analysis and adequate discussion of the results is missing (only a single table of numbers is provided).\\nAmongst other concerns, the reviewers identified issues with writing quality, lack of clear motivation for CRFs, and the selection of the benchmarks.\\n\\nGiven the feedback, the authors responded with comments, and a revision that removes the use of EBM loss from the paper, which the reviewers appreciated. However, most of the concerns remain unaddressed. Reviewer 2 maintains that CRFs+NNs still need to be motivated better, since hidden representations already take the neighborhood into account, as demonstrated by the fact that CRF+NNs are not state-of-art in other applications. Reviewer 2 also points out the lack of a detailed analysis of the results. Reviewer 2 focuses on the simplicity of the loss and inference algorithms, which is also echoed by reviewer 2 and reviewer 1. Finally, reviewer 1 also notes that the datasets are quite simple, and not ideal evaluation for label consistency given most of them are single-label (and thus need only few transition probabilities).\\n\\nBased on this discussion, the reviewers and the AC agree that the paper is not ready for acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Simple loss/inference, lack of thorough evaluation\"}", "{\"title\": \"Thanks for the revision\", \"comment\": \"Thanks for the revision. But my main concerns (weakness in the learning algorithm and experiment results) are not addressed.\\n\\nw.r.t. motivation: I can fully understand why combining CRF and graph neural nets might be helpful, and indeed such combinations have been explored before in other domains like sequence and image pixel labeling. 
However as far as I know at least for image segmentation, if you look at popular benchmarks like CityScapes or MS-COCO, the top-of-leaderboard entries are no longer using CRFs as the key component in the model. This is largely due to the fact that the deep neural net can already model the dependency across input through the representations.\\n\\nw.r.t. the question: As to \\u201ca lot of questions about the proposed model\\u201d and \\u201cstudies about this model from more angles\\u201d, could you elaborate some points?\\n\\nWhat I meant was, whenever you propose a new model with many new parts, we want to know as much information about it as possible from every aspect. Currently all we know about the model comes from the 7 numbers in Table 3. That's not enough to understand the properties of the model. Admittedly understanding deep models is hard but we can do better.\"}", "{\"title\": \"response, discussion and revision\", \"comment\": \"We appreciate your detailed comments very much. The following are the clarification of some questions mentioned in the review.\\n1. Motivation: Although deep neural networks achieve good results in most prediction tasks, to our best knowledge, the output dependency has not yet been modeled in the representations. We can easily take two examples into consideration: (1). RNN-CRF used for sequence tagging and dense-CRF used for image segmentation. These models add CRF layer on top of a deep neural architecture in order to capture the tag dependency, and both achieved the state-of-the-art in their tasks. Obviously the output dependency is not modeled in the original deep representation, so adding the CRF layer could largely improve the results. (2). Adding a regularization term for labels improves the original deep neural networks, e.g. \\u201cDeep CNN with Graph Laplacian Regularization for Multi-label Image Annotation (Mojoo et al. 2017)\\u201d, and \\u201cRegularizing Prediction Entropy Enhances Deep Learning with Limited Data (Dubey et al. 2017)\\u201d. These examples demonstrate the usefulness of including output dependency in a deep neural network. Similarly for graph neural network, we believe that the output dependency could be useful as well.\\n\\n2. Inference: We have removed the energy loss minimization section in the revised paper. One motivation of using pseudo-likelihood approximation (or even EBM loss) is to derive an exact form to approximate the objective function, thus making the back-propagation easily conducted in an end-to-end framework. Mean-field may be an alternative inference method, we will consider to adapt it to the graph structure and apply it to our model in future work.\\n\\n3. As to the benchmark data, it may be true that \\u201cmost of the effort will be put in to prevent overfitting\\u201d. In fact, the pairwise energy in our model can also be seen as a regularization, which prevents overfitting to some extent.\\n\\n4. Experiments: We tried a new pytorch implementation of GraphSAGE (https://github.com/williamleif/graphsage-simple) on our datasets and the accuracies seem to become normal. The new results are updated in the paper. As to \\u201ca lot of questions about the proposed model\\u201d and \\u201cstudies about this model from more angles\\u201d, could you elaborate some points?\"}", "{\"title\": \"response, discussion and revision\", \"comment\": \"Thank you for the review and useful suggestions. Please find our response as follows:\\n1. 
Why normalized A: \nNotice that the pairwise energy is only calculated for two neighbor nodes, so the diagonal element of \\hat{A} is never used. Thus it has the same effect as using the original adjacency matrix (except that the learned weight matrix U will have a different scale). Using \\hat{A} instead of A is just for computational convenience.\n\n2. About the energy loss (10): \nYes, we admit the loss (10) may not work, but it is not necessarily incorrect. In practice it can sometimes still achieve good results, as we showed in our experiments. Moreover, to avoid the problem of using loss (10), we removed the section on EBM-based optimization in the revised paper.\n\n3. The reason we did not choose the mean-field based inference algorithm in (Shuai Zheng et al., 2015) was that it is originally used for CNNs and the CRF inference is indeed transformed into a CNN module. But it cannot be directly applied to our case, which has a different energy function and different data structures. For example, the message passing scheme cannot be treated as a convolution layer anymore. Using the pseudo-likelihood approximation (or even the EBM loss) could give us an exact form to approximate the objective function, thus easily making the framework end-to-end. But we do admit that mean-field inference may be adapted to the graph neural network and we will explore it in future work.\"}", "{\"title\": \"response, discussion and revision\", \"comment\": \"Thank you for the reviews. Here are the clarifications about the novelty and the experimental part:\", \"insights\": \"As we have claimed in the paper, our work has novelties from two perspectives.\n(1) From the perspective of node classification, we improve graph neural networks by considering the label compatibility, which is a non-trivial problem. Combining GCN and CRF is the first work in this area. (2) From the perspective of neural network structure, we admit that adding a CRF to a GNN is similar to adding a CRF to other deep neural networks, such as RNN-CRF for sequence tagging and dense-CRF for image segmentation. However, how to formulate the energy function and how to optimize the parameters are quite different in each model. The inference method for a sequence model or a CNN-based model cannot be directly applied to our case.\", \"experiments_abou_deepwalk\": \"For semi-supervised learning, we derive the embeddings of DeepWalk and Node2vec on all nodes and then train the node label classification using a one-layer MLP on the training data. This is almost identical to what is done for the semi-supervised graph neural networks. We have modified the paper and clarified this part.\"}", "{\"title\": \"Novelty is incremental.\", \"review\": \"This paper proposes a conditional graph neural network to explore the label correlation. In fact, this method combines a graph convolutional neural network and a CRF together. The novelty is incremental. No new insight is provided toward this topic.\n\nFor experiments, how did you train DeepWalk (Node2vec)? By using all nodes or the selected training set? It should be clarified in the paper. \n\nAdditionally, Table 3 says the result of semi-supervised methods. But how did you conduct semi-supervised learning for DeepWalk or Node2vec?\n\n===================\", \"after_feedback\": \"Thanks for the authors' feedback. Some of my concerns have been addressed. But the novelty is still not significant. On the other hand, the dataset used in this paper is simple. 
Specifically, at least the first 3 datasets are single-label and the number of classes is not large. They are too simple to support the claim. It's better to use multi-label datasets to show that the proposed method can really capture the correlation of labels.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting idea but not solid enough\", \"review\": \"The authors combine graph convolutional neural networks and conditional random fields to get a new model, named conditional graph neural fields. The idea of the paper is interesting, but the work is not solid enough. Detailed comments are as follows.\n\n1. In the proposed model, the authors treat the pairwise energies as priors and they do not depend on any features. Unlike the usual graph model in Eq (4), the authors further use the normalized \\hat A as a scale factor in the pairwise term. What is the intuition for this?\n\n2. The loss (10), which the authors claim that they are using, may not work. In fact, the loss cannot be used for training most architectures: ``while this loss will push down on the energy of the desired answer, it will not pull up on any other energy.'' (LeCun et al. 2006, A tutorial on energy-based learning). For deep structured model learning, please use piecewise learning, or joint training using some common CRF loss, such as the log-loss. In fact, the authors are not using the energy-based loss as they have constraints on the unary and pairwise terms. In fact, if we ignore the pairwise term in (11), the loss becomes the log-loss for GCN. With the pairwise term, the loss is somewhat like the loss for piecewise learning, but the constraint on U is wrong (for piecewise learning, U should sum to 1).\n\n3. The inference procedure is so simple that it can hardly find near-optimal solutions. In fact, there exists an efficient mean-field based inference algorithm (Shuai Zheng et al., 2015). Why did the authors choose a simple but poor inference procedure?\n\nComments After rebuttal\n==========\nThank you for addressing my concerns.\n\nThe response and the revision resolved my concern (1). However, the most important part, the possibly problematic loss, is not resolved. It is true that sometimes (10) can achieve good results with good regularizers or a good set of hyperparameters. However, theoretically, the loss only pushes down on the energy of the desired answer, which may make the training procedure quite unstable. Thus I still think that a different loss should be used here.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Reviewer comment\", \"review\": \"This paper proposes a combination of graph neural networks and conditional random fields to model the correlation between node labels in the output. In typical graph neural nets the predictions for nodes are conditionally independent given the node representations. This paper proposes to use a CRF to compensate for that. In terms of the approach, the authors used GCNs to produce unary potentials for the CRF, and have the pairwise potentials on each edge to model the correlation of labels of neighboring nodes. 
Learning is done by optimizing pseudo-likelihood and the energy loss, while inference is performed through a couple heuristic processes.\\n\\nCombining neural nets with CRFs is not a new idea, in particular this has been tried before on image and sequence CRFs. It is therefore not surprising to see an attempt to also try it for graph predictions. The main argument for using a CRF is its ability to model the correlations of output labels which was typically treated as independent. However this is not the case for deep neural networks, as it already fuses information from all over the input, and therefore for most prediction problems it is fine to be conditionally independent for the output, as the dependence is already modeled in the representations. This is true for graph neural networks as well, if we have a deep graph neural net, then the GNN itself will take care of most of the dependencies between nodes and produce node representations that are suitable for conditionally independent output predictions. Therefore I\\u2019m not convinced that CRFs are really necessary for solving the prediction tasks tried in this paper.\\n\\nThe learning and inference algorithms proposed in this paper are also not very convincing. CRFs has been studied for a long time, and there are many mature algorithms for learning them. We could do proper maximum conditional likelihood learning, and use belief propagation to estimate the marginals to compute the gradients. Zheng et al. (2015) did this for convnets, we could also do this for graph CRFs as belief propagation can be easily converted into message passing steps in the graph neural network. Pseudo-likelihood training makes some sense, but energy loss minimization doesn\\u2019t really make sense and has serious known issues.\\n\\nOn the other hand, the proposed inference algorithms does not have good justifications. Why not use something standard, like belief propagation for inference again? Our community has studied graphical models a lot in the last decade and we have better algorithms than the ones proposed in this paper.\\n\\nLastly, the experiments are done on some standard but small benchmarks, and my personal experience with these datasets are that it is very easy to overfit, and most of the effort will be put in to prevent overfitting. Therefore more powerful models typically cannot be separated from overly simple models. I personally don\\u2019t care a lot about the results reported on these datasets. Besides, there are a lot of questions about the proposed model, but all we get from the experiment section are a few numbers on the benchmarks. I expect studies about this model from more angles. One more minor thing about the experiment results: the numbers for GraphSAGE are definitely wrong.\\n\\nOverall I think this paper tackles a potentially interesting problem, but it isn\\u2019t yet enough to be published at ICLR due to its problems mentioned above.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
HJxfm2CqKm
Discovering General-Purpose Active Learning Strategies
[ "Ksenia Konyushkova", "Raphael Sznitman", "Pascal Fua" ]
We propose a general-purpose approach to discovering active learning (AL) strategies from data. These strategies are transferable from one domain to another and can be used in conjunction with many machine learning models. To this end, we formalize the annotation process as a Markov decision process, design universal state and action spaces, and introduce a new reward function that precisely reflects the AL objective of minimizing the annotation cost. We seek to find an optimal (non-myopic) AL strategy using reinforcement learning. We evaluate the learned strategies on multiple unrelated domains and show that they consistently outperform state-of-the-art baselines.
[ "active learning", "meta learning", "reinforcement learning" ]
https://openreview.net/pdf?id=HJxfm2CqKm
https://openreview.net/forum?id=HJxfm2CqKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ByxHsYL3J4", "HyecsNFYJV", "BylT2JLYA7", "S1l6yTrtAX", "BkxIFhSKRX", "HkeE1fkTpQ", "rJxMVWxQTQ", "SJgx1F79hX", "SJxio9y9hX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544477084827, 1544291489941, 1543229365040, 1543228644564, 1543228542121, 1542414812471, 1541763370010, 1541187799693, 1541171875232 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1336/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1336/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1336/Authors" ], [ "ICLR.cc/2019/Conference/Paper1336/Authors" ], [ "ICLR.cc/2019/Conference/Paper1336/Authors" ], [ "ICLR.cc/2019/Conference/Paper1336/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1336/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1336/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1336/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper provides further insight into using RL for active learning, particularly by formulating AL as an MDP and then using RL methods for that MDP. Though the paper has a few insights, it does not sufficiently place itself amongst the many other similar strategies using an MDP formulation. I recommend better highlighting what is novel in this work (e.g., more focus on the reward function, if that is key). Additionally, avoid general statements like \\u201cTo this end, we formalize the annotation process as a Markov decision process\\u201d, which suggests that this is part of the contribution, but as highlighted by reviewers, has been a standard approach.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"More focus is needed on what is novel in this work\"}", "{\"title\": \"Thanks for the response\", \"comment\": \"Thanks to the authors for their clarifications.\\n\\nIn general my feeling is still \\\"not quite good enough\\\" on novelty grounds.\\n\\nGiven the similarities in the current approach and various recent papers, the analysis and evaluation should be top notch to quality for a top tier ICLR publication. \\n\\nVarious issues design choices and parameters could be better justified and evaluated as highlighted by other reviewers. \\nAlso, when I mentioned CIFAR, I did not mean transfer across categories within CIFAR. I meant include datasets like CIFAR among the training datasets. And/Or show that once trained on the UCI datasets used, evaluate whether it can transfer to CIFAR. This might work given the type of representations used in this paper, and would be a clearer plus on Pang'18 whose dataset embedding is not ideal for images. \\nAlso, when I said transfer across classifiers. I meant evaluate whether a model trained for SVM can successfully test on LogReg, and vice-versa, etc. Not whether a unique RL model can be trained for each, as seems to be the case in the current result.\"}", "{\"title\": \"Response to minor comments\", \"comment\": \"To conclude, we briefly mention the other smaller comments by the reviewers.\\n\\nDistance measure \\nWe used the cosine distance as explained in the experimental setup. It was chosen as its scale is independent of the dimensionality of the data. 
Other distance measures could be used, but even this simple distance measure allows to reach promising results.\\n\\n\\\"For example, the probability that the classifier assigns to a datapoint suits this purpose because most classifiers estimate this value.\\\" \\nBy \\\"the probability that the classifier assigns to a datapoint\\\" we mean p(y|x) and not p(x) or p(y,x).\\n\\nZhang and Chaudhuri, 2015\\nThank you for bringing the paper by Zhang and Chaudhuri, 2015 to our attention, it is an important piece of work to discuss in the related work. This related work analyses the setting with a strong and weak labeller, while in our work we study a general case with one annotator. Besides, the focus of our work is more on the empirical aspects of AL.\\n\\nTransfer between classifiers\\nSo far, we have show that our results hold for logistic regression and SVM (two different families of methods), but it could be interesting to extend it to other classifiers too. Besides, our proposed model can seamlessly transfer to various problems such as regression (by modifying the stopping condition) or multi-class classification (by extending the state representation to a matrix instead of vector), that could be interesting to study as well.\\n\\nRelating to supervised learning\\nIt would be interesting to study the contribution of our state and action representation if they are applied with a supervised active learner (like LAL), however, the reward structure is only possible when applied with reinforcement learning.\"}", "{\"title\": \"Response to common concerns (continuation)\", \"comment\": \"Finally, we would like to answer the common questions and concerns regarding the choice of parameters and the experimental study.\\n\\nPARAMETERS\\n\\nWe have chosen 30 datapoints for validation as smaller values result in slightly decreased performance and bigger values do not help to improve the result but make the execution slower. However, the method is not sensitive to this choice of parameter and one value was suitable across all datasets.\\n1000 iterations were chosen by tracking the performance of RL policy on a validation set and stopping the training when the rewards stops growing. In the vast majority of cases it happened before 1000 iterations and thus we set a rule to stop the execution at this point.\\n\\nEXPERIMENTS\\n\\nThe setting of transfer between various classes of complex datasets (such as CIFAR) is well studied in the literature (Ravi & Larochelle, 2018; Liu et al., 2018; Bachman et al., 2017; Fang et al., 2017; Contardo et al., 2017) and that is why we have chosen to concentrate on transfer between datasets. In this case we need many datasets of comparable difficulty level. Although we have tried to demonstrate the benefits of our method on close-to-realistic settings, we acknowledge that the experiments could be run with even more complex data. \\n\\nHaving classification problems of varying difficulties could lead to some problems contributing more to the training procedure. This is why we performed training on the problems of comparable difficulty (UCI datasets). In practice, the target of general-purpose AL is to perform the best in expectation for any new coming datasets. Then, this bias during training could be beneficial because it concentrates on more difficult cases that can bring more advantage at test time. 
Nevertheless, as training is performed on datasets of varying difficulty, at test time the AL strategy tries to understand (based on state and action features) how difficult the problem is and the selection policy is adjusted accordingly.\n\nThank you for bringing our attention to the issue with Table 2: the strange behaviour is caused by a typo where we swapped the 2nd and 4th lines of Table 2. With this typo corrected, the explanation in Section 4.3 is consistent with the table.\n\nAs stated, we ran 500 trials in every experiment. Often the difference between our method and the second best method is quite small. Despite this fact, the average results in Table 2 show the benefit of the proposed method. In addition to this, we ignored small differences in methods (and considered both of them winning) when reporting the ranking results.\n\nOur stopping criterion is motivated by the practical scenario when achieving 98% of prediction quality is enough for a final user if it results in significant cost savings. We varied the stopping conditions as we increased the size of the total target set (98% of quality obtained when trained on 100 datapoints, 200 datapoints or 500 datapoints).\"}", "{\"title\": \"Response to common concerns\", \"comment\": \"We would like to thank the reviewers for their feedback that will help to improve our manuscript. In this reply, we will try to address the reviewers\u2019 common concerns on the novelty, the representation of states and actions, and the algorithm that is chosen for reinforcement learning.\n\nNOVELTY\n\nActive learning with reinforcement learning has become trendy recently, and several works have led to similar ideas with different problem formulations and models. We believe that the literature on this subject is not yet complete and the question of general-purpose AL strategies deserves additional treatment. We would like to mention two features of our method: 1) simplicity and 2) view on the problem from the perspective of budget minimisation given quality constraints. 1) The simplicity of state and action representations in our proposed method makes our model conceptually easy to understand and implement, and it could be of interest to practitioners. Besides, our method is transparent as it optimises for the same metric that is used for testing. 2) Although the view on the problem as minimising the amount of annotations to reach a given performance was used before in the theoretical literature on AL, it opens a new perspective on AL formulation in the form of an MDP with a very simple and domain-independent reward function. The reward function is conceptually different from prior work on AL with RL.\n\nSTATE AND ACTION\n\nOur state representation is just the simplest representation that can be obtained given a classifier and a dataset. First, we just apply a classifier to validation datapoints. Then, we sort the scores. This is needed because the scores serve as an input to a fully connected layer of the neural network that does not allow for permutations. Intuitively, the active learner can extract information on the confidence of the classifier or the proportions of various class predictions. Combined with the score of a potential datapoint (part of the action), the active learner can estimate which part of the distribution the datapoint comes from.\nIn Figure 1 the datapoints are the same in three experiments. We did not use the prediction error in our visualisation.
Although it would definitely be more informative than the predicted score, such information is not available to the learner at test time (because the validation set is unlabelled).\n\", \"the_action_representation_in_the_form_of_three_statistics_is_again_just_the_simplest_representation_that_relates_three_components_of_al_problem\": \"classifier, labelled and unlabelled datasets. It has been noticed by reviewers that two of these statistics are related to the sparsity of the data and that they represent a heuristic approximation of the density of the data. We did not explore other statistics because our primary motivation was to keep our method as simple as possible.\n\nREINFORCEMENT LEARNING ALGORITHM\n\nIn fact, we cannot use a standard off-the-shelf Q-learner, such as the DQN procedure, because it is not designed to deal with the continuous actions which we encounter in our problem formulation. We modified the way optimisation is performed: instead of having discrete actions, we have actions that are represented by vectors, and they serve as an input to the DQN instead of being related to several outputs. Although this is a small modification, it was not done in other AL-RL methods, which considered only policy gradient optimisation for the task. Adapting DQN for use in this scenario directly allows us to exploit the benefits of bootstrapping for the AL task.\"}", "{\"title\": \"An intuitive combination of reinforcement learning and active learning.\", \"review\": \"\", \"i_do_not_recommend_this_paper_for_publication_in_iclr_because_i_believe\": \"1) the work is too incremental\n2) the comparison to baseline and competing methods is incomplete\n3) some design decisions of the proposed method are not well motivated.\n\nI appreciated the clarity of the writing, and the paper organization. I also believe that the proposed method is quite intuitive, and is a good addition to the field. Finally, I appreciate that sufficient experimental details are available within the paper to be able to easily reproduce the results.\", \"details\": \"My points (1) and (2) are highly related, so I will discuss both simultaneously. I find that this paper makes only incremental forward progress from the Pang 2018 paper and the Konyushkova 2017 paper. The methodology here looks very similar to the SingleRL method, which Pang 2018 notes can be considered a special case of Konyushkova 2017's method. I think that the work in this paper would be sufficient to stand on its own if it performed a convincing comparison to SingleRL and/or MLP-GAL from Pang 2018. I recognize that this paper references why no such comparison currently exists, but I think this comparison would be extremely valuable to the paper.\n\nA further comment on my point (2): I do not find the comparisons to baseline methods to be entirely convincing. Of note, only the average performance for each method is reported. I'm curious about the variance---and more specifically the standard error and number of independent runs---of each of the reported results. On many of the datasets, the performance difference between the proposed method and uncertainty sampling is quite small in table 1.\n\nA final comment on point (2): I would have liked to see more exploration of different models. I think table 2 is quite informative, showing notable differences between simple baseline AL methods. I would have liked to see table 2 with more classifiers and with more competing AL methods. Because logistic regression is a simple model, the differences between AL methods may be more subtle.
Perhaps a more complex model (say a single hidden layer NN) would show more notable differences.\\n\\nFor point (3), I would have liked to see either an exploration of other design decisions or an explanation of given design decisions. For instance, why only use 30 hold-out samples for the state? I imagine the proposed method would be fairly sensitive to this choice. Another unexplained design decision was using a maximum budget of 100 datapoints. Table 2 shows some extremely interesting interactions with this budget in its comparison between LogReg-100 and LogReg-200, and further explanation would have been useful. Finally, I would have liked to see some motivation for choice of stopping condition. Using the stopping condition of 98% of maximum performance may have some biasing effect of each method, and it would helpful to have some motivation behind this choice.\", \"questions\": [\"Why did uncertainty sampling have such limited benefits on LogReg-200 in table 2? This was a surprising result to me, as uncertainty sampling consistently outperformed most other methods.\", \"Why is there a disparity between the results for the SVM in table 2 and the discussion in the first paragraph of section 4.3?\", \"How does choice of final performance metric affect all methods? Choosing final performance to be 98% of maximum performance could have a major effect on each method. Because the proposed method is non-myopic, I would expect that it performs well when this value is large but would perform poorly with a smaller percentage of maximum performance.\", \"Is the proposed method sensitive to number of samples used to compute the state?\", \"What does figure 1 show? Are the same 30 samples used for all three subfigures? Perhaps this would more interpretable if, instead of showing the predicted class, this figure showed the prediction error.\", \"Minor nitpicks (did not influence decision):\", \"The datasets are 1-based indexed sometimes and 0-based indexed sometimes, even with disparities within a single paragraph.\", \"Figure 1 appears a long time before it is discussed, which made it difficult to understand what was going on.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A Reinforcement Learning approach to Active Learning\", \"review\": \"The authors suggest to model active learning (AL) as a Markov Decision Process to try to learn the best possible AL strategy across related domains.\\n\\nThe paper is well-written and structured -- although the background section could be expanded. Sec 3 presents the method in a clear and straightforward manner. \\n\\nMy main concern with regards to the paper is novelty. The authors mention two main contributions, the first one being to defined the AL objective to minimize the number of annotations required to achieve a given prediction quality, instead of maximizing performance given an annotation budget. There has been AL approaches from that perspective in the past (e.g., https://arxiv.org/pdf/1510.02847.pdf). \\n\\nThe second contribution has to do with a procedure to learn the AL strategy using data from different domains (with available labels). 
Again, the literature in transfer learning in Reinforcement Learning is extensive and should be discussed.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"OK paper. Well written, but weak novelty.\", \"review\": \"Summary: This paper studies the recently problem of learning active learning (LAL). It sets up a MDP where the the state is determined by the labeled, unlabelled datasets and classifier, the acton is to query a point, the reward is linked to classifier test set performance improvement and the transition is to update the base classifier. Recent Q-learning algorithms are used to perform the optimisation. The results show that it outperforms some classic handcrafted AL algorithms and some prior LAL algorithms. A feature of this paper is that the method is relatively simple compared to some prior LAL methods, and also that it learns policies that can transfer successfully across diverse heterogenous datasets.\", \"strengths\": [\"Good results.\", \"Nice that it works well while being simpler and faster than prior transferrable method MLP-GAL.\", \"Generally well written.\", \"Fig 4 is interesting.\"], \"weaknesses\": [\"Novelty/originality is rather incremental.\", \"Experiments are still on toy datasets.\"], \"specifics\": \"1. Novelty: The concept of formulating AL as a MDP for optimisation is now a standard idea. The optimisers used are recent off-the-shelf Q-learners. The result is that this method is similar to a non-myopic extension of LAL (Konyushkova\\u201917) but several papers already did non-myopic AL. In particular it\\u2019s very similar to the SingleRL method in (Pang\\u201918). The only differences are smallish design parameters like: slightly different reward function definition, use Q-learning instead of policy-gradient optimiser, and slightly different state featurisation. The improved sample/speed-efficiency vs SingleRL is likely relatively automatic due to use of recent Q-learning optimisers, rather than vanilla PG optimiser of SingleRL. Not clear that benefit comes from something uniquely contributed here. Other limitations of various prior LAL work, such as binary classifier only, are not alleviated here.\\n2. Experiments: The experiments are on toy datasets. Particularly given the small novelty, then evaluation should be much more. For example: 1. How well does it work when transferred to a relatively less toy dataset such as CIFAR. 2. To what extent can it transfer across classifiers rather than only across datasets? \\n3. The state representation as a sorted list of scores is rather unintuitive. Is there any intuition on what smart decisions the model could be using this to make?\\n4. The featurisations used are not very standard: Like the classifier state sorted score list, and the action featurisation (instance score, instance distance to class, instance distance to unlabelled). It would be good to evaluate this featurisation with a supervised active learner (like LAL), in order to disambiguate whether the good performance comes from these feature choices, or from the recent RL algorithms used to optimise. Similarly for the choice of reward function.\\n5. How does the proposed method deal with a suite of training datasets for AL that are of greatly varying difficulty. 
A relatively very easy dataset needing << 100 examples to reach threshold would generate few AL training examples due to early stopping. A very hard dataset might use all 100 examples. Does it mean that easy datasets contribute less to training than hard ones?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"This paper describes the use of reinforcement learning to learn active learning strategies. This paper attempts to increase the scope of the learning of active learning strategies to transfer across very different datasets.\", \"review\": \"This paper presents a laudable attempt to generalize the learning of active learning strategies to learn general strategies that apply across many different datasets that have variables of different, not pre-determined, types, and apply the learned active learning strategies to datasets that are different from what they have been learned with. The paper is written quite clearly and is clear in its discussion of what its advance is beyond the current state of the art.\\n\\nUnfortunately, the motivation of the details of the algorithm and the experiment analysis leave the paper short of what is needed to truly assess the value of this area of work and; therefore, short of what is needed for publication in ICLR. The most notable shortcoming is on page 4, at the bottom, where the actions are described. Among the components of the actions are statistics related to the dataset---the average distance from the chosen point to all the labeled data, and the average distance from the chosen point to all the unlabeled data. The authors do not provide a motivation for the use of these particular statistics. Additionally, the authors did not explore any other statistics. I should think that statistics relevant to the sparsity of the data (e.g., how well they cluster). Additionally, what distance measure is being used? A variety of distance metrics should be explored, such as d-separation for continuous variables and Hamming distance for discrete variables, should be tested, as they intuitively seem likely to affect the results. Additionally, many values are chosen for the experiments without motivation and without testing a variety of values (e.g., 30 for the size of the dataset used to calculate the reward, 1000 RL iterations, and others).\\n\\nIn the experiments, there needs to be discussion of how much variety there is in the different datasets in terms of their statistical properties that are relevant to active learning, such as how well the data cluster? That would help in understanding why the new algorithm performs as it does relative to the baseline.\", \"one_relatively_minor_point\": \"The authors state on page 3, \\\"For example, the probability that the classifier assigns to a datapoint suits this purpose because most classifiers estimate this value.\\\" This is a bit misleading---only generative classifiers would do this, not discriminative classifiers.\", \"pros\": \"1. Very clear writing.\\n2. Good motivation for the general problem.\\n3. Precise description of algorithm.\", \"cons\": \"1. Poor motivation for the particular algorithm implementation---features used in the actions, parameter values chosen.\\n2. Lack of experiments with different choices for features and parameter values.\\n3. 
Lack of assessment of the dataset characteristics and how they relate to algorithm performance.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HkezXnA9YX
Systematic Generalization: What Is Required and Can It Be Learned?
[ "Dzmitry Bahdanau*", "Shikhar Murty*", "Michael Noukhovitch", "Thien Huu Nguyen", "Harm de Vries", "Aaron Courville" ]
Numerous models for grounded language understanding have been recently proposed, including (i) generic models that can be easily adapted to any given task and (ii) intuitively appealing modular models that require background knowledge to be instantiated. We compare both types of models in how much they lend themselves to a particular form of systematic generalization. Using a synthetic VQA test, we evaluate which models are capable of reasoning about all possible object pairs after training on only a small subset of them. Our findings show that the generalization of modular models is much more systematic and that it is highly sensitive to the module layout, i.e. to how exactly the modules are connected. We furthermore investigate if modular models that generalize well could be made more end-to-end by learning their layout and parametrization. We find that end-to-end methods from prior work often learn inappropriate layouts or parametrizations that do not facilitate systematic generalization. Our results suggest that, in addition to modularity, systematic generalization in language understanding may require explicit regularizers or priors.
[ "systematic generalization", "language understanding", "visual questions answering", "neural module networks" ]
https://openreview.net/pdf?id=HkezXnA9YX
https://openreview.net/forum?id=HkezXnA9YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJgm-a_WlE", "BklMFEOy1V", "Byg9M9Vyy4", "Hkei5tV1JE", "B1xq62y0CX", "SylZPzWcA7", "BJe48Db6Tm", "rkeU3aB86X", "H1xbP6B8TX", "S1xJ92HIaX", "rylJdHwn2Q", "Hyere82c2m", "rJe-UgPqnX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544813819037, 1543631994281, 1543617041869, 1543616914870, 1543531713995, 1543275096975, 1542424396311, 1541983662488, 1541983576764, 1541983367475, 1541334375197, 1541223916595, 1541201992582 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1335/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1335/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1335/Authors" ], [ "ICLR.cc/2019/Conference/Paper1335/Authors" ], [ "ICLR.cc/2019/Conference/Paper1335/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1335/Authors" ], [ "ICLR.cc/2019/Conference/Paper1335/Authors" ], [ "ICLR.cc/2019/Conference/Paper1335/Authors" ], [ "ICLR.cc/2019/Conference/Paper1335/Authors" ], [ "ICLR.cc/2019/Conference/Paper1335/Authors" ], [ "ICLR.cc/2019/Conference/Paper1335/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1335/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1335/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper generated a lot of discussion. Paper presents an empirical evaluation of generalization in models for visual reasoning. All reviewers generally agree that it presents a thorough evaluation with a good set of questions. The only remaining concerns of R3 (the sole negative vote) were lack of surprise in findings and lingering questions of whether these results generalize to realistic settings. The former suffers from hindsight bias and tends to be an unreliable indicator of the impact of a paper. The latter is an open question and should be worked on, but in the opinion of the AC, does not preclude publication of this manuscript. These experiments are well done and deserve to be published. If the findings don't generalize to more complex settings, we will let the noisy process of science correct our understanding in the future.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta-review\"}", "{\"title\": \"Thanks for the response!\", \"comment\": \"Thanks for the response, and sorry for the slow reply!\\n\\nAfter reading the response and revised paper, I'm leaving my review score unchanged, because I think my main concerns still stand. I didn't find the results surprising, and I don't see evidence that these results would generalize to more complex tasks. I think if the paper is only reporting experiments on a toy task, it would need to uncover something really interesting. That said, I would encourage the authors to keep working on this exciting topic.\\n\\n> Reading prior work on visual reasoning may lead a researcher to conclude, roughly speaking, that NMNs are a lost cause, since a variety of generic models perform comparably or better. In contrast, our rigorous investigation highlights their strong generalization capabilities and relates them to the specific design of NMNs.\\n\\nI don't find this argument convincing. For example, we could easily design a rule-based system that would show very strong generalization abilities on your task. 
However, that would not persuade me that rule-based methods are not a lost cause for visual reasoning. I would really like to see some evidence that your results would generalize to more realistic tasks.\n\n> Notably, chain-structured NMNs were used in the literature prior to this work (e.g. in the model of Johnson et al., multiple filter_...[...] modules are often chained), so the fact that tree-structured NMNs show much stronger generalization was not obvious prior to this investigation and should be of high interest to the research community.\nAs mentioned by another reviewer, \u201cNeural Compositional Denotational Semantics for Question Answering\u201d shows systematic generalization with tree-structured NMNs, and goes much further with structure learning. I think you should at least explain how your results relate to this paper.\n\n> We are not sure if we fully understand the question \u201cCould you somehow test for if a given trained model will show systematic generalization?\u201d that R3 asked. \nSorry that this was unclear. I was wondering if you could test for this property without actually running on test data (maybe it converges faster, or the norm of the weights is lower; I have no idea). Knowing that might help us to regularize models properly during training.\n\n> All these experiments are repeated at least 5 times each, like you suggested in your review, although it\u2019s worth noting that the original version of the paper also reported results after multiple runs. \n\nBy \"large numbers of runs\", I was thinking more like thousands than five (I don't know if that is computationally practical). The question I was curious about is whether these models will ever find the right solution, or perhaps if they even have an inductive bias against finding it. This would be very helpful to know.\"}", "{\"title\": \"Kind request to respond for Reviewer 3\", \"comment\": \"Dear Reviewer 3,\n\nWe thank you again for your informative review that you wrote before the revision period. In our response and the revised version of the paper we tried our best to address your concerns. We would highly appreciate getting some feedback from you regarding the changes that we have made and the arguments that we have presented. In particular, we report that NMN-Chains (with a lot of inductive bias built-in and also used in prior work such as Johnson et al. 2017) generalize poorly compared to even generic modules, and that layout/parameterization induction often fails to converge to the correct solution. We believe both these findings are quite surprising. We also report new experiments with the MAC model, including a hyperparameter search, a comparison against end-to-end NMNs, and a qualitative exploration of the failure modes of this model. All these experiments are repeated at least 5 times each, like you suggested in your review, although it\u2019s worth noting that the original version of the paper also reported results after multiple runs. \n\nWe would highly appreciate a response on our newest revision and suggestions on how it could be improved. If you still think that the paper is uninteresting or not well executed, could you then suggest what specifically it is lacking?\n\nWe are sincerely hoping to hear from you.\"}", "{\"title\": \"Kind request to respond for Reviewer 2\", \"comment\": \"Dear Reviewer 2,\n\nThank you once again for the thoughtful and thorough review that you wrote before the revision period.
Our understanding of your review is that overall, you find the paper interesting and useful, but certain presentation and evaluation decisions, as well as the fact that we use a new dataset, did not allow you to recommend it more strongly. Since then we have improved the paper substantially by incorporating many of your suggestions, including but not limited to reporting mean performance over at least 5 runs in all experiments, comparing MAC and Attention N2NMN, and investigating different versions of the MAC model. We have also argued extensively why we think our decision to build SQOOP from scratch, rather than rely on Blender or ShapeWorld\u2019s rendering, will not have any negative consequences on our research field. \n\nA response from you on the updated version of our paper would be highly valuable to help us improve this work in the future. We would highly appreciate it if you could take a look at the revised paper and let us know if you think it is still merely marginally above the acceptance threshold, or if perhaps you find that it already deserves a higher rating. We would be grateful even for a short response from you, highlighting what issues in the paper have not been addressed, or what arguments in our response are still unconvincing.\n\nWe are sincerely hoping to hear from you.\"}", "{\"title\": \"Thanks for the detailed rebuttal, makes it a stronger paper\", \"comment\": \"Thank you for your detailed responses and updates to the paper. I do think the updates made in the paper make it clearer and above the acceptance threshold. I am convinced that it successfully analyzes an interesting set of questions and carefully studies this in a specific (albeit slightly narrow) notion of generalization.\n\nTherefore, I am updating the rating to above the acceptance threshold.\"}", "{\"title\": \"a greatly improved revision has been uploaded\", \"comment\": \"We are happy to present a new, substantially improved revision of the paper. We have polished our experimental setup (see details at the end of the message), performed many additional experiments as requested by the reviewers and improved the presentation of the results.\", \"most_important_changes_in_the_revision_include\": \"1) We report means and standard deviations for at least 5 (and at least 10 in some comparisons due to variance in performance) runs of each of the models. We switched to reporting error rates instead of accuracies in all tables in order to make our results easier to understand.\n2) Performance of the MAC baseline has somewhat improved, compared to what we reported in the original submission, but this model is still far from solving SQOOP for #rhs/lhs of 1, 2, 4, 8, and it fails sometimes even on #rhs/lhs=18. We performed an ablation study of MAC as requested by R2 and R3, in which we varied the number of hidden units, the number of modules and the level of weight decay (see Appendix B). Results for all hyperparameter settings that we tried are still hopelessly far from systematic generalization of the kind exhibited by NMN-Tree, although on average MAC models with 256 hidden units performed somewhat better (barely statistically significantly) than the default version with 128 hidden units that we used in our experiments. We also now report a qualitative analysis of the rare (3 out of 15) cases when MAC does generalize, showing that this is likely to be due to a lucky initialization. \n3) As suggested by R1, we added a DAG-like NMN-Chain-Shortcut model to the comparison.
We found that its generalization performance is in between those of NMN-Chain and NMN-Tree and is in general quite similar to the performance of generic models. \n4) We present additional results for NMN-Chain, showing that it does not generalize even when #rhs/lhs=18! We find this drastic lack of generalization highly surprising and not at all easily predictable without performing our study. \n5) We performed an analysis of the responses produced by an NMN-Chain model to answer R1\u2019s question as to why it performs so much worse than NMN-Tree. Our analysis has shown that there is almost no agreement in the test set responses of several NMN-Chain models, allowing us to conclude that NMN-Chain essentially predicts randomly on the test set.\n6) The results of the layout induction experiments have somewhat improved, without major changes to the conclusions.\n7) Perhaps the most significant changes have occurred in our parametrization induction results. We found that Attention N2NMN may generalize quite well (9 times out of 10) even for #rhs/lhs=2, and most unexpectedly, even when attention weights are not very sharp. The results on #rhs/lhs=1 have remained the same. Our new results suggest that Attention N2NMN lends itself to systematic generalization more than MAC, supporting the hypothesis expressed by R2.\", \"other_changes_include\": \"1) We cite \u201cNeural Compositional Denotational Semantics for Question Answering\u201d, as suggested by R2.\n2) We state explicitly in the text that our Find module outputs feature maps instead of attention maps, somewhat differently from the original Find modules from Hu et al.\n3) Appendix A with training details has been added. \n4) Appendix B, with some qualitative analysis of why some MAC runs generalized successfully and others failed, has been added. We also report an attempt to hard-code control scores (as requested by R2) in MAC, but that did not improve performance.\n5) We explain the motivation for the dataset generation procedure more clearly in Section 2, and also follow a suggestion by R3 and explain better why lower rhs/lhs is harder for generalization.\n\nWe thank all reviewers for their valuable suggestions that allowed us to greatly improve the paper. We believe that the revised paper should be of high interest to anyone working on language understanding, and we sincerely hope that reviewers will consider revisiting their evaluations.\n\nP. S. The changes in the results were caused by the following improvements in the experimental setup:\n1) We disabled the weight decay of 0.00001 that was the default in the codebase on top of which we started our project. This change allows for rare convergence to systematic solutions on the #rhs/lhs=1 split for MAC (3/15 runs).\n2) We found that the publicly available codebase for the FiLM model had redundant biases before batch normalization, and removing this redundancy has stabilised training on NMNs with the Find module, including Attention NMNs.\n3) In our preliminary experiments we set the learning rate for structural parameters to be higher than the one used for regular weights (0.001 vs 0.0001).
To simplify our setup, we reran all experiments with the same learning rate for all parameters.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank Reviewer 2 (R2) for their excellent and thorough review and for raising several particularly interesting points about modeling and evaluation.\n\nWhile we do agree with the reviewer\u2019s concerns that the proliferation of synthetic datasets may be counterproductive, we chose to create SQOOP instead of directly using existing datasets to keep things simple. R2 suggests that we could\u2019ve defined new objects out of (color, shape) tuples. We believe, though, that even if we used Blender (CLEVR) or ShapeWorld rendering to build a dataset for our studies, this would not make further experimentation any simpler, because even though the rendering would be the same, this would still constitute a new dataset. The entire code for generating SQOOP is merely 550 lines, and comes with an extremely simple set of command line arguments. This is to be contrasted with ~9500 lines of code in the ShapeWorld codebase, which aims to be universally usable, and hence is highly convoluted. Furthermore, in order to help researchers avoid the burden of \u201cdownloading and re-running a huge amount of code\u201d, we will release our codebase that contains implementations of all the models used in this study and comes with ready-to-use CLEVR and SQOOP bindings. \n\nWe thank R2 for their thoughtful suggestion to consider splits other than the one with heldout right-hand sides (rhs). We fully agree that other options exist, for example a split where different lhs and rhs objects are used for each relation, and that investigating such options would be interesting. At the same time, we do not think that these extra experiments would radically change the conclusions, and we note that even in its current form our paper hits the ICLR page limit. Our specific split was chosen based on the following considerations: we wanted to uniformly select a subset of object pairs for training set questions, in a way that guarantees that all object words are seen at training time. If we sampled a certain percentage of object pairs for training questions randomly, it could happen that certain words just never occur in the training set. Hence, we came up with the idea of having a fixed number of rhs objects for each lhs object. We note that this very split can also be seen as allowing a random (possibly zero) number of lhs objects for each rhs object, exhibiting sparsity on the lhs like R2 suggested. We will better explain the considerations above in the upcoming paper revision.\n\nApart from the above points of R2, we fully agree with their suggested changes and experiments and will incorporate almost all of these in the updated version of the paper. \n\n1) We followed R2\u2019s suggestions and improved the presentation of Table 1: we will report means and standard deviations for 5 runs for all our models. \n2) CNN+LSTM and RelNet baselines are being re-run with higher #rhs/lhs.\n3) We have run experiments with a varying number of MAC cells (3, 6, 12, 24) and found that using 12 cells performed best (and as well as using 24 cells). We believe that this has to do with lucky control score initializations. This, along with some new interesting qualitative investigations about the nature of control parameters that result in successful generalization, will be elaborated on in our updated manuscript.
\\n4) In our initial experiments, we found that conceptually simpler homogenous NMNs (of the form proposed by Johnson et al.) are already sufficient to solve even the hardest version of SQOOP. Hence, we chose to focus our study on this, arguably, more generic approach, and we adapted the Find module from (Hu et al) to output a full feature map, instead of an attention map. We believe it is highly interesting to include such a model in comparison, as Residual and Find represent two very distinct paradigms of conditioning modules on their language inputs. We agree that extra studies of NMNs with attention bottlenecks would be a interesting direction of the future work, but we also think that our paper is quite complete without this investigation and has enough interesting findings.\\n5) We will report performance of all baseline models on the #rhs/lhs=18 version of our dataset as well.\\n6) We also fully agree with R2\\u2019s excellent observation about the nature of supervision in MAC vs hard-coded parameter NMN models. We are now running MAC experiments with hardcoded control attention where the control scores are hard-coded such that some of the modules focus entirely on the LHS object and some focus entirely on the RHS object. This particular hard-coding strategy was a result of our qualitative understanding of successful learnt attention for MAC. We will elaborate on this in the paper.\\n7) We agree with R2\\u2019s comment that studying seq2seq learning in our setting would add an interesting new dimension to this work, and this is something we\\u2019ll consider for future work. \\n8) We also note R2\\u2019s feedback on strong language, presentation issues and a missing citation and will improve the paper in these aspects.\"}", "{\"title\": \"Response to Reviewer 1 (part 2 of 2)\", \"comment\": \"We would like to conclude our response by replying to the higher-level concern of R1 that the findings of our study may not \\u201cgeneralize to other more complex datasets where the network layout NMN might be more complex, the number of modules and type of modules might also be more\\u201d. While we fully agree that more complex datasets with more complex questions would bring new challenges, these are ones we purposely put aside (such as the general unavailability of ground-truth layouts for vanilla NMN, the need to consider an exponentially large set of possible layouts for Stochastic N2NMN, etc.) We believe that it is highly valuable for the research community to know what happens in the simple ideal case of SQOOP, where we can precisely test our specific generalization criterion. This knowledge (e.g. the superiority of trees to chains, the sensitivity of layout induction to initialization, the emergence of spurious parameterization in end-to-end learning), will guide researchers in choosing, designing and troubleshooting their models, as they now know what to expect modulo the optimization challenges that they may face. The field of language understanding with deep learning is not easily amenable to mathematical theoretical investigations and, with that in mind, rigorous minimalistic studies like ours are arguably very important. To some extent, they play the role of the former: they inform researcher intuition and lay a solid foundation for scientific dialogue. We purposely traded breadth for depth in our investigations, and we will go even deeper in the additional experiments that the upcoming revision will contain. 
We believe that the total of our results makes a complete conference paper. All that said, we would welcome specific suggestions of additional experiments that we could carry out in order to better validate our claims.\\n\\nWe hope that this response has clarified to R1 what our paper was insufficiently clear about. A new revision with additional experiments and fixed typos will soon be uploaded to OpenReview, and we hope that R1 takes this response and the changes that we will make to the paper into account.\"}", "{\"title\": \"Response to Reviewer 1 (part 1 of 2)\", \"comment\": \"We thank Reviewer 1 (R1) for their review and for asking interesting questions that helped us to understand where our paper may have been unclear. In our response below we will try our best to better explain our motivation for building and using SQOOP, as well as address R1\\u2019s other questions and concerns.\\n\\nA key concern that R1 expressed in their review is that we perform our study on the new SQOOP dataset, instead of using an available one (for example CLEVR or Abstract Scenes VQA). Though we appreciate the concern (it has spurred us to rethink and rephrase how we justify SQOOP) we still believe that the SQOOP dataset is the best choice for precisely testing our ideas. We kindly invite R1 to consider the following arguments in favor of doing so:\\n\\nThe goal of our study was to perform a thorough investigation of systematic generalization of language understanding models. To that end, we wanted a setup that is as simple as possible, while still being challenging by testing the ability to extend the relational reasoning learned to unseen combinations of seen words. We therefore choose to focus on simplest relational questions of the form XRY, as they also allow us to factor out challenges of discrete optimization in choosing the right module layout (required for Stochastic N2NMN). The simplicity is also useful because most models get to 100% accuracy on the training set of SQOOP, which allowed us to put aside any remaining optimization challenges and just focus our study on systematic generalization. \\nIn contrast, we find that the popular CLEVR dataset does not satisfy our requirements and if we did modify it sufficiently, we believe that it would only differ from SQOOP in the actual rendering and would not affect our conclusions. Though visually more complex, CLEVR has only 3 object types: cylinder, sphere and cube. Therefore, it would only allow for 3x4x3=36 different XRY relational questions. This is arguably not enough to sufficiently represent real world situations, and would definitely hinder our experiments. Specifically, we would not be able to sufficiently vary the difficulty of our generalization challenge when allowing 1,2,4,8 or 18 possible right hand-side objects in the questions (we clarify why splits with lower #rhs/lhs are more difficult than those with higher #rhs/lhs later in this response). Hence, we did not find the original CLEVR readily appropriate for our study. We could, in theory, introduce new object types to CLEVR and rerender a new dataset in 3D using Blender (the renderer that was used to create CLEVR) with different lighting conditions and partial occlusions. Though enticing, we believe that such a 3D version of SQOOP would lead to exactly same conclusions, because the vision required to recognize the objects in the scene would still be rather trivial. 
\nThe Yin and Yang dataset is clearly a valuable resource (and we thank the reviewer for the pointer), but we do not think it is readily suitable for the kind of study that we aim to perform. The dataset, to the best of our understanding, uses crowd-sourced questions (as the questions are taken from the Abstract VQA dataset, whose captions were entered by a human, according to the original VQA paper https://arxiv.org/pdf/1505.00468v6.pdf). Using crowd-sourced questions would not allow us to control our experiments at the level of precision that we wanted to achieve (e.g. we would not know the ground-truth layouts, it would be harder to construct splits of varying difficulty, etc.). As well, Abstract VQA contains only 50k scenes, and from our experience with SQOOP we know that this number would not be sufficient to rule out overfitting to training images as a factor. \n\nWe thank R1 for their constructive suggestion to consider NMNs that form a DAG. We are currently investigating a chain-structured NMN with shortcuts from the output of the stem to each of the modules, and we will soon report these additional results in the upcoming revision of the paper. We hope that these results, combined with further qualitative investigations we are conducting, will answer the legitimate question of R1 as to why Chain-NMN performs so much worse than Tree-NMN.\n\nWe acknowledge that the text of the paper can be improved to explain better why splits with lower #rhs/lhs are generally harder than those with higher #rhs/lhs, and we thank R1 for pointing this out. Our reasoning is that lower #rhs/lhs are harder because the training admits more spurious solutions in them. In such spurious regimes, models adapt to the specific lhs-rhs combinations from the training and cannot generalize to unseen lhs-rhs combinations (i.e. generalizing from questions about \u201cA\u201d in relation with \u201cB\u201d to \u201cA\u201d in relation to \u201cD\u201d (as in #rhs/lhs=1) is more difficult than generalizing from questions about \u201cA\u201d in relation to \u201cB\u201d and \u201cC\u201d to the same \u201cA\u201d in relation to \u201cD\u201d (as in #rhs/lhs=2)). We will update the paper to be more explicit in explaining these considerations.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank Reviewer 3 (R3) for their review and for clearly articulating their concerns regarding the paper. In our response below, we will clarify the design and results of our experiments as well as argue why we believe that these results should be of interest and are not, indeed, that predictable.\n\nR3 asked why the training performance of many models is 100% when they do not generalize and suggested that we perform a large number of training runs to see if occasionally the right solution is found. First, we agree that from the point of view of training there are many equally good solutions, and in fact, this is the main and the only challenge of SQOOP. We designed the task with the goal of testing which models are more likely to converge to the right solution, with which they can handle all possible combinations of objects, despite being trained only on a small subset of objects. We argued extensively in the introduction that such an ability to find the systematic solution despite other alternatives being available is highly desirable for language understanding approaches.
We fully agree with R3 that in investigations of whether or not a particular model converges to the right solution, repeating every experiment several times is absolutely necessary, and we would like to emphasize that we did repeat each experiment 3, 5, or 10 times (see \u201cdetails\u201d in Table 1 and the paragraph \u201cParametrization Induction\u201d on page 8). In most cases we saw consistent success or consistent failure, one exception being the parametrization induction results, where 4 out of 10 runs were successful (see Table 4, row 1 for the mean and the confidence interval). We hope that R3 takes this fact into account, and we will furthermore improve on the current level of rigor in the upcoming revision by repeating each experiment at least 5 times. \n\nWe are not sure if we fully understand the question \u201cCould you somehow test for if a given trained model will show systematic generalization?\u201d that R3 asked. We test the systematic generalization of a model by evaluating it on all SQOOP questions that were not present in the training set. We hope that this answers the question of R3, and we would be happy to engage in a further discussion regarding this and make edits to the paper if necessary. \n\nWe thank R3 for the suggestion to investigate the influence of model size and regularization on systematic generalization. It is indeed a very appropriate question in the context of our study; however, we note that there exists a wide variety of regularization methods and trying them all (and all their combinations) would be infeasible. In the upcoming update of the paper we will report the results of an ongoing ablation study for the MAC model, in which we vary the module size, the number of modules and experiment with weight decay. We would welcome any other specific experiment requests R3 may have.\n\nFinally, we would like to discuss the significance of our investigation and its results. While we agree that the results that we report may not shock the reader (although perhaps hindsight bias plays a role in what people find surprising or not after reading an article), we find them highly interesting and not at all easily predictable. Reading prior work on visual reasoning may lead a researcher to conclude, roughly speaking, that NMNs are a lost cause, since a variety of generic models perform comparably or better. In contrast, our rigorous investigation highlights their strong generalization capabilities and relates them to the specific design of NMNs. Notably, chain-structured NMNs were used in the literature prior to this work (e.g. in the model of Johnson et al., multiple filter_...[...] modules are often chained), so the fact that tree-structured NMNs show much stronger generalization was not obvious prior to this investigation and should be of high interest to the research community. Last but not least, an important part of our investigation (which the review does not discuss) is the systematic generalization analysis of popular end-to-end NMN versions, which shows how making NMNs more end-to-end makes them more susceptible to finding spurious solutions. As we argued in our conclusion, these findings should be of the highest importance to researchers working on end-to-end NMNs, which is a very popular research direction nowadays. \n\nWe conclude our response by announcing that an updated version of the paper, which among other things incorporates valuable suggestions by R3, will soon be uploaded to OpenReview.
We are currently performing a lot of additional experiments, the results of which will make our investigation even more rigorous and complete. We sincerely hope that R3 takes into account the arguments we have made here and the new results that we will publish soon and reevaluates our paper more positively.\"}", "{\"title\": \"Interesting, but please add more experiments like this\", \"review\": \"The paper explores how well different visual reasoning models can learn systematic generalization on a simple binary task. They create a simple synthetic dataset, involving asking if particular types of objects are in a spatial relation to others. To test generalization, they lower the ratio of observed combinations of objects in the training data. The authors show the result that tree structured neural module networks generalize very well, but other strong visual reasoning approaches do not. They also explore whether appropriate structures can be learned. I think this is a very interesting area to explore, and the paper is clearly written and presented.\\n\\nAs the authors admit, the main result is not especially surprising. I think everyone agrees that we can design models that show particular kinds of generalization by carefully building inductive bias into the architecture, and that it's easy to make these work on the right toy data. However, on less restricted data, more general architectures seem to show better generalization (even if it is not systematic). What I really want this paper to explore is when and why this happens. Even on synthetic data, when do or don't we see generalization (systematic or otherwise) from NMNs/MAC/FiLM? MAC in particular seems to have an inductive bias that might make some forms of systematic generalization possible. It might be the case that their version of NMN can only really do well on this specific task, which would be less interesting.\\n\\nAll the models show very high training accuracy, even if they do not show systematic generalization. That suggests that from the point of view of training, there are many equally good solutions, which suggests a number of interesting questions. If you did large numbers of training runs, would the models occasionally find the right solution? Could you somehow test for if a given trained model will show systematic generalization? Is there any way to help the models find the \\\"right\\\" (or better) solutions - e.g. adding regularization, or changing the model size? \\n\\nOverall, I do think the paper has makes a contribution in experimentally showing a setting where tree-structured NMNs can show better systematic generalization than other visual reasoning approaches. However, I feel like the main result is a bit too predictable, and for acceptance I'd like to see a much more detailed exploration of the questions around systematic generalization.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting observations but limited experiments; also doubtful how experiments and learning can be generalized to more complex tasks\", \"review\": \"Summary: The paper focuses on comparing the impact of explicit modularity and structure on systematic generalization by studying neural modular networks and \\u201cgeneric\\u201d models. The paper studies one instantiation of this systematic generalization for the setting of binary \\u201cyes\\u201d or \\u201cno\\u201d visual question answering task. 
They introduce a new dataset called SQOOP, in which the model has to answer questions that require spatial reasoning about pairs of randomly scattered letters and digits in the image. While the models are evaluated on all possible object pairs, they are trained on a smaller subset. They observe that NMNs generalize better than other neural models when an appropriate choice of layout and parametrization is made. They also show that current end-to-end approaches for inducing model layout or learning model parametrization fail to generalize better than generic models.\n\nPros:\n- The conclusions of the paper regarding the generalization ability of neural modular networks are timely given the widespread interest in this class of algorithms.\n- Additionally, they present interesting observations regarding how sensitive NMNs are to the layout of models. Experimental evidence (albeit on a specific type of question) of this behaviour will be helpful for the community and hopefully motivate them to incorporate regularizers or priors that steer the learning towards better layouts.\n- The authors provide a nice summary of all the models analyzed in Section 3.1 and Section 3.2.\n\nCons:\n- While the results on the SQOOP dataset are interesting, it would have been very exciting to see results on other synthetic datasets. Specifically, there are two datasets which are more complex and use templated language to generate synthetic datasets similar to this paper:\n- CLEVR environment or a modification of that dataset to reflect the form of systematic generalization the authors are studying in the paper.\n- Abstract Scenes VQA dataset introduced in \u201cYin and Yang: Balancing and Answering Binary Visual Questions\u201d by Zhang and Goyal et al. They provide a balanced dataset in which there are pairs of scenes for every question, such that the answer to the question is \u201cyes\u201d for one scene, and \u201cno\u201d for the other for the exact same question.\n- Perhaps because the authors study a very specific kind of question, they limit their analysis to only three modules and two structures (tree & chain). However, in the most general setting an NMN will form a DAG and it would have been interesting to see which forms of DAGs generalize better than others.\n- It is not clear to me how the analysis done in this paper will generalize to other more complex datasets where the network layout NMN might be more complex, the number of modules and type of modules might also be more. Because the results are only shown on one dataset, it is harder to see how one might extend this work to other forms of questions on slightly harder datasets.\n\nOther Questions / Remarks:\n- Given that the accuracy drop is very significant moving from NMN-Tree to NMN-Chain, is there an explanation for this drop?\n- While the authors mention multiple times that #rhs/#lhs = 1 and 2 are more challenging than #rhs/#lhs=18, they do not sufficiently explain why this is the case anywhere in the paper.\n- Small typo in the last line of section 4.3 on page 7.
It should say: This is in stark contrast with \\u201cNMN-Tree\\u201d \\u2026..\", \"Small typo in the \\u201cLayout induction\\u201d paragraph, line 6 on Page 7: \\u2026 and for $p_0(tree) = 0.1$ and when we use the Find module\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"This paper presents a targeted empirical evaluation of generalization in models\\nfor visual reasoning. The paper focuses on the specific problem of recognizing\\n(object, relation, object) triples in synthetic scenes featuring letters and\\nnumbers, and evaluates models' ability to generalize to the full distribution of\\nsuch triples after observing a subset that is sparse in the third argument. It\\nis found that (1) NMNs with full layout supervision generalize better than other\\nstate-of-the art visual reasoning models (FiLM, MAC, RelNet), but (2) without\\nsupervised layouts, NMNs perform little better than chance, and without\\nsupervised question attentions, NMNs perform better than the other models but\\nfail to achieve perfect generalization.\\n\\nSTRENGTHS\\n- thorough analysis with a good set of questions\\n\\nWEAKNESSES\\n- some peculiar evaluation and presentation decisions\\n- introduces *yet another* synthetic visual reasoning dataset rather than\\n reusing existing ones\\n\\nI think this paper would have been stronger if it investigated a slightly\\nbroader notion of generalization and had some additional modeling comparisons.\\nHowever, I found it interesting and think it successfully addresses the set of\\nquestions it sets out to answer. If it is accepted, there are a few things that\\ncan be done to improve the experiments.\\n\\nMODELING AND EVALUATION\\n\\n- Regarding the dataset: the proliferation of synthetic reasoning datasets is\\n annoying because it makes it difficult to compare results without downloading\\n and re-running a huge amount of code. (The authors have, to their credit, done\\n so for this paper.) I think all the experiments here could have been performed\", \"successfully_using_either_the_clevr_or_shapeworld_rendering_engines\": \"while the\\n authors note that they require a \\\"large number of different objects\\\", this\\n could have been handled by treating e.g. \\\"red circle\\\" and \\\"red square\\\" as\\n distinct atomic primitives in questions---the fact that redness is a useful\\n feature in both cases is no different from the fact that a horizontal stroke\\n detector is useful for lots of letters.\\n\\n- I don't understand the motivation behind holding out everything on the\\n right-hand side. For models that can't tell that the two are symmetric, why\\n not introduce sparsity everwhere---hold out some LHSs and relations?\\n \\n- Table 1 test accuracies: arbitrarily reporting \\\"best of 3\\\" for some model /\\n dataset pairs and \\\"confidence interval of 5\\\" for others is extremely\", \"unhelpful\": \"it would be best to report (mean / max / stderr) for 5. Also, it's\\n never stated which convidence interval is reported.\\n\\n- Table 1 baselines: why not run Conv+LSTM and RelNet with easier #rhs/lhs data?\\n\\n- How many MAC cells are used? This can have significant performance\\n implications. 
I think if you used their code out of the box you'll wind up\\n with way bigger structures than you need for this task.\\n\\n- I'm not sure how faithful the `find` module used here is to the one in the\\n literature, and one of the interesting claims in this work is that module\\n implementation details matter! The various Hu papers use an attentional\\n parameterization; the use of a ReLU and full convolution in Eq. 14 suggest\\n that that one here can pass around more general feature maps. This is fine but\\n the distinction should be made explicit, and it would be interesting to see\\n additional comparisons to an NMN with purely attentional bottlenecks.\\n\\n- Why do all the experiments after 4.3 use #rhs/lhs of 18? If it was 8 it would\\n be possible to make more direct comparisons to the other baseline models.\\n\\n- The comparison to MAC in 4.2 is unfair in the following sense: the NMN\\n effectively gets supervised textual attentions if the right parameters are\\n always plugged into the right models, while the MAC model has to figure out\\n attentions from scratch. A different way of structuring things would be to\\n give the MAC model supervised parameterizations in 4.2, and then move the\\n current MAC experiment to 4.3 (since it's doing something analogous to\\n \\\"parameterization induction\\\".\\n \\n- The top-right number in Table 4---particularly the fact that it beats MAC and\\n sequential NMNs under the same supervision condition---is one of the most\\n interesting results in this paper. Most of the work on relaxing supervision\\n for NMNs has focused on (1) inducing new question-specific discrete structures\\n from scratch (N2NMN) or (2) finding fixed sequential structures that work well\\n in general (SNMN and perhaps MAC). The result this paper suggests an\\n alternative, which is finding good fixed tree-shaped structures but continuing\\n to do soft parameterization like N2NMN.\\n\\n- The \\\"sharpness ratio\\\" is not super easy to interpret---can't you just report\\n something standard like entropy? Fig 4 is unnecessary---just report the means.\\n\\n- One direction that isn't explored here is the use of Johnson- or Hu-style\\n offline learning of a model to map from \\\"sentences\\\" to \\\"logical forms\\\". To the\\n extent that NMNs with ground-truth logical forms get 100% accuracy, this turns\\n the generalization problem studied here into a purely symbolic one of the kind\\n studied in Lake & Baroni 18. Would be interesting to know whether this makes\\n things harder (b/c no grounding signal) or easier (b/c seq2seq learning is\\n easier.)\\n\\nPRESENTATION\\n\\n- Basically all of the tables in this paper are in the wrong place. Move them\\n closer to the first metnion---otherwise they're confusing.\\n\\n- It's conventional in this conference format to put all figure captions below\\n the figures they describe. The mix of above and below here makes it hard to\\n attach captions to figures.\\n\\n- Some of the language about how novel the idea of studying generalization in\\n these models is a bit strong. The CoGenT split of the CLEVR dataset is aimed\\n at answering similar questions. The original Andreas et al CVPR paper (which btw\\n appears to have 2 bib entries) also studied generalization to structurally\\n novel inputs, and Hu et al. 
17 notes that the latent-variable version of this\\n model with no supervision is hard to train.\\n\\nMISCELLANEOUS\\n\\n- Last sentence before 4.4: \\\"NMN-Chain\\\" should be \\\"NMN-Tree\\\"?\\n\\n- Recent paper with a better structure-induction technique:\", \"https\": \"//arxiv.org/abs/1808.09942. Worth citing (or comparing if you have\\n time!)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
ByxZX20qFQ
Adaptive Input Representations for Neural Language Modeling
[ "Alexei Baevski", "Michael Auli" ]
We introduce adaptive input representations for neural language modeling which extend the adaptive softmax of Grave et al. (2017) to input representations of variable capacity. There are several choices on how to factorize the input and output layers, and whether to model words, characters or sub-word units. We perform a systematic comparison of popular choices for a self-attentional architecture. Our experiments show that models equipped with adaptive embeddings are more than twice as fast to train than the popular character input CNN while having a lower number of parameters. On the WikiText-103 benchmark we achieve 18.7 perplexity, an improvement of 10.5 perplexity compared to the previously best published result and on the Billion Word benchmark, we achieve 23.02 perplexity.
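The banded-embedding scheme this abstract describes can be sketched in a few lines of PyTorch. This is an illustrative reconstruction rather than the authors' code: the cutoffs, the capacity `factor`, and the class name `AdaptiveInput` are assumptions made for the example, and only the overall idea (a frequency-sorted vocabulary split into bands, narrower embeddings for rarer bands, and a linear projection of each band back to the model dimension) follows the abstract and the reviews below.

```python
import torch
import torch.nn as nn

class AdaptiveInput(nn.Module):
    """Illustrative adaptive input embedding: the vocabulary is assumed to be
    sorted by frequency and split into bands at `cutoffs`; each band gets an
    embedding whose width shrinks by `factor`, and every band is projected
    back to the shared model dimension `d_model`."""

    def __init__(self, vocab_size, d_model, cutoffs=(20000, 60000), factor=4):
        super().__init__()
        self.d_model = d_model
        self.cutoffs = list(cutoffs) + [vocab_size]
        self.embeddings = nn.ModuleList()
        self.projections = nn.ModuleList()
        prev = 0
        for i, cut in enumerate(self.cutoffs):
            band_dim = max(1, d_model // (factor ** i))  # rarer band -> narrower embedding
            self.embeddings.append(nn.Embedding(cut - prev, band_dim))
            self.projections.append(nn.Linear(band_dim, d_model, bias=False))
            prev = cut

    def forward(self, tokens):
        out = torch.zeros(*tokens.shape, self.d_model, device=tokens.device)
        prev = 0
        for emb, proj, cut in zip(self.embeddings, self.projections, self.cutoffs):
            mask = (tokens >= prev) & (tokens < cut)
            if mask.any():
                # shift ids into the band-local range before embedding
                out[mask] = proj(emb(tokens[mask] - prev))
            prev = cut
        return out

# toy usage: a 100k-word vocabulary embedded into a 512-dimensional model
layer = AdaptiveInput(vocab_size=100_000, d_model=512)
ids = torch.randint(0, 100_000, (2, 8))
print(layer(ids).shape)  # torch.Size([2, 8, 512])
```

Tying these matrices with an adaptive softmax output layer (the ADP-T variant discussed in the reviews below) would reuse the same per-band embeddings and projections for the output predictions.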
[ "Neural language modeling" ]
https://openreview.net/pdf?id=ByxZX20qFQ
https://openreview.net/forum?id=ByxZX20qFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1xIRD66yN", "SJgnkDkMCm", "ryloqL1G0Q", "Hke6aByfA7", "S1gSrFe-R7", "S1ePmKlW0Q", "SkxTKpJwa7", "H1xamJD0hQ", "HJxeCIut2Q", "H1lJ9-Pdn7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544570830225, 1542743779926, 1542743699094, 1542743493365, 1542682940900, 1542682911081, 1542024581100, 1541463845243, 1541142215690, 1541071239338 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1334/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1334/Authors" ], [ "ICLR.cc/2019/Conference/Paper1334/Authors" ], [ "ICLR.cc/2019/Conference/Paper1334/Authors" ], [ "ICLR.cc/2019/Conference/Paper1334/Authors" ], [ "ICLR.cc/2019/Conference/Paper1334/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1334/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1334/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1334/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"There is a clear consensus among the reviews to accept this submission thus I am recommending acceptance. The paper makes a clear, if modest, contribution to language modeling that is likely to be valuable to many other researchers.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"clear consensus to accept\"}", "{\"title\": \"Response to Reviewer #3\", \"comment\": \"The primary goal of the projections is to project all embeddings into the model dimension d so that we can have variable sized embeddings. Our goal was not to make the model model expressive. Compared to the rest of the model, these projections add very little overhead compared to the rest of the model. Doing without them is an interesting future direction though!\"}", "{\"title\": \"Response to Reviewer #2\", \"comment\": \"We thank the reviewer for the comments!\", \"q\": \"\\u201cADP and ADP-T runtimes were very close on WikiText-103 dataset but very different on Billion Word corpus (Table 3 and 4)\\u201d\", \"the_differences_in_training_time_are_due_to_the_size_of_the_models\": \"Weight tying saves a lot more parameters for the Billion Word model due to the larger vocab compared to the WikiText-103 models which have a smaller vocab. On WikiText-103, tying saves 15% of parameters (Table 3, ADP vs ADP-T, 291M vs 247M) and training time is reduced by about 13%. On Billion Word, tying saves 27% of parameters (Table 4) and training time is reduced by about 34%. The slight discrepancy may be due to multi-machine training for Billion Word compared to the single machine setup for WikiText-103.\", \"q1\": \"\\\"I am curious about what would you get if you use ADP on BPE vocab set?\\\"\\nWe tried adaptive input embeddings with BPE but the results were worse than softmax. This is likely because 'rare' BPE units are in some sense not rare enough compared to a word vocabulary. In that case, the regularization effect of assigning less capacity to 'rare' BPE tokens through adaptive input embeddings is actually harmful.\", \"q2\": \"\\\"How much of the perplexity reduction of 8.7 actually come from ADP instead of the transformer and optimization?\\\"\\nFor WikiText-103 (Table 3) we measured 24.92 on test with a full softmax model (a 5.2 PPL improvement over the previous SOTA). This corresponds to a Transformer model including our tuned optimization scheme. 
Adding tied adaptive input embeddings (ADP-T) to this configuration reduces this perplexity to 20.51, which is another reduction of 4.4 PPL.\"}", "{\"title\": \"Response to Reviewer #1\", \"comment\": \"We thank the reviewer for the comments!\", \"q\": \"\\u201cthoughts as to why full-softmax BPE is worse than adaptive softmax word level\\u201d\\nFull-softmax BPE is worse because we measure perplexity on the word-level. This involves multiplying the probabilities of the individual BPE tokens. BPE token-level perplexity itself is actually significantly lower than word-level PPL (around 21.5 for GBW and around 18 for WikiText-103 for the models presented in the paper) but the two are not comparable.\"}", "{\"title\": \"open sourcing\", \"comment\": \"We are planning to open source the code and pre-trained models in the future.\"}", "{\"title\": \"Updated version of the paper\", \"comment\": [\"We updated the paper with the following changes:\", \"Table 3 contains new (better) validation results for WikiText-103. Note that only the validation numbers are updated, the test results were not affected. As described in the paper, we form training examples by taking 512 contiguous words from the training data with no regard for sentence boundaries. Evaluation is the same except that we require blocks to contain complete sentences of up to 512 tokens. Previously reported validation numbers did not always contain complete sentences because samples were built the same way as during training. We have corrected this so that validation is conducted the same way as testing.\", \"We also added new (and better) Billion word results with a bigger model achieving 23.7 perplexity.\", \"We added a comparison to Merity et al. fixed size adaptive softmax to the Appendix (Table 6).\", \"Clarified discussion around tying and not tying projections/word embeddings.\"]}", "{\"comment\": \"Code and pre-trained models are available at http://anonymized.\\n\\nit is not available, would you fix it and I am very interested in your paper.\", \"title\": \"Hello the source code link in your paper is unaccessible?\"}", "{\"title\": \"Solid contribution to the language modeling literature\", \"review\": [\"The authors extend an existing approach to adaptive softmax classifiers used for the output component of neural language models into the input component, once again allowing tying between the embedding and softmax. This fills a significant gap in the language modeling architecture space, and the perplexity results bear out the advantages of combining adaptively-sized representations with weight tying. While the advance is in some sense fairly incremental, the centrality of unsupervised language modeling to modern deep NLP (ELMo, BERT, etc.) implies that perplexity improvements as large as this one may have meaningful downstream effects on performance on other tasks. Some things I noticed:\", \"One comparison that I believe is missing (I could be misreading the tables) is comparing directly to Merity et al.'s approach (adaptive softmax but fixed embedding/softmax dimension among the bands). Presumably you're faster, but is there a perplexity trade-off?\", \"The discussion/explanation of the differing performance of tying or not tying each part of the embedding weights for the different datasets is confusing; I think it could benefit from tightening up the wording but mostly I just had to read it a couple times. 
Perhaps all that's complicated is the distinction between embedding and projection weights; it would definitely be helpful to be as explicit about that as possible upfront.\", \"The loss by frequency-bin plots are really fantastic. You could also try a scatterplot of log freq vs. average loss by individual word/BPE token.\", \"Do you have thoughts as to why full-softmax BPE is worse than adaptive softmax word level? That goes against the current (industry) conventional wisdom in machine translation and large-scale language modeling that BPE is solidly better than word-level approaches because it tackles the softmax bottleneck while also sharing morphological information between words.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"simple model architecture changed and extensive experiments\", \"review\": \"This paper introduced a new architecture for input embeddings of neural language models: adaptive input representation (ADP). ADP allowed a model builder to define a set of bands of input words with different frequency where frequent words have larger embedding size than the others. The embeddings of each band are then projected into the same size. This resulted in lowering the number of parameters.\\n\\nExtensive experiments with the Transformer LM on WikiText-103 and Billion Word corpus showed that ADP achieved competitive perplexities. While tying weight with the output did not benefit the perplexity, it lowered the runtime significantly on Billion Word corpus. Further analyses showed that ADP gained performance across all word frequency ranges.\\n\\nOverall, the paper was well-written and the experiments supported the claim. The paper was very clear on its contribution. The variable-size input of this paper was novel as far as I know. However, the method, particularly on the weight sharing, lacked a bit of important background on adaptive softmax. The weight sharing was also needed further investigation and experimental data on sharing different parts.\\n\\nThe experiments compared several models with different input levels (characters, BPE, and words). The perplexities of the proposed approach were competitive with the character model with an advantage on the training time. However, the runtimes were a bit strange. For example, ADP and ADP-T runtimes were very close on WikiText-103 dataset but very different on Billion Word corpus (Table 3 and 4). The runtime of ADP seemed to lose in term of scaling as well to BPE. Perhaps, the training time was an artifact of multi-GPU training.\", \"questions\": \"1. I am curious about what would you get if you use ADP on BPE vocab set?\\n2. How much of the perplexity reduction of 8.7 actually come from ADP instead of the transformer and optimization?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Reasonable increment from Grave et al. (2017)\", \"review\": \"This article presents experiments on medium- and large-scale language modeling when the ideas of adaptive softmax (Grave et al., 2017) are extended to input representations.\\n\\nThe article is well written and I find the contribution simple, but interesting. It is a reasonable and well supported increment from adaptive softmax of Grave et al. 
(2017).\", \"my_question_is_a_bit_philosophical\": \"The only thing which I was concerned about when reading the paper is projection of the embeddings back to the d-dimensional space. I understand that for two matrices A and B we have rank(AB) <= min(rank(A), rank(B)), and we are not making the small-sized embeddings richer when backprojecting to R^d, but have you thought about how it would be possible to avoid this step and keep the original variable-size embeddings?\\n\\nReferences\\nJoulin, A., Ciss\\u00e9, M., Grangier, D. and J\\u00e9gou, H., 2017, July. Efficient softmax approximation for GPUs. In International Conference on Machine Learning (pp. 1302-1310).\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SJ4Z72Rctm
Composing Entropic Policies using Divergence Correction
[ "Jonathan J Hunt", "Andre Barreto", "Timothy P Lillicrap", "Nicolas Heess" ]
Deep reinforcement learning (RL) algorithms have made great strides in recent years. An important remaining challenge is the ability to quickly transfer existing skills to novel tasks, and to combine existing skills with newly acquired ones. In domains where tasks are solved by composing skills this capacity holds the promise of dramatically reducing the data requirements of deep RL algorithms, and hence increasing their applicability. Recent work has studied ways of composing behaviors represented in the form of action-value functions. We analyze these methods to highlight their strengths and weaknesses, and point out situations where each of them is susceptible to poor performance. To perform this analysis we extend generalized policy improvement to the max-entropy framework and introduce a method for the practical implementation of successor features in continuous action spaces. Then we propose a novel approach which, in principle, recovers the optimal policy during transfer. This method works by explicitly learning the (discounted, future) divergence between policies. We study this approach in the tabular case and propose a scalable variant that is applicable in multi-dimensional continuous action spaces. We compare our approach with existing ones on a range of non-trivial continuous control problems with compositional structure, and demonstrate qualitatively better performance despite not requiring simultaneous observation of all task rewards.
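The composition question at the heart of this abstract (and of the reviewer discussion below) can be summarised schematically. The notation follows the thread (Q^CO for the optimistically composed value, C_b for the divergence correction), but the exact definition of the correction term lives in the paper, so treat this as a hedged sketch rather than the authors' formal statement:

```latex
% Two base tasks with optimal soft action-values Q_1 and Q_2; the transfer task
% has reward r_b = b r_1 + (1 - b) r_2. The "compositionally optimistic" value
%   Q^{CO}_b(s, a) = b Q_1(s, a) + (1 - b) Q_2(s, a)
% over-estimates the true max-entropy value Q^*_b. The divergence correction
% subtracts a learned term C_b accounting for the discounted future
% disagreement between the two base policies:
\[
Q^{DC}_b(s, a) \;=\; b\,Q_1(s, a) + (1 - b)\,Q_2(s, a) \;-\; C_b(s, a).
\]
% When the base policies agree (low divergence), C_b is small and the optimistic
% composition is already close to optimal; when they disagree, C_b is what
% restores optimality on the transfer task.
```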
[ "maximum entropy RL", "policy composition", "deep rl" ]
https://openreview.net/pdf?id=SJ4Z72Rctm
https://openreview.net/forum?id=SJ4Z72Rctm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJxJD6AgeV", "B1lKIlfJlE", "S1xE-pEnkV", "ByglHlE31N", "ByeCGb_vRX", "Byl3o8kR6Q", "rJgcmIkRaQ", "B1lgxSkATX", "BJgmOxy0am", "SyeAsTCaam", "BkgRW5786Q", "rkxl5ZFc2m", "r1l45Kv92X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544772950793, 1544654928637, 1544469755871, 1544466487883, 1543106837555, 1542481571821, 1542481441749, 1542481127759, 1542479978742, 1542479269892, 1541974534499, 1541210504476, 1541204363829 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1333/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1333/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1333/Authors" ], [ "ICLR.cc/2019/Conference/Paper1333/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1333/Authors" ], [ "ICLR.cc/2019/Conference/Paper1333/Authors" ], [ "ICLR.cc/2019/Conference/Paper1333/Authors" ], [ "ICLR.cc/2019/Conference/Paper1333/Authors" ], [ "ICLR.cc/2019/Conference/Paper1333/Authors" ], [ "ICLR.cc/2019/Conference/Paper1333/Authors" ], [ "ICLR.cc/2019/Conference/Paper1333/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1333/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1333/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"Multiple reviewers had concerns about the clarity of the presentation and the significance of the results.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Reject\", \"title\": \"Multiple reviewers had concerns about the clarity of the presentation and the significance of the results.\"}", "{\"title\": \"Response to Response\", \"comment\": \"Thank you for the updates and clarifications. My rating remains the same. I consider this paper to have enough novelty to be interesting. A limiting factor is that ICLR may not be the best venue for this work.\"}", "{\"title\": \"Further clarification on optimality\", \"comment\": \"Thank you for your comment and the opportunity for further clarification.\\n\\nWe cannot change the submission here at the moment but we will clarify the task and are happy to make changes when we are next able to. We currently state that \\\"One approach to this challenge is via the composition of task rewards. Specifically, we are interested in the following question: If we have previously solved a set of tasks with similar transition dynamics but different reward functions, how can we leverage this knowledge to solve new tasks which can be expressed as a convex combination of those rewards functions?\\\" and formalize this in the methods. We agree that it is a good idea to emphasize this point state this earlier in the manuscript and will incorporate this change.\\n\\nWhen we say optimal, we mean the \\u201ctrue\\u201d optimal policy under the maximum entropy objective on the transfer task ... that is the \\\"true optimal policy\\\" under this objective. Indeed, one of the exciting findings of this work is that, with the divergence correction method, it is possible to zero-shot recover the \\\"true\\\" optimal policy under the max-ent objective.\\n\\nTo understand why this is possible, note that: (1) we use the max-ent objective so that the base policies always assign some likelihood to all actions and, importantly and (2) the transfer tasks are always convex combinations of the base task rewards. 
The divergence correction term captures exactly the difference between the existing composed value function (Q^CO) and the optimal value function for the transfer task (Q^*). The innovation here is that we do not need any experience on the transfer task to learn the divergence correction, hence we can zero-shot transfer. The divergence corrected policy with this correction term is not simply a product of existing policies, and in some cases takes actions that are unlikely under the base policies (Figure 2 provides an example of this). Appendix A2 contains the formal conditions and proof for this optimality claim (in particular, we assume we know the optimal base policies and action-values along with the correction term C_b). Theorem 3.2 is specifically stating that under these assumptions we CAN guarantee that we recover the true optimal value function during and we have provided a proof there.\"}", "{\"title\": \"About optimality\", \"comment\": \"Thank you for explaining. I think it would be helpful if you pasted \\\"We are interested in 0-shot transfer where we combine existing policies trained on other tasks to provide a solution for a new task.\\\" as the first line of your introduction.\\n\\nThe policies that can be composed are fixed {\\\\pi_1, \\\\pi_2, ... , \\\\pi_n}. This means that by optimal policy, you mean the best policy that can be expressed by the value functions of these n policies. This may not be the true optimal value function. This is an important detail that should be clearly defined. For example, if there are m > n actions and each policy corresponds to always choosing one single action. To guarantee that you can recover the true optimal value function, more assumptions are needed about the base set of policies.\"}", "{\"title\": \"Please take a second look\", \"comment\": \"Dear Reviewers,\\n\\nThank you for you time looking our submission.\\n\\nWe've tried to respond the issues you raise, including significant improvements to the paper such as rewriting the algorithm section and adding an additional experiment, along with many clarifications.\\n\\nWe'd really appreciate if you could take another look and reevaluate after our changes.\"}", "{\"title\": \"Response to AnonReviewer4 minor comments (b)\", \"comment\": \"> 5. In Theorem 3.1, the authors should introduce Q^1, Q^2, ... , Q^n and define the\\n> policies in terms of the action-value functions. Also, the statement of this theorem is not self\\n> contained, what is the reward function of the MDP? The proof below should be\\n> called a proof sketch.\\n\\nIn this theorem, all policies and action-values Q are for the same MDP (i.e. all action-values are for the same reward function but different policies). In this theorem, the policies are not, in general, defined by their action-values Q (e.g. its not required that the policies are Boltzmann policies). The action-values do need to be the true action-values, so they are defined by their corresponding policies. We\\u2019ve tried to add a clarifying note before the theorem (note this statement of the GPI theorem is isomorphic to the standard RL GPI theorem in Barreto et al., 2017). There is no other constraints on the MDP, any valid reward function for an MDP will be valid.\\n\\nCould you explain why the proof for theorem 3.1 (max-ent GPI) is merely a sketch?. As referenced in the main text, the proof is in appendix A, and 1.5 pages long. Is there a particular step you feel is unclear or missing? 
We do make use of standard definitions and Soft Q iteration without explanation, but we reference prior work that defines these.\\n\\n> 6. The paper mentions that extending to multiple tasks is possible. Is it trivial?\\n> What is the basic idea? It seems straightforward but it might be helpful to explicitly\\n> state the idea.\\n\\nMax-Ent GPI can be extended to multiple policies straightforwardly as is. The theorem is given for n policies. However, in the original submission we derived the divergence correction only for pairs of base policies.\\n\\nAs suggested by reviewer 2, we have now also added a derivation of the DC term for n policies. Deriving the correction term for more than 2 policies is straightforward, but it may become more difficult to learn for large n. This is one advantage of GPI (no additional complexity is required to make use of n policies) over DC. We had included a discussion of this point in an earlier draft of the paper but we had to remove it due to space constraints. We\\u2019ve now added a brief discussion in the appendix where we derive the n-policy DC term.\\n\\n> 7. In Theorem 3.2, how was C derived? Please add some commentary explaining the\\n> conceptual idea.\\n\\nWe address this above. We have re-structured and edited the text to motivate theorem 3.1 prior to introducing it.\\n\\n> 8. In Table 1, what is f(s, a|b)? I don't see where this was defined?\\n\\nThis notation is only used for the label and is explained in the caption. We\\u2019ve modified the caption to explicitly denote this f(s, a|b) to make this clearer. This refers to fact that CondQ and DC both require learning some function that is conditional on b (in DC this is C^\\\\infty_b, in CondQ this is just directly Q(s, a|b).\\n\\n> 9. CondQ is usually referred to as UVFA in the literature.\\n\\nWe debated whether to refer to this approach as CondQ or UVFA, when we introduce the term CondQ (a contraction of conditional Q) we cite Schaul et al (i.e. the source of the UVFA terminology). However, we felt that many people would understand UVFA\\u2019s to refer to a specific architecture (with a dot-product between the task embedding and state embedding to compute value), and the idea of conditioning the value function on a task variable predates UVFAs, so we felt that CondQ was a more neutral term in this context. We certainly acknowledge UVFAs as a recent demonstration of the scalability of this idea and cite them.\\n\\n> 10. Section 3 really needs a conclusion statement.\\n\\nWe have added a conclusion statement.\\n\\n> 11. Section 4 is very unclear and hard to follow.\\n\\nWe agree. We have revised this section substantially. We welcome further of feedback.\\n\\n> 12. In figure 1f, what is LTD? It's never defined. I'm guessing it's DC.\\n\\nThat is correct. We originally referred to this method as LTD. We have now fixed this. Apologies.\\n\\n> 13. All of the figures are too small and some are not clear in black and white.\\n\\nIt has been challenging to fit the figures and display all the results concisely. In order allow for discussion we have uploaded a new version of this work, but we will continue to iterate on making the figures clearer.\\n\\nWe thank the reviewer for their time and extensive feedback. We hope we have addressed many of your concerns in this response and revision, and clarified many points of this work. 
We hope in light of these clarifications you may consider amending your rating.\"}", "{\"title\": \"Response to AnonReviewer4 minor comments (a)\", \"comment\": \"Firstly, we apologize for the lengthy response, but we wanted to ensure we addressed all of your comments.\\n\\n> Minor Comments:\\n> 1. In the abstract, \\\"requiring less information\\\" is very imprecise.\\n> Are you referring to sample complexity?\\n\\nThis is addressed in the section of the response above to the major concerns.\\n\\n> 2. In the introduction, \\\"can consistently achieve good performance\\\" is imprecise.\\n> What is the notion of near-optimality? What does consistent mean?\\n> Having experimental results on 3 tasks doesn't seem to be enough to me to justify this claim.\\n\\nWe have clarified above that we are using the standard notion of optimality in RL, but during the transfer task. We\\u2019ve also edited the text to make this clearer. Our claims regarding performance are based on two items: as discussed above, our theoretical results show that DC recovers the optimal policy (if all terms are known exactly). Our empirical results show on 6 (original submission 5) tasks DC performing well.\\n\\nFor our empirical evaluation on continuous control tasks we further focused experiments on the case that emerged as the most difficult in the theoretical analysis and the tabular domains, namely the case when the desired transfer behavior is distinct from that of any of the base policies.\\n\\nBesides the \\u201ctricky\\u201d tasks there are two other extremes we considered in the tabular case.\\n\\nDC method is a correction term to CO, so in the case where CO performs well this implies the correction term to DC is negligible, and we\\u2019d expect DC to perform well too (as outlined above, our theoretical results prove DC is optimal on the transfer task with the assumption that all components are exact, so here we mean practical performance with function approximators).\\n\\nThe other extreme is where the two tasks are completely incompatible. In the tabular case, as expected, DC performs well but, one could imagine this task could be challenging for DC in practice, since it implies that the correction term must be large, and potentially challenging to learn. To address this we have added an additional task to the appendix (supplementary figure 5), examining this situation. We find that DC performs as well as GPI, slightly better than CondQ and still much better than CO in this situation.\\n\\nWe use the term \\u201cnear optimal\\u201d in the control tasks to indicate that DC transfer policy trajectories are qualitatively different from the other approaches on the tricky task (i.e. they go towards the optimal joint solution to the task). We\\u2019ve now restated this as ``qualitatively better performance,\\u2019\\u2019 to be more precise, since we don\\u2019t have access to the true optimal solution.\\n\\nIn conclusion, Our theoretical analysis demonstrates that DC recovers the optimal policy during transfer (under the strong assumption that the underlying action-value functions, and of C are known exactly; this analysis is supported by the tabular results). Empirically we have demonstrated that across the most challenging form of transfers tasks (``tricky\\u2019\\u2019) in a variety of bodies, DC recover qualitatively different and better policies than other approaches. 
Finally, our new addition shows in another type of transfer situation (incompatible) tasks, DC performs as well as GPI, and better than CondQ and OC.\\n\\n> 3. In the introduction (and rest of the paper), please don't call Haarnoja et al.'s\\n> approach optimistic. Optimism already has another widely used meaning in\\n> RL literature. Maybe call it \\\"Uncorrected\\\".\\n\\nThank you for pointing out optimism has another, related, meaning in RL. We do like the term optimism for this approach as it implies the sign of the \\u2018\\u2019uncorrected\\u2019 Q (i.e. it always over-estimates the value). For clarity we have now termed this \\u2018Compositional Optimism\\u2019 and used CO throughout, which will hopefully avoid confusion with optimism in the exploration sense.\\n\\n> 4. In section 2.2, the authors introduce \\\\pi_1, \\\\pi_2, ... , \\\\pi_n but never actually use\\n> that notation. This section does not clearly explain how GPI works.\\n\\nUnfortunately we are limited by the page limit. We have attempted to clarify this somewhat within the space constraints. The set of policies $\\\\pi_1, \\u2026$ are used to define the action-values in eq (2). We also try and show in section 3 how we are making use of max-ent GPI in our setup (and of course rely on reference to Barreto et al., for a fuller explanation).\\n\\n\\nGPI assumes that for a given reward function we have access to the action value functions associated with policies a set pi_1, pi_2, .... In this case we could act according to any of the policies (and would achieve the reward indicated by the associated action value function), but GPI suggests that we can achieve a higher value in all states by acting according to the GPI policy which performs a max over the individual policies' value functions.\"}", "{\"title\": \"Response to AnonReviewer 4 major concerns\", \"comment\": \"> their approximate algorithm in two continuous control problems.\", \"to_clarify\": \"our submission included results for 4 continuous control tasks: 2-D point mass, 5 DOF planar manipulator, 3 DOF jumping ball and 8 DOF ant. We have now added an additional ant task in the appendix (see response later).\\n\\n> While this paper has some interesting ideas \\u2026 these ideas are not properly motivated.\\n> The main problem seems to be clarity. One big problem is that the paper never\\n> defines the notion of a notion of optimality (or near-optimality).\\n\\nWe are interested in 0-shot transfer where we combine existing policies trained on other tasks to provide a solution for a new task. Our notion of optimality is hence with respect to the performance of the optimal policy for the new task. In that sense a policy composition for the transfer task is optimal if it achieved the same performance as the optimal policy for the transfer task. When we say near-optimal, we simply mean the return is nearly that of an optimal policy. We have tried to clarify this notion in our formalism of the transfer problem (section 2.1).\\n\\nFor the tabular tasks we can solve exactly (within numerical precision) for the optimal policy. For the control tasks, we do not have access to the true optimal policy, so we compare the returns of different compositional approaches with one another. We do, however, know roughly the optimal trajectory (i.e. in the \\u201ctricky\\u201d tasks this corresponds to heading towards the upper right square). 
We have edited the text to try and clarify these claims.\\n\\n> \\u2026 considering that the DC algorithm is one of the main contributions of the paper\\n> it is barely motivated.\\n\\n> Theorem 3.2 is presented with almost no explanation about how DC was derived...\\n> why they believe DC should perform well in these cases. \\n\\nWe have added a motivating paragraph before the introduction of DC and have edited this section to provide an intuitive notation of DC before introducing it formally. In short, we want a method that is, in principle (if all components are known exactly) optimal.\\n\\nTheorem 3.2 is our basis for believing that DC should perform well. It shows that, if all of the terms of Q^{DC} are known exactly, the resulting policy is the optimal policy on the transfer task. \\n\\nThe intuition is that the correction term to (compositional optimism) CO is (roughly) the expected divergences between policies along their trajectories. If the two policies have low-divergences, the CO assumption is approximately correct. If they have high-divergences, this means the policies don\\u2019t agree about what actions to take, and thus cannot both achieve their expected returns simultaneously.\\n\\n> The authors make the unjustified claim in the abstract that their approach has \\\"near-optimal\\n> performance and requires less information\\\"...\\n\\nFirstly, we agree the claim regarding \\u2018less information\\u2019 was ambiguous. The information was not data efficiency (all methods here are trained using the same data). We do not refer here to sample complexity (all methods are trained on the same amount of data, and tested on 0-shot transfer). What is meant by information here is that, under the formalism for transfer used here, GPI and CondQ/UVFA\\u2019s require that, while learning policies for each task i, the rewards on all tasks be observed (i.e. \\\\phi is observable). Compositional Optimism and DC do not require access to this information, hence the claim of less information. We have now modified the abstract to explicitly state ``despite not requiring simultaneous observation of all task rewards.\\u2019\\u2019\\n\\nAs discussed in our response above, we have edited the text to clarify the notion of optimality, which is the standard RL definition (a policy is optimal if it has an expected return the same as the optimal policy for task), but on the compositional transfer task.\\n\\nThe DC theorem, like many RL results, makes the claim of optimality when the components are known exactly. That is, we state in the theorem conditions that $Q^i$, $Q^j$ are the action-value functions of the optimal policies of the base tasks and the correction term C_b^\\\\infty is the solution to the given fixed point. Thus, in some sense, the known hardness results are inside this assumption that we known optimal solutions to the base tasks and the need to know C everywhere. Again, we want to highlight that prior methods do not recover the optimal transfer policy, even when assuming all of their components are known exactly. In the tabular case, where we can (within numerical precision) compute all the components, we do indeed see DC recovers the optimal policy in all tasks we considered.\\n\\nPractically, our experimental results show that DC does generate better transfer policies than GPI and CO. 
In these experiments, as with almost all DeepRL, of course we do not have access to the exact action-value\\u2019s, but that approximating the DC correction term can result in qualitatively better transfer policies.\"}", "{\"title\": \"Response to anon reviewer 3\", \"comment\": \"> ... neither of them seems to contain *significant* novelty. The derivations of the theoretical results\\n> (Theorem 3.1 and 3.2) are also relatively straightforward. The experiment results in Section 5 are interesting.\\n\\nWhile we acknowledge that our results build on prior work we feel that the reviewer\\u2019s assessment undervalues our contributions. To clarify: \\n\\nSuccessor Representations/Features have only ever been used in small, discrete action spaces. To the best of our knowledge, we are the first to provide a method for using successor features in continuous action spaces.\\nThe extension of the GPI theorem to the max-ent RL objective is a non-trivial extension. This brings an important and general idea on value iteration to a new, and important RL framework. It shows that there is a simple, principled way to combine max-ent policies in a way that ensures the result composition at worst retains the performance of the best policy.\\n\\nThe derivation of theorem 3.2 is, as we state in the paper, similar to the approach used in Haarnoja et al., 2018. However, there is an important conceptual leap to show that this term can be practically learned and used to improve performance. Indeed the final lines of Haarnoja are \\u201cAn interesting avenue for future work would be to further study the implications of this bound on compositionality. For example, can we derive a correction that can be applied to the composed Q-function to reduce bias? Answering such questions would make it more practical to construct new robotic skills out of previously trained building blocks, making it easier to endow robots with large repertoires of behaviors learned via reinforcement learning.\\u201d We have taken a first step in this direction and demonstrated that this correction term can be learned in a practical manner.\\n\\nAdditionally, we introduced a very simple heuristic, DC-Cheap. In many cases, this heuristic suffices to get similar performance to DC.\\n\\nFinally, we also introduce an algorithm for practically using these methods, including, to our knowledge the first example of online zero-shot transfer (in the context defined here) in continuous action spaces. The only other work we are aware of that does this (Haarnoja et al., 2018), requires an offline retraining of the sampler.\\n\\n>1) Section 4 of the paper is not well written and is hard to follow.\\n\\nWe have re-written section 4. Hopefully it is clearer now. We have also edited the rest of the paper to improve readability. We welcome additional feedback.\\n\\n> 2a) ... notation \\\\delta has not been defined.\\n\\nThis was the Dirac delta to communicate the GPI policy is deterministic (in this prior work). We have modified this to simplify the notation by explicitly stating the policy is deterministic.\\n\\n>2b) ... notation V_{\\\\theta'_V} and V'_{\\\\theta_V} have been used. I do not think \\n>either of them has been defined. \\n\\nV_{\\\\theta\\u2019_V} is the target network for V_{\\\\theta_V}. This was defined in the paragraph below eq 14. We have now re-phrased this to make it more clear this is the definition. V\\u2019_{\\\\theta_V} was a notational mistake, it should have been V_{\\\\theta\\u2019_V}. 
Thank you for pointing that out.\\n\\n>Pros:\\n\\n>1) The proposed approaches and the experiment results are interesting.\\n\\n>Cons:\\n\\n>1) Neither the algorithm design nor the analysis has sufficient novelty, compared to the typical standard\\n> of a top-tier conference.\\n\\nWe addressed this point in detail above, where we enumerated what we believe are the contributions of this work. At the risk of belaboring the point: many papers are simply a more scalable algorithm, or novel theoretical ideas. Here we introduced 2 new theoretical ideas, and a new practical algorithm, is to the best of our knowledge the first demonstration of online zero-shot transfer for task composition in continuous action spaces.\\n\\n>2) The paper is not very well written, especially Section 4.\\n\\nWe have edited the paper throughout to provide additional clarify and substantially revised section 4 and are happy to make further revisions. Our paper builds upon and extends several lines of work and it has been a challenge to introduce and discuss all relevant concepts in the limited space available. \\n\\n>3) For Theorem 3.2, why not prove a variant of it for the general multi-task case?\\n\\nThank you for the suggestion. We have added a proof in the appendix for the n-policy case of Theorem 3.2. Due to space constraints we have included this in the appendix. he derivation is very similar to the two policy case which we discuss in the main text..\\n\\n>4) It would be better to provide the pseudocode of the proposed algorithm in the main body of > the paper.\\n\\nAs part of our revision of section 4, we have moved this algorithm into the main text (and modified it to include all the losses.\\n\\nWe thank the reviewer for their time and feedback. We hope we have been able to address many of your concerns and you may reconsider your rating in light of our changes. We welcome additional feedback.\"}", "{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for their time and feedback. We address the minor corrections below.\\n\\n>- Figure 1.e: Why does the Optimistic transfer have high regret when the caption says that \\\"on the LU task, optimistic transfers well\\\"\\n\\nOptimistic transfers much better than GPI on this task. The regret scale is logarithmic, so while Optimistic (red) does not recover the optimal policy, it has a regret approximately 10^4 lower than GPI (blue). We\\u2019ve edited the caption to make this clearer. Note in response to another reviewer we have renamed Optimistic to Compositional Optimism (CO) to avoid confusion.\\n\\n>- Figure 1.i states \\\"Neither GPI nor the optimistic policies (j shows GPI, by the Optimistic policy is similar)\\\" but Figure1.j is labeled DC T, is this a typo?\\n\\nApologies, that is indeed a typo. We\\u2019ve now reworded this caption.\\n\\n>- Figure 2: Many typos: \\\"(b) Finger position at the en (of the trajectoriesstard ting from randomly sampled start states)\\\"\\n\\nThank you for pointing this out, this has been fixed.\\n\\nIn general, we made a number of minor edits throughout, and particularly revised section 4 to improve the readability of the paper.\"}", "{\"title\": \"Need clearer motivation for algorithm. Lots of little issues need fixing\", \"review\": \"The authors introduce Divergence Correction (DC) for the problem of transfer learning by composing policies. There approach builds on GPI with a maximum entropy objective. 
They also prove that DC solves for the max-entropy optimal interpolation between two policies and derive a practical approximation for this algorithm. They provide experimental results in a gridworld problem and study their approximate algorithm in two continuous control problems.\\n\\nWhile this paper has some interesting ideas (combining GPI with a Max-Entropy objective and DC), these ideas are not properly motivated. The main problem seems to be clarity. One big problem is that the paper never defines the notion of a notion of optimality (or near-optimality). Also, considering that the DC algorithm is one of the main contributions of the paper it is barely motivated. Theorem 3.2 is presented with almost no explanation about how DC was derived. Why do the authors believe that DC is a good idea on a conceptual level? It's very interesting that the paper presents cases where previous approaches (Optimistic and GPI) don't perform well. But the authors don't explain why they believe DC should perform well in these cases. \\n\\nThe authors make the unjustified claim in the abstract that their approach has \\\"near-optimal performance and requires less information\\\". I say this is unjustified because they only try this approach on three benchmarks. In addition, there should be situations where DC also performs poorly since there are known hardness results for solving MDPs. Admittedly, those results may not apply if the authors are making assumptions that are not being clearly discussed in the paper.\", \"minor_comments\": \"1. In the abstract, \\\"requiring less information\\\" is very imprecise. Are you referring to sample complexity?\\n2. In the introduction, \\\"can consistently achieve good performance\\\" is imprecise. What is the notion of near-optimality? What does consistent mean? Having experimental results on 3 tasks doesn't seem to be enough to me to justify this claim.\\n3. In the introduction (and rest of the paper), please don't call Haarnoja et al.'s approach optimistic. Optimism already has another widely used meaning in RL literature. Maybe call it \\\"Uncorrected\\\".\\n4. In section 2.2, the authors introduce \\\\pi_1, \\\\pi_2, ... , \\\\pi_n but never actually use that notation. This section does not clearly explain how GPI works.\\n5. In Theorem 3.1, the authors should introduce Q^1, Q^2, ... , Q^n and define the policies in terms of the action-value functions. Also, the statement of this theorem is not self contained, what is the reward function of the MDP? The proof below should be called a proof sketch.\\n6. The paper mentions that extending to multiple tasks is possible. Is it trivial? What is the basic idea? It seems straightforward but it might be helpful to explicitly state the idea.\\n7. In Theorem 3.2, how was C derived? Please add some commentary explaining the conceptual idea.\\n8. In Table 1, what is f(s, a|b)? I don't see where this was defined?\\n9. CondQ is usually referred to as UVFA in the literature.\\n10. Section 3 really needs a conclusion statement.\\n11. Section 4 is very unclear and hard to follow.\\n12. In figure 1f, what is LTD? It's never defined. I'm guessing it's DC.\\n13. All of the figures are too small and some are not clear in black and white.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good Paper\", \"review\": \"This paper proposes using Divergence Correction to compose max ent policies. 
Based on successor features, this method corrects the optimistic bias of Haarnoja 2018. The motivation for composing policies is sound. This paper addresses the problem statement where policies must accomplish different linear combinations of different reward functions. This method does not require observation the reward weights.\\n\\nAs shown in the experiments, this method outperforms or equally performs past work in both tabular and continuous environments. The paper is well written and discusses prior work in an informative manner. The tabular examples provide good visualizations of why the methods perform differently.\", \"minor\": [\"Figure 1.e: Why does the Optimistic transfer have high regret when the caption says that \\\"on the LU task, optimistic transfers well\\\"\", \"Figure 1.i states \\\"Neither GPI nor the optimistic policies (j shows GPI, by the Optimistic policy is similar)\\\" but Figure1.j is labeled DC T, is this a typo?\", \"Figure 2: Many typos: \\\"(b) Finger position at the en (of the trajectoriesstard ting from randomly sampled start states)\\\"\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting work, but need further improvement\", \"review\": \"-- Contribution, Originality, and Quality --\\n\\nThis paper has presented two approaches for transfer learning in the reinforcement learning (RL) setting: max-ent GPI (Section 3.1) and DC (Section 3.2). The authors have also established some theoretical results for these two approaches (Theorem 3.1 and 3.2), and also demonstrated some experiment results (Section 5).\\n\\nThese two developed approaches are interesting. However, based on existing literature (Barreto et al. 2017; 2018, Haarnoja et al. 2018a), neither of them seems to contain *significant* novelty. The derivations of the theoretical results (Theorem 3.1 and 3.2) are also relatively straightforward. The experiment results in Section 5 are interesting.\\n\\n-- Clarity --\\n\\nI have two major complaints about the clarity of this paper. \\n\\n1) Section 4 of the paper is not well written and is hard to follow.\\n\\n2) Some notations in the paper are not well defined. For instance\\n\\n2a) In page 3, the notation \\\\delta has not been defined.\\n2b) In page 6, both notation V_{\\\\theta'_V} and V'_{\\\\theta_V} have been used. I do not think either of them has been defined. \\n\\n-- Pros and Cons --\", \"pros\": \"1) The proposed approaches and the experiment results are interesting.\", \"cons\": \"1) Neither the algorithm design nor the analysis has sufficient novelty, compared to the typical standard of a top-tier conference.\\n\\n2) The paper is not very well written, especially Section 4.\\n\\n3) For Theorem 3.2, why not prove a variant of it for the general multi-task case?\\n\\n4) It would be better to provide the pseudocode of the proposed algorithm in the main body of the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
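To make the trade-off between compositional optimism and policy divergence discussed in the thread above concrete, here is a self-contained numerical toy. It is not the paper's algorithm: the Q-values, the temperature, and the use of a plain KL divergence as the disagreement signal are choices made purely for illustration.

```python
import numpy as np

def boltzmann(q, alpha=1.0):
    """Max-entropy (Boltzmann) policy induced by soft action-values q."""
    z = (q - q.max()) / alpha
    p = np.exp(z)
    return p / p.sum()

# toy soft Q-values over four discrete actions for two base tasks
q1 = np.array([2.0, 0.5, 0.1, 0.0])
q2 = np.array([0.0, 0.4, 0.2, 2.0])
b = 0.5  # weight of task 1 in the transfer reward r_b = b*r1 + (1-b)*r2

pi1, pi2 = boltzmann(q1), boltzmann(q2)

# "compositional optimism": act on the convex combination of the base Q's; for
# Boltzmann policies this is a renormalised geometric mean pi1^b * pi2^(1-b)
pi_co = boltzmann(b * q1 + (1 - b) * q2)

# a large divergence between the base policies is the warning sign that the
# optimistic composition may be far from the true optimum on the transfer task
kl = float(np.sum(pi1 * np.log(pi1 / pi2)))
print("composed policy:", np.round(pi_co, 3), " KL(pi1||pi2):", round(kl, 3))
```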
H1MW72AcK7
Optimal Control Via Neural Networks: A Convex Approach
[ "Yize Chen", "Yuanyuan Shi", "Baosen Zhang" ]
Control of complex systems involves both system identification and controller design. Deep neural networks have proven to be successful in many identification tasks, however, from model-based control perspective, these networks are difficult to work with because they are typically nonlinear and nonconvex. Therefore many systems are still identified and controlled based on simple linear models despite their poor representation capability. In this paper we bridge the gap between model accuracy and control tractability faced by neural networks, by explicitly constructing networks that are convex with respect to their inputs. We show that these input convex networks can be trained to obtain accurate models of complex physical systems. In particular, we design input convex recurrent neural networks to capture temporal behavior of dynamical systems. Then optimal controllers can be achieved via solving a convex model predictive control problem. Experiment results demonstrate the good potential of the proposed input convex neural network based approach in a variety of control applications. In particular we show that in the MuJoCo locomotion tasks, we could achieve over 10% higher performance using 5 times less time compared with state-of-the-art model-based reinforcement learning method; and in the building HVAC control example, our method achieved up to 20% energy reduction compared with classic linear models.
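A minimal sketch of the input-convex construction this abstract refers to, in the spirit of the ICNN family it builds on. The depth, widths, and the clamping used to keep the pass-through weights non-negative are illustrative assumptions rather than the authors' architecture; the only point demonstrated is that with non-negative hidden-to-hidden weights and a convex, non-decreasing activation, the scalar output is convex in the input.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyICNN(nn.Module):
    """Scalar-output input convex network: each hidden layer combines a fresh
    affine map of the input u with a non-negative linear map of the previous
    hidden state, passed through ReLU (convex and non-decreasing), so the
    output is convex in u."""

    def __init__(self, in_dim, hidden=64, depth=3):
        super().__init__()
        self.u_layers = nn.ModuleList([nn.Linear(in_dim, hidden) for _ in range(depth)])
        self.z_layers = nn.ModuleList([nn.Linear(hidden, hidden, bias=False) for _ in range(depth - 1)])
        self.u_out = nn.Linear(in_dim, 1)
        self.z_out = nn.Linear(hidden, 1, bias=False)

    def forward(self, u):
        z = F.relu(self.u_layers[0](u))
        for u_lin, z_lin in zip(self.u_layers[1:], self.z_layers):
            # clamping keeps the pass-through weights non-negative, preserving convexity
            z = F.relu(u_lin(u) + F.linear(z, z_lin.weight.clamp(min=0)))
        return self.u_out(u) + F.linear(z, self.z_out.weight.clamp(min=0))

f = ToyICNN(in_dim=3)
u = torch.randn(5, 3)
print(f(u).shape)  # torch.Size([5, 1])
```

Because the output is convex in `u`, an optimisation over `u` (as in the model predictive control problem the abstract mentions) has no spurious local minima.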
[ "optimal control", "input convex neural network", "convex optimization" ]
https://openreview.net/pdf?id=H1MW72AcK7
https://openreview.net/forum?id=H1MW72AcK7
ICLR.cc/2019/Conference
2019
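The planning step debated in the thread below (taking gradient steps over the action sequence of an MPC problem instead of random search) can be illustrated with a short gradient-descent planner. Everything here is a toy stand-in: `dynamics` and `cost` are dummy differentiable functions, and the horizon, learning rate, and iteration count are arbitrary. With an input-convex model and convex stage costs, the same loop would be minimising a convex objective.

```python
import torch
from torch.optim import Adam

def plan(dynamics, cost, s0, horizon=10, steps=100, lr=0.1):
    """Gradient-based MPC: optimise the action sequence directly through a
    differentiable (ideally input-convex) model of the system."""
    actions = torch.zeros(horizon, s0.shape[-1], requires_grad=True)
    opt = Adam([actions], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        s, total = s0, 0.0
        for t in range(horizon):
            s = dynamics(s, actions[t])          # roll the learned model forward
            total = total + cost(s, actions[t])  # accumulate stage costs
        total.backward()
        opt.step()
    return actions.detach()

# dummy differentiable stand-ins for a learned model and a stage cost
dynamics = lambda s, u: 0.9 * s + u
cost = lambda s, u: (s ** 2).sum() + 0.1 * (u ** 2).sum()

u_star = plan(dynamics, cost, s0=torch.ones(3))
print(u_star.shape)  # torch.Size([10, 3])
```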
{ "note_id": [ "SJeeVtwt_B", "HylsgDCzeV", "B1ewimxjkV", "Hke3H7li1N", "BkgQZvtByN", "SJeInZNZR7", "r1lZ9YkkC7", "r1gu-tJJRQ", "r1lKWukkCQ", "S1eqHIJyCm", "HklIprkJRm", "B1e8JrJ1Am", "S1ljLTYjnm", "H1eRNOBqn7", "rJxXxUZ92X", "ryxdDdm3F7" ], "note_type": [ "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1570498855900, 1544902387019, 1544385438923, 1544385347595, 1544029946812, 1542697390392, 1542547848625, 1542547712113, 1542547456598, 1542547009715, 1542546877758, 1542546654442, 1541279058683, 1541195830079, 1541178859330, 1538173024233 ], "note_signatures": [ [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1332/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1332/Authors" ], [ "ICLR.cc/2019/Conference/Paper1332/Authors" ], [ "ICLR.cc/2019/Conference/Paper1332/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1332/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1332/Authors" ], [ "ICLR.cc/2019/Conference/Paper1332/Authors" ], [ "ICLR.cc/2019/Conference/Paper1332/Authors" ], [ "ICLR.cc/2019/Conference/Paper1332/Authors" ], [ "ICLR.cc/2019/Conference/Paper1332/Authors" ], [ "ICLR.cc/2019/Conference/Paper1332/Authors" ], [ "ICLR.cc/2019/Conference/Paper1332/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1332/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1332/AnonReviewer2" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"comment\": \"Is the source code available?\", \"title\": \"Source code available?\"}", "{\"metareview\": \"The paper makes progress on a problem that is still largely unexplored, presents promising results, and builds bridges with\\nprior work on optimal control. It designs input convex recurrent neural networks to capture temporal behavior of \\ndynamical systems; this then allows optimal controllers to be computed by solving a convex model predictive control problem.\\n\\nThere were initial critiques regarding some of the claims. These have now been clarified.\\nAlso, there is in the end a compromise between the (necessary) approximations of the input-convex model and the true dynamics, and being able to compute an optimal result. \\n\\nOverall, all reviewers and the AC are in agreement to see this paper accepted.\\nThere was extensive and productive interaction between the reviewers and authors.\\nIt makes contributions that will be of interest to many, and builds interesting bridges with known control methods.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"strong paper; nomination for oral presentation; nomination for best reviewer\"}", "{\"title\": \"Great insights and suggestions for our future work [2/2]\", \"comment\": \"3) I have a minor concern/question on the locomotion experiment: If I understand it correctly, there are two differences to Nagabandi et al.: the dynamics are input-convex, and the inference procedure uses gradient descent over the action space of the control problem that Nagabandi does with just random search. It's not clear which one of these is improving the accuracy, as taking a few gradient steps over the non-convex MPC problem in Nagabandi et al. is also reasonable and would likely also improve the performance, even if the true optimum is not reached. 
Did you try comparing to this as a baseline?\\n\\nWe agree with the reviewer\\u2019s insights that both convexity and gradient descent steps could bring benefits in the locomotion tasks. Similar to the energy consumption case, we also tried to directly take gradient steps using a normal neural network to represent the dynamics. It improved the reward over the random search method, but the reward was still lower than with our proposed ICNN method. We are still working on more experiments to gain a deeper understanding of the contributions from these two factors, and the general tradeoff between model accuracy and solution tractability in model predictive control (MPC) would be an important direction for our future work. Moreover, since most of the computational effort in Nagabandi et al.\\u2019s method is spent on the random search step, to make a fair comparison in the paper we only show the comparison with their original method on the task rewards and computation time.\\n\\n4) On the energy consumption experiment, is the RC model a linear dynamics model that only looks at single-step states and actions as g(x_t, u_t)? If so, comparing an ICNN that uses a previous trajectory as g(x_{t-n_w:t}, u_{t-n_w,t}) to this seems somewhat unfair as another reasonable baseline would be a linear model that also uses the previous trajectory. It could be interesting to include some portions in the paper about the ways you've seen the non-convex MPC with the RNN dynamics model fail that the ICNN model overcomes. For example, does the lack of smoothness cause the control problem to get stuck in bad local minimum?\\n\\nThe RC model is a linear dynamics model using single-step states, and it has been used as a standard method in building energy management. In order to demonstrate where the non-convex MPC with the RNN dynamics model fails and where the ICNN model succeeds, we included a comparison of the control performance of ICRNN with a normal RNN in Figure 4. We think the most interesting result is shown in Fig. 4(c). By using ICRNN, the final control actions (in red) are stable, while the control signals found by the normal RNN (in green) have many oscillations, which seem to be stuck in some local minima, and such drastic control variations are not desirable for physical system control.\\n\\n5) As a minor comment, please add parenthetical citations where appropriate to the paper.\\n\\nWe thank the reviewer for this helpful suggestion and we have modified the citation format in the revised paper.\"}", "{\"title\": \"Response to Reviewer 1- Great insights and suggestions for our future work [1/2]\", \"comment\": \"We thank the reviewer again for carefully reading our manuscript and providing so much valuable feedback. We address the reviewer\\u2019s concerns as follows:\\n\\n1) One limitation that is now present in the revised version of the paper that was not present in the original submission is that these dynamics models do *not* subsume linear dynamics models. This is because the dynamics are being approximated with convex and non-decreasing functions over the state space, while linear models *are* able to model decreasing functions over the state space while retaining convexity of the overall control problem. 
I would like an updated version of this paper to highlight this limitation of the method as I expect it to hurt some applications (although it is fine in other contexts).\\n\\nWe thank the reviewer for this insightful comment and we agree that the proposed input convex neural networks do not subsume linear dynamics models completely. Specifically, the proposed ICNN/ICRNN could only capture the dynamics convex and non-decreasing over the state space. But since we are not restricting the control space (system state at any time can be viewed as a function of the initial system state and all previous control inputs if one unrolls the system dynamics equation entirely), and we have explicitly included multiple previous states in the state transition dynamics $s_t = g(s_{t-n_w:t-1}, u_{t-n_w:t})$, so the non-decreasing constraint should not hurt the representation capacity by much.\\n\\nIn the revised manuscript, we add the following discussion in page 5 under Eq. (5) to emphasize the differences between input convex neural networks and linear models. \\u201cNote that as a general formulation, we do not include the duplication tricks on state variables, so the dynamics fitted by Eq. (5b) and (5c) are non-decreasing over state space, which are not equivalent to those dynamics represented by linear systems. However, since we are not restricting the control space (dynamics can be both increasing or decreasing on control variables), and we have explicitly included multiple previous states in the system transition dynamics, so the non-decreasing constraint over state space should not restrict the representation capacity by much. In Section 3, we theoretically prove the representability of proposed networks.\\u201d\\n\\n2) I think it should be made clearer that the motivation of the \\u2018input duplication trick\\u2019 over the control space is to restrict the networks to be non-decreasing over the state space while not restricting the control space. I think the duplicated inputs unnecessarily complicates the presentation at parts e.g. the end of Section 2.1.\\n\\nWe thank the reviewer for pointing out this misleading presentation at the end of Section 2.1 about the duplication. In the revised manuscript, we write out explicitly the convex and non-decreasing properties over the expanded control variable \\\\hat{u} rather than u.\"}", "{\"title\": \"Excellent revised paper.\", \"comment\": \"Thanks for the thorough response and revised version of the paper.\\nThe updates are commendable and I apologize for the delays from my end\\nas I needed the time to thoroughly look over the new manuscript.\\nI have updated my score from a 1 to a 6.\\nThe revised paper no longer has the significant errors with convexity\\nthat I found in my original review and I think that the models,\\nexperimental tasks, and analysis provide a useful contribution\\nto the community.\\n\\nOne limitation that is now present in the revised version of the\\npaper that was not present in the original submission is that\\nthese dynamics models do *not* subsume linear dynamics models.\\nThis is because the dynamics are being approximated with\\nconvex and non-decreasing functions over the state space,\\nwhile linear models *are* able to model decreasing functions\\nover the state space while retaining convexity of the overall\\ncontrol problem. 
I would like an updated version of this paper\\nto highlight this limitation of the method as I expect it to\\nhurt some applications (although it is fine in other contexts.)\\n\\nOn the presentation of the work, I think it should be made clearer\\nthat the motivation of the \\\"input duplication trick\\\" over the\\ncontrol space is to restrict the networks to be non-decreasing\\nover the state space while not restricting the control space.\\n\\nI think the duplicated inputs unnecessarily complicates the\\npresentation at parts, such as the borderline-misleading\\nstatement at the end of Section 2.1 that says:\\n\\n Note that such construction guarantees that the\\n network is convex and non-decreasing with respect\\n to the expanded inputs \\\\hat u = [u -u]\\n\\nThis part is almost misleading because the model is\\n*not* non-decreasing with respect to the controls u,\\n\\nI have a minor concern/question on the locomotion experiment:\\nIf I understand it correctly, there are two differences to\\nNagabandi et al.:\\n\\n1) The dynamics are input-convex, and\\n2) The inference procedure uses gradient descent over\\n the action space of the control problem that\\n Nagabandi does with just random search\\n\\nIt's not clear which one of these is improving the accuracy,\\nas taking a few gradient steps over the non-convex MPC problem\\nin Nagabandi et al. is also reasonable and would likely also\\nimprove the performance, even if the true optimum is not reached.\\nDid you try comparing to this as a baseline?\\n\\nOn the energy consumption experiment, is the RC model a\\nlinear dynamics model that only looks at single-step\\nstates and actions as g(x_t, u_t)?\\nIf so, comparing an ICNN that uses a previous trajectory\\nas g(x_{t-n_w:t}, u_{t-n_w,t}) to this seems somewhat\\nunfair as another reasonable baseline would be a linear\\nmodel that also uses the previous trajectory.\\n\\nIt could be interesting to include some portions in the paper\\nabout the ways you've seen the non-convex MPC with the RNN\\ndynamics model fail that the ICNN model overcomes.\\nFor example, does the lack of smoothness cause the control problem to\\nget stuck in bad local minimum?\\n\\nI still see section 3 as being very out-of-place within the\\nbroader context of this paper and I have not reviewed this\\nportion of the paper.\\n\\nAs a minor comment, please add parenthetical citations where\\nappropriate to the paper.\"}", "{\"title\": \"reviewers: comments on responses and revised-paper improvements?\", \"comment\": \"The detailed reviews and responses are commendable. Thanks to all.\", \"reviewers\": \"can you comment on whether the revised-paper and author responses have addressed your concerns?\\nIn particular, for reviewer 1, this would be important. Note that the revised version can also be viewed in a way that lets one easily see the differences.\\n\\n-- area chair\"}", "{\"title\": \"Revised Paper Uploaded to Address Reviewers' Feedback\", \"comment\": \"We would like to thank all the reviewers for their constructive comments. 
We have responded to each reviewer\\u2019s comments individually, and in summary, we have made the following clarifications or changes:\\n-Clarify the inputs and constraints on the input convex neural network weights \\n-Update texts, equations accordingly to avoid notation confusions\\n-Explain and add details for simulation results.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We are grateful to the reviewer for thoroughly reading our paper and providing these encouraging\\nwords. Below, we respond to the comments in detail.\\n\\n1) For Lemma 1 and Theorem 1, I wonder whether similar results can be established for non-convex functions. Intuitively, it seems that as long as assuming Lipschiz continuous, we can always approximate a function by a maximum of many affine functions, no matter it is convex or not. Is this right or something is missing?\\n\\t\\nThis is an interesting and subtle question. If we restrict ourselves to \\u201cmaximum\\u201d of affine functions, then we cannot construct functions that are not convex. This is from the fact that a pointwise max of convex functions (which include affine functions) is convex. As the reviewer points out, if we allow other types of operations, we can construct other types of functions. For example, if we change the pointwise max to the pointwise min, then we can approximate all Lipschiz concave functions. If we allow both max and min, we get all Lipschiz functions, but this just recover the result that neural networks can approximate most function types. We anticipate that different applications may require different function types to be approximated, and this is an active research direction for us. \\n\\n2) In the main paper, all experiments were aimed to address ICNN and ICRNN have good accuracy, but not they are easier to optimize due to convexity. In the abstract, it is mentioned \\\"... using 5X less time\\\", but I can only see this through appendix. A suggestion is at least describing some results on the comparison with training time in the main paper.\\n\\nWe thank the reviewer for pointing out this piece of missing information on running time in the main text. In the revised manuscript, we have added discussions on computation time in Section 4.1 to show our controller design would achieve both computation efficiency and performance improvement.\\n\\n3) In Appendix A, it seems the NN is not trained very well as shown in the left figure. Is this because the number of parameters of NN is restricted to be the same as in ICNN? Do training on both spend the same resource, ie, number of epoch? Such descriptions are necessary here.\\n\\nIn the toy example on classifying points in 2D grid, we used a 2-layer neural networks for both conventional neural networks and ICNN, with 200 neurons in each layer. We simulate the case when training data is small (100 training samples). We observe the results given by conventional neural networks are quite unstable by using different random seeds and are prone to be overfitting. On the contrary, by adding constraints on model weights to train the ICNN, fitting result is better using this small-size training data, while the learned landscape is also beneficial to the optimization problem. 
\\nIn the revised manuscript, we added more details on the model and training setup, the learning task, and the optimization task to address the confusion.\\n\\n4) In Table 2 in appendix, why the running time of ICNN increases by a magnitude for large H in Ant case?\\n\\nWe apologize for the typo in the case of Ant for computation time and the confusion it caused. We wanted to report everything in minutes but forgot to convert the time for the Ant case from seconds to minutes. In the revised version we have unified the running time under minutes. \\n\\nWe also thank the reviewer for carefully proofreading the paper and extracting the typos.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the encouraging words, and we are also expecting our work would be able to serve as an efficient framework for incorporating deep learning into real-world control problems. The reviewer\\u2019s comments on the robustness of proposed convex MPC are also quite valuable. We would try to explore in details about the learning errors and control robustness in the future work.\\n\\nHere are some responses to the reviewer\\u2019s comments:\\n\\n-The miss-replacement of Figure in Page 18\\nWe thank the reviewer for pointing this out. In the revised version, we have added the fitting result comparison (Fig. 7 in Appendix D.4) for ICNN and a normal neural network, which shows that ICNN is able to learn the MuJoCo dynamics efficiently.\\n\\n-The comparison with end-to-end RL approach\\n\\nWe thank the reviewer for this helpful suggestion. Conventional end-to-end RL approach directly learns the mapping from observations to actions without learning a system dynamics model. Such algorithms could achieve better performances, but are at the expense of much higher sample complexity. The model-free approach we compare with is the rllab implementation of trust region policy optimization (TRPO) [JL], which has obtained state-of-the-art results on MuJuCo tasks. We added Fig. 9 in Appendix D of the revised paper to compare our results with TRPO method and random shooting method [Nagabandi et al]. TRPO suffers from very high sample complexity and often requires millions of samples to achieve good performance. But here we only provided very few rollouts (since for physical system control, the sample collection might be limited by real-time operations, or it is difficult to explore the whole design space because suboptimal actions would lead to disastrous results), therefore, the performance by ICNN is much better than TRPO. Similarly to the model-based, model-free (MBMF) approach mentioned in [Nagabandi et al], the learned controller via ICNN could also provide good rollout samples and serve as a good initialization point for model-free RL method. \\n\\n\\nReferences\\n[JL] Schulman, John, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. \\\"Trust region policy optimization.\\\" In International Conference on Machine Learning, pp. 1889-1897. 2015.\"}", "{\"title\": \"Theoretical guarantees on the convexity of neural networks-MPC [3/3]\", \"comment\": [\"More information should be added to the last paragraph of Section 1 as it's claimed that the representational power of ICNNs and \\\"a nice mathematical property\\\" .\"], \"we_have_added_the_following_discussion_to_the_end_of_section_1\": \"Our method enjoys computational efficiency in two perspectives. 
Firstly, as stated in Theorem 2, compared to model based method which often employs piecewise linear functions, we could train ICNN or ICRNN (with exponentially less variables/pieces) using off-the-shelf deep learning packages such as PyTorch and Tensorflow, while the optimal control can be achieved by solving convex optimization; Secondly, compared to model-free (deep) reinforcement learning algorithms, which usually takes an end-to-end settings and requires lots of samples and long training time, our model is learning and controlling based on the system dynamics \\u2013 this can be much more sample efficient. There is also an ongoing debate on the model-free and model-based reinforcement learning algorithms [BR], and we look forward to incorporating learning into control tasks with optimality guarantees.\\n\\n-\\tWhat method are you using to solve the control problems in Eq (5) and (6)?\\n\\nIn Eq (5) and (6), since both the objectives and the constraints contain neural networks, we set up our networks with Tensorflow and solve the control problem using projected gradient descent method with adaptive step size. The gradients can be calculated via existing modules in Tensorflow for backpropagation. In both cases the optimization problems can be solved fairly fast and we observe the solution convergence after around 20 iterations. As shown in Figure 4 (c) of the original manuscript, the control actions outputted by solving (5) are stable and much better than the results achieved from regular neural network + MPC (which has no optimality guarantee). In the revised paper, we have included more details on the solution algorithm of Eq. (5) in the last paragraph of Section 2.\\n\\n-\\tWhy does Figure 6 of [Nagabandi et al.] have significantly higher rewards for their method, even in the K=5 case?\\n\\nWe thank the reviewer for carefully proofreading the figure, and we also thank Nagabandi et al open sourced their code. We re-run their simulations using all the default parameters and observed the reward for their cases are all around 10x less than they showed in Figure 6. We also refer to [KC], where their rewards on swimmer case are significantly smaller than the case as [Nagabandi et al]. We are not sure what is causing the difference in the performances, although we believe there may be difference in hyperparameter settings and the random starting points between our and [Nagabandi et al]\\u2019s result. We make sure the comparison in Figure 3 of our paper are using the same hyperparameters and training data, except the differences on control methods.\\n\\n-\\tThe experimental results in Figure 5\\n\\nWe thank the reviewer for bringing up this interesting point. In the toy example on classifying points in 2D grid, we used a 2-layer neural networks for both conventional neural networks and ICNN, with 200 neurons in each layer. We simulate the case when training data is small (100 training samples). We observe the results given by conventional neural networks are quite unstable by using different random seeds and are prone to be overfitting. On the contrary, by adding constraints on model weights to train the ICNN, fitting result is better using this small-size training data, while the learned landscape is also beneficial to the optimization problem. \\nIn the revised manuscript, we added more details on the model and training setup, the learning task, and the optimization task to address the confusion.\\n\\nReferences\\n\\n[MP] Alessandro Magnani and Stephen P Boyd. 
\\u201cConvex piecewise-linear fitting\\u201d. Optimization and Engineering, 10(1):1\\u201317, 2009. \\n\\n[BR] Recht, Benjamin. \\\"A tour of reinforcement learning: The view from continuous control.\\\" arXiv preprint arXiv:1806.09460(2018).\\n\\n[KC] Kurutach T, Clavera I, Duan Y, Tamar A, Abbeel P. \\u201cModel-Ensemble Trust-Region Policy Optimization\\u201d. arXiv preprint arXiv:1802.10592. 2018 Feb 28.\"}", "{\"title\": \"Theoretical guarantees on the convexity of neural networks-MPC [2/3]\", \"comment\": \"-\\tA stronger and more formal argument should be used to show that Equation (5) is a convex optimization problem as claimed.\\n\\nWe thank the reviewer for this helpful suggestion and agree that a more rigorous argument should be used to show that Equation (5) is a convex optimization problem. In the revised manuscript, we update Equation (5) to reflect the fact that we are using input-convex neural networks with all non-negative weights and the input negation trick. Equations (5d) and (5e) are added in the revised formulation which denote the augmented input variables and the consistency condition between u and its negation v.\\n\\nThen, in order to show Equation (5) is a convex optimization problem, we need to both the objective function and constraints are convex. Specifically,\\n(i). The objective function J(\\\\hat{x},y) (Equation(5a)) is convex, non-decreasing with respect to \\\\hat{x} and y;\\n(ii). The functions f and g are parameterized as ICRNNs with all weight matrices non-negative, which ensures f and g are convex and non-decreasing. Therefore rolling it out over time, the compositions remain convex with respect to the input.\\n(iii). The consistency constraint (5e) that one variable is the negation of the other is linear, therefore it preserves the convexity of optimization problems.\\n\\nWe have clarified this discussion in the revised manuscript.\\n\\n-\\tConvexity on Equation (6)\\n\\nAs a similar case to the optimization problem in Equation (5), the system dynamics is governed by Equation (6b). By restricting all weight matrices in ICNN to be non-negative and expanding the inputs, the MPC formulation for MuJuCo case is convex with respect to control action vectors at different time. As shown in Fig. 3, such convex properties also guaranteed that our results on a series of control tasks outperformed current neural network based dynamical model. \\n\\n\\nResponse to reviewer\\u2019s other comments:\\n-\\tFigure 1 hides too much information\\nWe agree and have revised Figure 1 to include more information about problem setup related to modeling objective, control objective and constraints. In the left plot of revised Figure 1, we describe how an input convex neural network can be trained to learn the system dynamics. Then the right plot demonstrates the overall control framework, where we solve a convex predictive control problem to find the optimal actions. The optimization steps are also based on objectives and dynamics constraints represented by the trained networks.\\n\\n-\\tThe theoretical results in Section 3 seem slightly out-of-place within the broader context of this paper\\n\\nWe thank the reviewer for this question. The key idea for this section is by making the neural network convex from input to output, we are able to obtain both good predictive accuracies and tractable computational optimization problems. 
There are two main results presented in Section 3, Theorem 1 is about the representation capacity of ICNN (can represent all convex functions) and Theorem 2 is on the representation efficiency of ICNN (can be exponentially more efficient than conventional convex piecewise linear fitting [MP]).\", \"since_our_proposed_control_framework_involves_two_stages\": \"1) using ICNN/ICRNN for system identification; 2) design an optimal controller via solving a predictive control problem. For the system identification stage, obviously, one benefit of using input convex networks (instead of conventional neural networks) is its computational trackability and optimality guarantee for the subsequent optimization stage. However, besides the trackability, reasonable representation capacity to model complex relationships is also desired as a system identification model. Theorem 1 and 2, on this aspect, demonstrate such representability and efficiency of ICNN. In the revised manuscript, we have added the above discussion at the beginning of Section 3 to improve the coherence of the paper.\"}", "{\"title\": \"Response to Reviewer 1-Theoretical guarantees on the convexity of neural networks-MPC [1/3]\", \"comment\": \"We are grateful to the reviewer for carefully reading our paper and providing many helpful suggestions and comments that have significantly improved the revised version. We also appreciate the opportunity to clarify our presentation of theorems, figures and experiment setups, as well as some unclear writing in the manuscript. We agree with the reviewer that the original manuscript contained several parts that were not clear and some typos, and it resulted in some confusion on the technical results. Overall, we note that the results of the paper remain unchanged: deep (recurrent) neural networks can be made input convex and effectively used in control of complex systems. Based on the comments made by the reviewers, we have made the figures more illustrative, and the formulations and the theorems more rigorous. Our implementation of the algorithms is consistent with the updated manuscript, so we stress that these changes are made to clarify the writing of the paper and all of the simulation and numerical results remain unchanged. Below we provide a point-by-point account of the comments.\\n\\n-\\tConcerns on the correctness of Proposition 2\\n\\nWe thank the reviewer for bringing up this important question and agree this was a point of confusion in our original manuscript. In Proposition 2 of the original submission, we stated that we only need to keep V and W non-negative and this will result in a network that is convex from input to output. This is true for a single step, but as the reviewer correctly points out, negative weights cannot go through composition and maintain convexity. Actually, Proposition 1 and Proposition 2 in our original submission give the sufficient condition for a network to be input-convex for a single step; when used for control purpose, these network structures (both ICNN and ICRNN) should be modified to their equivalent variants: restricting all weight matrices to be non-negative (element-wise) and augmenting the input to include its negation. 
Such network structure variants and \\u201cduplicate trick\\u201d have been mentioned in Section 3.1 Sketch of proof for Theorem 1 in our original manuscript, \\u201cWe first construct a neural network with ReLU activation functions and both positive and negative weights, then we show that the weights between different layers of the network can be restricted to be nonnegative by a simple duplication trick. Specifically, since the weights in the input layer and passthrough layers can be negative, we simply add a negation of each input variable (e.g. both x and \\u2212x are given as inputs) to the network\\u201d. We apologize for not making this point clear and the notational confusions in our previous manuscript. To clarify, for both the MuJoCo locomotion tasks and the building control experiments, we used the modified input-convex network structures with all weights non-negative and input negation duplicates instead of the conventional input-convex structure for single step (but these two structures could be equivalently transformed). \\nIn the revised paper, we explicitly explain the sufficient conditions for ICNN/ICRNN variants that can be used for control purpose. We also update Proposition 1 and 2 to ease the confusions of convexity under control settings. Also, we have updated Figure 2 accordingly to demonstrate the modified ICNN/ICRNN structure, input duplication, operations and activation functions used for our control settings. For all the empirical experiments, we will release our code after the openreview process for result validation, which demonstrated that proposed control framework via input-convex networks obtain both good identification accuracies and better control performance compared with regular neural networks or linear models.\"}", "{\"title\": \"Well-motived but may have serious issues. EDIT: Serious issues have been fixed.\", \"review\": \"This is a well-motived paper that considers bridging the gap\\nin discrete-time continuous-state/action optimal control\\nby approximating the system dynamics with a convex model class.\\nThe convex model class has more representational power than\\nlinear model classes while likely being more tractable and\\nstable than non-convex model classes.\\nThey show empirical results in Mujoco continuous-control\\nenvironments and in an HVAC example.\\n\\nI think this setup is a promising direction but I have\\nsignificant concerns with some of the details and claims\", \"in_this_work\": \"1. Proposition 2 is wrong and the proposed input-convex recurrent\\n neural network architecture not input-convex.\\n To fix this, the D1 parameters should also be non-negative.\\n To show why the proposition is wrong, consider the convexity of y2\\n with respect to x1, using g to denote the activation function:\\n\\n z1 = g(U x1 + ...)\\n y2 = g(D1 z1 + ...)\\n\\n Thus making\\n\\n y2 = g(D1 g(U x1 + ...) + ...)\\n\\n y2 is *not* necessarily convex with respect to x1 because D1 takes\\n an unrestricted weighted sum of the convex functions g(U x1 + ...)\\n\\n With the ICRNN architecture as described in the paper not being\\n input-convex, I do not know how to interpret the empirical findings\\n in Section 4.2 that use this architecture.\\n\\n2. 
I think a stronger and more formal argument should be used to show\\n that Equation (5) is a convex optimization problem as claimed.\\n It has arbitrary convex functions on the equality constraints that\\n are composed with each other and then used in the objective.\\n Even with parts of the objective being convex and non-decreasing\\n as the text mentions, it's not clear that this is sufficient when\\n combined with the composed functions in the constraints.\\n\\n3. I have similar concerns with the convexity of Equation (6).\\n Consider the convexity of x3 with respect to u1, where g is\\n now an input-convex neural network (that is not recurrent):\\n\\n x3 = g(g(x1, u1), u2)\\n \\n This composes two convex functions that do *not* have non-decreasing\\n properties and therefore introduces an equality constraint that\\n is not necessarily even convex, almost certainly making the domain\\n of this problem non-convex. I think a similar argument can be\\n used to show why Equation (5) is not convex.\\n\\nIn addition to these significant concerns, I have a few other\\nminor comments.\\n\\n1. Figure 1 hides too much information. It would be useful to know,\\n for example, that the ICNN portion at the bottom right\\n is solving a control optimization problem with an ICNN as\\n part of the constraints.\\n\\n2. The theoretical results in Section 3 seem slightly out-of-place within\\n the broader context of this paper but are perhaps of standalone interest.\\n Due to my concerns above I did not go into the details in this portion.\\n\\n3. I think more information should be added to the last paragraph of\\n Section 1 as it's claimed that the representational power of\\n ICNNs and \\\"a nice mathematical property\\\" help improve the\\n computational time of the method, but it's not clear why\\n this is and this connection is not made anywhere else in the paper.\\n\\n4. What method are you using to solve the control problems in\\n Eq (5) and (6)?\\n\\n5. The empirical setup and tasks seems identical to [Nagabandi et al.].\\n Figure 3 directly compares to the K=100 case of their method.\\n Why does Fig 6 of [Nagabandi et al.] have significantly higher rewards\\n for their method, even in the K=5 case?\\n\\n6. In Figure 5, f_NN seems surprisingly bad in the red region of the\\n data on the left side. Is this because the model is not using\\n many parameters? What are the sizes of the networks used?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A good paper that bridges the gap between neural networks and MPC.\", \"review\": \"This paper proposes to use input convex neural networks (ICNN) to capture a complex relationship between control inputs and system dynamics, and then use trained ICNN to form a model predictive control (MPC) problem for control tasks.\\nThe paper is well-written and bridges the gap between neural networks and MPC.\\nThe main contribution of this paper is to use ICNN for learning system dynamics. ICNN is a neural network that only contains non-negative weights. Thanks to this constraint, ICNN is convex with respect to an input, therefore MPC problem with an ICNN model and additional convex constraints on control inputs is a convex optimization problem.\\nWhile it is not easy to solve such a convex problem, it has a global optimum, and a gradient descent algorithm will eventually reach such a point. 
It should also be noted that a convex problem has a robustness with respect to an initial starting point and an ICNN model itself as well. The latter is pretty important, since training ICNN (or NN) is a non-convex optimization, so the parameters in trained ICNN (or NN) model can vary depending on the initial random weights and learning rates, etc. Since a convex MPC has some robustness (or margin) over an error or deviation in system dynamics, while non-convex MPC does not, using ICNN can also stabilize the control inputs in MPC.\\nOverall, I believe that using ICNN to from convex MPC is a sample-efficient, non-intrusive way of constructing a controller with unknown dynamics. Below are some minor suggestions to improve this paper.\\n\\n-- Page 18, there is Fig.??. Please fix this.\\n-- In experiments, could you compare the result with a conventional end-to-end RL approach? I know this is not a main point of this paper, but it can be more compelling.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A great work in quality, originality, significance, some questions to authors\", \"review\": \"The paper proposes neural networks which are convex on inputs data to control problems. These types of networks, constructed based on either MLP or RNN, are shown to have similar representation power as their non-convex versions, thus are potentially able to better capture the dynamics behind complex systems compared with linear models. On the other hand, convexity on inputs brings much convenience to the later optimization part, because there are no worries on global/local minimum or escaping saddle points. In other words, convex but nonlinear provides not only enough search space, but also fast and tractable optimization. The compromise here is the size of memory, since 1) more weights and biases are needed to connect inputs and hidden layers in such nets and 2) we need to store also the negative parts on a portion of weights.\\n\\nEven though the idea of convex networks were not new, this work is novel in extending input convex RNN and applying it into dynamic control problems. As the main theoretical contribution, Theorem 2 shows that to have same representation power, input convex nets use polynomial number of activation functions, compared with exponential from using a set of affine functions. Experiments also show such effectiveness. The paper is clearly and nicely written. These are reasons I suggest accept.\", \"questions_and_suggestions\": \"1) For Lemma 1 and Theorem 1, I wonder whether similar results can be established for non-convex functions. Intuitively, it seems that as long as assuming Lipschiz continuous, we can always approximate a function by a maximum of many affine functions, no matter it is convex or not. Is this right or something is missing?\\n\\n2) In the main paper, all experiments were aimed to address ICNN and ICRNN have good accuracy, but not they are easier to optimize due to convexity. In the abstract, it is mentioned \\\"... using 5X less time\\\", but I can only see this through appendix. A suggestion is at least describing some results on the comparison with training time in the main paper.\\n\\n3) In Appendix A, it seems the NN is not trained very well as shown in the left figure. Is this because the number of parameters of NN is restricted to be the same as in ICNN? 
Do training on both spend the same resource, ie, number of epoch? Such descriptions are necessary here.\\n\\n4) In Table 2 in appendix, why the running time of ICNN increases by a magnitude for large H in Ant case?\", \"typos\": \"Page 1 \\\"simple control algorithms HAS ...\\\"\\n\\tPage 7 paragraph \\\"Baselines\\\": \\\"Such (a) method\\\".\\n\\tIn the last line of Table 2, 979.73 should be bold instead of 5577.\\n\\tThere is a ?? in appendix D.4.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"This paper proposed a powerful tool for an important category of model based control system. For model based control, there are usually two steps: 1. modeling the system as accurate as you can, and 2. Optimize over the fitted system to find the best control strategy. It is known that convex optimization is always tractable, so if the true system can be convex, or almost exactly regressed by a convex function, we can make use of it. However, for modeling complex systems, where neural network becomes more popular, there is no guarantee that step 1 outputs a convex system -- even if the system is convex, we do not know whether the model is convex unless we can prove that they are close enough, which is usually difficult. So paper such as https://arxiv.org/abs/1708.02596 use a non-convex model and have to search for the control strategy by griding and testing the entire space almost exhaustedly.\\n\\nThis paper propose an NN structure which is simply based on only Relu, but guarantees a convex modeling of the system. Compared with maximum piecewise linear modeling, it only introduces a fixed 2 piece linear activating module, but dramatically decreases the number of parameters from exponential to polynomial, which makes it realizable.\\n\\nBut if it's true, now something interesting might come up. The authors show that even for seemingly hard scenarios including Mujoco, it performances well to attempt a convex model, whose optimizer corresponds to good control scheme in true problems. It likely means that, despite the impossibility to show the model is everywhere accurate, the model well sketches the landscape of true loss around its minimizer (where is probably locally convex) and the trajectory of optimizing iterations, where people are interested. I'm not sure if that's true. NN always surprise people, but it's definitely worth rethinking and experimenting, based on the result on such complex tasks in this paper.\", \"title\": \"Efficient model based control of almost convex systems -- convexity comes in surprisingly\"}" ] }
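As a reading aid for the exchange above: the recurring technical point is that an input convex network keeps its layer-to-layer weights non-negative and uses convex, non-decreasing activations, and that feeding the negated input alongside the original (the "duplication trick" discussed by the reviewer and authors) recovers flexibility over the control variables. Below is a minimal, self-contained sketch of that construction in the style of a standard ICNN; it is not the authors' code, and the function names, layer sizes and random test data are illustrative assumptions only.

import numpy as np

def relu(v):
    # Convex and non-decreasing activation, as required for input convexity.
    return np.maximum(v, 0.0)

def icnn_forward(x, Wx, Wz, b):
    # Wx: weights applied directly to the input x (may be signed).
    # Wz: layer-to-layer weights, forced non-negative here via np.abs.
    # With these restrictions the scalar output is convex in x.
    z = relu(Wx[0] @ x + b[0])
    for k in range(1, len(Wx)):
        z = relu(np.abs(Wz[k - 1]) @ z + Wx[k] @ x + b[k])
    return float(z.sum())

rng = np.random.default_rng(0)
x = rng.normal(size=3)
Wx = [rng.normal(size=(8, 3)), rng.normal(size=(1, 3))]
Wz = [rng.normal(size=(1, 8))]
b = [rng.normal(size=8), rng.normal(size=1)]
print(icnn_forward(x, Wx, Wz, b))
# If, in addition, every weight (including Wx) is kept non-negative, the output is also
# non-decreasing in its input; feeding the expanded input [u, -u] then restores the
# ability to model decreasing dependence on the controls u, which is the point of the trick.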
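The authors also state that the resulting control problem is solved with projected gradient descent over the action sequence. A rough sketch of that idea, under stated assumptions, follows: the dynamics model, stage cost, horizon, box bounds and step size are placeholders, and finite differences stand in for the automatic differentiation a real implementation would use with the learned ICNN/ICRNN.

import numpy as np

def rollout_cost(actions, dynamics, x0):
    # Accumulate a toy quadratic stage cost along the predicted trajectory.
    x, total = x0, 0.0
    for u in actions:
        x = dynamics(x, u)
        total += np.sum(x ** 2) + 0.1 * np.sum(u ** 2)
    return total

def projected_gradient(actions, dynamics, x0, lo=-1.0, hi=1.0, step=0.05, iters=50, eps=1e-4):
    a = actions.copy()
    for _ in range(iters):
        g = np.zeros_like(a)
        for idx in np.ndindex(a.shape):          # numerical gradient of the rollout cost
            d = np.zeros_like(a)
            d[idx] = eps
            g[idx] = (rollout_cost(a + d, dynamics, x0) - rollout_cost(a - d, dynamics, x0)) / (2 * eps)
        a = np.clip(a - step * g, lo, hi)        # gradient step followed by projection onto the box
    return a

toy_dynamics = lambda x, u: 0.9 * x + 0.5 * u    # stand-in for the learned convex dynamics model
x0 = np.array([1.0, -2.0])
u0 = np.zeros((5, 2))                            # horizon 5, two-dimensional actions
print(projected_gradient(u0, toy_dynamics, x0))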
r1fWmnR5tm
Learning to Search Efficient DenseNet with Layer-wise Pruning
[ "Xuanyang Zhang", "Hao liu", "Zhanxing Zhu", "Zenglin Xu" ]
Deep neural networks have achieved outstanding performance in many real-world applications at the expense of huge computational resources. DenseNet, one of the recently proposed neural network architectures, has achieved state-of-the-art performance in many visual tasks. However, it has great redundancy due to the dense connections of the internal structure, which leads to high computational costs in training such dense networks. To address this issue, we design a reinforcement learning framework to search for efficient DenseNet architectures with layer-wise pruning (LWP) for different tasks, while retaining the original advantages of DenseNet, such as feature reuse, short paths, etc. In this framework, an agent evaluates the importance of each connection between any two block layers, and prunes the redundant connections. In addition, a novel reward-shaping trick is introduced to make DenseNet reach a better trade-off between accuracy and floating point operations (FLOPs). Our experiments show that DenseNet with LWP is more compact and efficient than existing alternatives.
[ "reinforcement learning", "DenseNet", "neural network compression" ]
https://openreview.net/pdf?id=r1fWmnR5tm
https://openreview.net/forum?id=r1fWmnR5tm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJeoEo3A1E", "rJgkzj5Rn7", "SJe-bk0on7", "rkllOklK3Q" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544633139236, 1541479174944, 1541295864938, 1541107559624 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1331/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1331/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1331/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1331/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes to apply Neural Architecture Search for pruning DenseNet.\\n\\nThe reviewers and AC note the potential weaknesses of the paper in various aspects, and decided that the authors need more works to publish.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited contribution\"}", "{\"title\": \"Straightforward Idea with Limited Contribution\", \"review\": \"This paper proposes to apply Neural Architecture Search (NAS) for connectivity pruning to improve the parameter efficiency of DenseNet. The idea is straightforward and the paper is well organized and easy to follow.\\n\\nMy major concern is the limited contribution. Applying deep reinforcement learning (DRL) and following the AutoML framework for architecture/parameter pruning has been extensively investigated during the past two years. For instance, this work has a similar motivation and design \\\"AMC: AutoML for Model Compression and Acceleration on Mobile Devices.\\\"\\n\\nThe experimental results also show a limited efficiency improvement according to Table 1. Although this is a debatable drawback compared with the novelty/contribution concern, it worth to reconsider the motivation of the proposed method given the fact that the AutoML framework is extremely expensive due to the DRL design.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"RL based method for pruning a pre-trained network\", \"review\": \"This paper proposes a layer-based pruning method based on reinforment learning for pre-train networks.\", \"there_are_several_major_issues_for_my_rating\": [\"Lack of perspective. I do not understand where this paper sits compared to other compression methods. If this is about RL great, if this is about compression, there is a lack of related work and proper comparisons to existing methods (at least concenptual)\", \"Claims about the benefits of not needed expertise are not clear to me as, from the results, seems like expertise is needed to set the hyperparameters.\", \"experiments are not convincing. I would like to see something about computational costs. Current methods aim at minimizing training / finetuning costs while maintaining the accuracy. How does this stands in that regard? How much time is needed to prune one of these models? How many resources?\", \"Would it be possible to add this process into a training from scratch method?\", \"how would this compare to training methods that integrate compression strategies?\", \"Table 1 shows incomplete results, why? Also, there is a big gap between accuracy/number of parameters trade-of between this method and other presented in that table. 
Why?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The evaluation could be improved.\", \"review\": \"The paper introduces RL based approach to prune layers in a DenseNet. This work extends BlockDrop to DenseNet architecture making the controller independent form the input image. The approach is evaluated on CIFAR10 and CIFAR100 datasets as well as on ImageNet showing promising results.\\n\\nIn order to improve the paper, the authors could take into consideration the following points:\\n\\n1. Given the similarity of the approach with BlockDrop, I would suggest to discuss it in the introduction section clearly stating the similarities and the differences with the proposed approach. \\n2. BlockDrop seems to introduce a general framework of policy network to prune neural networks. However, the authors claim that BlockDrop \\\"can only be applied to ResNets or its variants\\\". Could the authors comment on this? \\n3. In the abstract, the authors claim: \\\"Our experiments show that DenseNet with LWP is more compact and efficient than existing alternatives\\\". It is hard to asses if the statement is correct given the evidence presented in the experimental section. It is not clear if the method is more efficient and compact than others, e. g. CondenseNet. \\n4. In the experimental section, addressing the following questions would make the section stronger: What is more important FLOPs or number of parameters? What is the accuracy drop we should allow to pay for reduction in number of parameters or FLOPs?\\n5. For the evaluation, I would suggest to show that the learned policy is better than a random one: e. g. not using the controller to define policy (in line 20 of the algorithm) and using a random random policy instead.\\n6. In Table 1, some entries for DenseNet LWP are missing. Is the network converging for this setups? \\n7. \\\\sigma is not explained in section 3.3. What is the intuition behind this hyper parameter?\\n8. I'd suggest moving related work section to background section and expanding it a bit.\\n9. In the introduction: \\\"... it achieved state-of-the-art results across several highly competitive datasets\\\". Please add citations accordingly.\", \"additional_comments\": \"1. It might be interesting to compare the method introduced in the paper to a scenario where the controller is conditioned on an input image and adaptively selects the connections/layers in DenseNet at inference time.\\n2. It might be interesting to report the number of connections in Table 1 for all the models.\\n\\nOverall, I liked the ideas presented in the paper. However, I think that the high degree of overlap with BlockDrop should be addressed by clearly stating the differences in the introduction section. Moreover, I encourage the authors to include missing results in Table 1 and run a comparison to random policy. In the current version of the manuscript, it is hard to compare among different methods, thus, finding a metric or a visualization that would clearly outline the \\\"efficiency and compactness\\\" of the method would make the paper stronger.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HJlWXhC5Km
Learning to Control Visual Abstractions for Structured Exploration in Deep Reinforcement Learning
[ "catalin ionescu", "tejas kulkarni", "aaron van de oord", "andriy mnih", "vlad mnih" ]
Exploration in environments with sparse rewards is a key challenge for reinforcement learning. How do we design agents with generic inductive biases so that they can explore in a consistent manner instead of just using local exploration schemes like epsilon-greedy? We propose an unsupervised reinforcement learning agent which learns a discrete pixel grouping model that preserves the spatial geometry of the sensors, and implicitly of the environment as well. We use this representation to derive geometric intrinsic reward functions, like centroid coordinates and area, and learn policies to control each one of them with off-policy learning. These policies form a basis set of behaviors (options) which allow us to explore in a consistent way, and we use them in a hierarchical reinforcement learning setup to solve for extrinsically defined rewards. We show that our approach can scale to a variety of domains with competitive performance, including navigation in 3D environments and Atari games with sparse rewards.
[ "exploration", "deep reinforcement learning", "intrinsic motivation", "unsupervised learning" ]
https://openreview.net/pdf?id=HJlWXhC5Km
https://openreview.net/forum?id=HJlWXhC5Km
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJxYaNqflV", "r1g7AgzD0X", "ByePRcoH07", "HJgpllc4RX", "r1x_6y94RX", "Bkl0LJ9N0X", "rklGJycVCm", "rkg7LLJ6h7", "S1eCQpbq3X", "rkg-OWpO2Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544885441495, 1543082186761, 1542990543069, 1542918132851, 1542918079618, 1542917974327, 1542917849529, 1541367370823, 1541180710444, 1541095784952 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1330/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1330/Authors" ], [ "ICLR.cc/2019/Conference/Paper1330/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1330/Authors" ], [ "ICLR.cc/2019/Conference/Paper1330/Authors" ], [ "ICLR.cc/2019/Conference/Paper1330/Authors" ], [ "ICLR.cc/2019/Conference/Paper1330/Authors" ], [ "ICLR.cc/2019/Conference/Paper1330/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1330/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1330/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents an unsupervised visual abstraction model, used for reinforcement learning tasks. It is trained through intrinsic rewards, generated from temporal differences of inputs. This is similar to \\\"learning to control pixels\\\". The method is tested in DM Lab (3D environment, 2D navigation tasks) and Atari (Montezuma's Revenge).\\n\\nThe paper is at times hard to follow, and it seems the improvements accompanying the rebuttals did not convince reviewers to change their notes significantly. The experiments do not contain enough comparisons to other models, baselines, nor ablations, to sustain the claims.\\n\\nIn its current form, this is not acceptable for publication at ICLR.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting structured exploration idea, not clear nor detailed enough\"}", "{\"title\": \"termination condition\", \"comment\": \"Thanks for this question. The options in our setting maximize or minimize entity attributes, so there isn't a natural goal success criteria (e.g. sometimes there could be obstacles in an entity's path or none at all). In some cases it might be possible to make statements about goal achievement, for instance if the agent can learn to reason about immovable obstacles. This could be an interesting direction to explore in the future. Also in other papers which have considered adaptive T in the setting of relational goals (e.g. Kulkarni et al.), an internal goal critic could clearly measure goal success contrary to our goal space. In practice, a single T (=20) works well across all our experiments and domains.\"}", "{\"title\": \"Option termination condition\", \"comment\": \"Thanks for the rebuttal. Can you explain why the option termination condition is that Q_{meta} picks an action for every fixed number of steps? Why the termination condition is independent with the task, the environment and the state? Why this is a good choice?\"}", "{\"title\": \"Significantly improved the clarity of our writing in the revision\", \"comment\": \"Thanks a lot for the critical feedback. We have improved the clarity of our writing and made the contributions clearer. 
We urge you to read the revised paper and hopefully it will convince you of our contributions.\", \"q\": \"\\u201cwhat is Z_c in eqn 2?\\u201d\\nThis is a new random variable that we have to introduce in order to have a well defined objective. This has been explained more clearly in the updated text.\", \"the_original_image_is_transformed_by_applying_the_following_operators\": \"additive color changes in HSV space, horizontal flips and spatial shifts.\"}", "{\"title\": \"Addressed concerns about the experimental setup/experiments and clarity of the ideas/contributions\", \"comment\": \"Thanks a lot for all the feedback and suggestions. This has helped us improve the clarity of our writing.\", \"q\": \"\\u201cAnother concern I have is on some of the experiment results \\u2026 \\u201c\\nWe are using the Espeholt et al. training setup but with Q(lambda) to make it comparable with our agent. We have clarified this in the revised supplemental. Our method either outperforms or is in the same ballpark as the baseline. In tasks with sparse rewards, our method is especially beneficial as the options bank aids temporally extended exploration. Our main claim is that prior DRL agents have not been able to object based structured exploration from pixels. Scaling this approach and making it more robust is an open question but we believe we have shown a promising avenue along these lines.\"}", "{\"title\": \"Improved clarity of the writing, clearly wrote down our contributions and made revisions to the paper\", \"comment\": \"We want to thank the reviewer for valuable feedback in improving the clarity of the paper.\", \"q\": \"\\u201c Please clarify what Qmeta and Qtask do in the text right in the beginning.\\u201c\\nQ_meta picks an internal action, i.e. indexes into either the options bank or Q_task (gets extrinsic reward and operates over low level actions), and is optimized to maximize the environment task reward. This choice is fixed for T steps and the chosen sub-controller executes real actions in the environment. We added a clear explanation in the introduction as well as Sec 3.\"}", "{\"title\": \"Rebuttal summary\", \"comment\": \"We want to thank all reviewers for their critical feedback and suggestions, which has already helped us improve the paper\\u2019s clarity and presentation. All the reviewers agree that the paper tackles an interesting and important problem of discovering spatial and temporal abstractions given raw observations and actions. The main concerns were about the clarity of the writing, making it hard to clearly assess the underlying contributions.\\n\\nWe have significantly improved the presentation of our ideas considering all the feedback and explicitly made our contributions clearer. Our two key contributions are: (1) An information theoretic loss and architecture to learn spatio-temporal visual abstractions given raw pixels and actions, (2) a new agent architecture which learns temporal abstractions grounded in the geometry of the discovered visual abstractions. There have been several agent architectures in the past that make use of object-oriented information for constructing states and to aid exploration. 
However, this is the first agent architecture that simultaneously learns visual and temporal abstractions, while demonstrating clear improvements over baselines on hard 3D navigation and Atari games.\\n\\nWe urge all reviewers to read the updated version of the paper, as we have carefully addressed and incorporated all critical feedback and suggestions.\"}", "{\"title\": \"The paper is unfortunately written quite confusingly such that it is hard to evaluate the contribution of the potentially interesting ideas.\", \"review\": \"The appproach introduces visual abstractions that are used for reinforcement learning. The abstractions are learned using a lower bound on the mutual information and options are created to generate different measurements for each abstraction. The algorithm hence learns to \\\"control\\\" each abstraction as well as to select the options to achieve the overall task. The algorithm is tested on a 3D navigation task and a few Atari tasks which are known for difficult exploration.\\n\\nThe paper might contain some interesting ideas, however, I am quite confused about the paper due to lack of clarity in writing. The approach is not properly motivated, many equations are not really eplained and important information is missing, so it is really hard to evaluate the contribution of the approach. Please see below for more comments:\\n- It is unclear how the intrinsic reward is defined (which is critical to understand the approach).\\n- It is unclear what the M different measurements are or for what they are used for. \\n- It is unclear qhy equation 1 defines a classification loss. Distribution q is not defined in Eq (1).\\n- I do not understand the description of Q-meta in caption of Figure 2, \\\"Qmeta acts every T steps, which is the fixed temporal\\ncommitment window, and outputs an action to select and execute either: (1) composition over Q\\nfunction from the option bank indexed by a particular entity and an intrinsic reward function or (2)\\nthe Qtask policy which outputs raw actions.\\\" How can an action be a composition over Q-function and a intrinisic reward function? Please clarify what Qmeta and Qtask do in the text right in the beginning. \\n\\nI have to say that the paper confused me too much that it is likely I missed the point of the paper. On the positive side, I think the learning of the abstractions using lower bounds of the mutual information is very interesting. The authors should work on their presentation and this could be a very nice paper.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"A Structured Exploration Algorithm with Visual Abstractions\", \"review\": \"This paper proposed an algorithm for structured exploration in deep reinforcement learning via learning the visual abstractions from pixels. The proposed method learns discrete visual abstractions and derives intrinsic reward functions from them so as to help the agent to optimize the policy.\\n\\nThe proposed method is interesting in that learning the visual abstractions together with the policy may assist in computing an optimal policy. The method is learning a meta Q function and (E * M+1) other Q functions. The authors mentioned that their work is most similar to hierarchical-DQN (Kulkarni et al., 2016) but this work required hand-crafted instance segmentation and the agent architecture do not learn about many intrinsic rewards learners. 
However, I am concerned if the proposed method solved the problem with the need of hand-crafted instance segmentation since, as shown in the Algorithm 1 and the caption of Figure 2, Q_{meta} acts every T steps. I do not understand why the meta Q function is used to propose actions for every fixed number of steps. Besides that, though the proposed method does have many intrinsic reward functions (in fact, there are E * M additional intrinsic reward functions). However, the authors did not show in the experiments if having too many intrinsic reward functions helps a lot. It will be better if the authors can show that, larger values for E or M can make the performances better.\\n\\nAnother concern I have is on some of the experiment results. For the experiment results in Figure 5 and 6, only in the left figures can the results of the proposed methods outperform the baselines. Besides that, the authors may need to describe the baseline methods in the experiments in more details.\\n\\nAlso, it will be better if the authors can improve the paper a little bit with the writing. For example, it will be better if the authors can explain the variables X, Y and the distribution q when mentioning Equation 1 so that it is easier to understand the paper. Also, there are some typos, such as the section reference on line 10 of Algorithm 1, the definition of the g function on the last line of page 3 (I guess the authors want to write \\\"{0...E}\\\" instead of \\\"{0, E}\\\") and the second sentence of the experiment section (at least I did not see the supplementary sections, but the authors mentioned that). It is better if these typos can be fixed.\", \"references\": \"Tejas D Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Josh Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. In Advances in neural information processing systems, pp. 3675\\u20133683, 2016.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Insufficient clarity\", \"review\": \"REVISION: thanks for the clarification. I have slightly increased my rating (to 4).\\n\\nThis paper tackles a very interesting subject but lacks sufficient clarity of presentation to allow me to do a proper review.\\n\\nFirst, there are many sentences which are not well-formed or are ambiguous (in pretty much all the sections). Then there are terms which are introduced without being first clearly explained or defined. Finally, there are issues with the mathematical clarity as well, with many notations which are used without being explained or defined. Sometimes one can figure out the missing information later (e.g., fig 1 talks about mutual information objectives without stating if we want to maximize or minimize it, but later in the text we figure that out) but it makes reading very difficult.\\n\\nWhat is a 'transformed one' (on page 2)\\nWhat is a 'geometric intrinsic reward'?\\nWhere are the intrinsic rewards defined?\\nWhat is a 'non-parametric classifier'? A neural net? an kernel SVM?\", \"there_are_also_some_mathematical_problems\": [\"if f (page 3) has a discrete output, then it will probably lose information, so it cannot be inverted (contrary to the stated assumption that f(x)!=f(y) for x!=y)\", \"what are the differences between the different Q functions being defined? do the correspond to different action spaces? What is Q_task? 
What is pi_meta?\", \"in eqn 2, I do not think that the log q_c term maximizes the mutual information between actions and (G(t),G(t+1)), i.e. it would be missing an entropy term\", \"what is Z_c in eqn 2?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
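The record above repeatedly discusses a hierarchical agent in which a meta-controller Q_meta re-selects, every T steps, either the extrinsic-reward controller Q_task or one of E * M option Q-functions from an options bank, and the chosen sub-controller then emits low-level actions. A minimal sketch of that selection loop follows; the constants and the stub scoring functions (`q_meta`, `q_option`, `q_task`, returning random values) are illustrative assumptions, not the paper's actual networks or training procedure.

```python
import random

E, M, T = 4, 3, 10        # entities, measurements per entity, commitment window (illustrative)
NUM_ACTIONS = 6           # size of the low-level action set (illustrative)

# Stand-ins for learned Q-networks; random scores keep the sketch runnable.
def q_meta(obs):
    # One score per option in the bank, plus one for the task controller.
    return [random.random() for _ in range(E * M + 1)]

def q_option(option_id, obs):
    # Low-level action values under the option's intrinsic reward.
    return [random.random() for _ in range(NUM_ACTIONS)]

def q_task(obs):
    # Low-level action values under the extrinsic task reward.
    return [random.random() for _ in range(NUM_ACTIONS)]

current_choice = 0
for step in range(3 * T):
    obs = None  # placeholder observation (pixels in the actual agent)
    if step % T == 0:
        # Meta-controller commits to one sub-controller for the next T steps.
        scores = q_meta(obs)
        current_choice = max(range(len(scores)), key=scores.__getitem__)
    if current_choice == E * M:
        action_values = q_task(obs)                     # extrinsic-reward controller
    else:
        action_values = q_option(current_choice, obs)   # option from the bank
    action = max(range(NUM_ACTIONS), key=action_values.__getitem__)
```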
B1g-X3RqKm
A Proposed Hierarchy of Deep Learning Tasks
[ "Joel Hestness", "Sharan Narang", "Newsha Ardalani", "Heewoo Jun", "Hassan Kianinejad", "Md. Mostofa Ali Patwary", "Yang Yang", "Yanqi Zhou", "Gregory Diamos", "Kenneth Church" ]
As the pace of deep learning innovation accelerates, it becomes increasingly important to organize the space of problems by relative difficulty. Looking to other fields for inspiration, we see analogies to the Chomsky Hierarchy in computational linguistics and time and space complexity in theoretical computer science. As a complement to prior theoretical work on the data and computational requirements of learning, this paper presents an empirical approach. We introduce a methodology for measuring validation error scaling with data and model size and test tasks in natural language, vision, and speech domains. We find that power-law validation error scaling exists across a breadth of factors and that model size scales sublinearly with data size, suggesting that simple learning theoretic models offer insights into the scaling behavior of realistic deep learning settings, and providing a new perspective on how to organize the space of problems. We measure the power-law exponent---the "steepness" of the learning curve---and propose using this metric to sort problems by degree of difficulty. There is no data like more data, but some tasks are more effective at taking advantage of more data. Those that are more effective are easier on the proposed scale. Using this approach, we can observe that studied tasks in speech and vision domains scale faster than those in the natural language domain, offering insight into the observation that progress in these areas has proceeded more rapidly than in natural language.
[ "Deep learning", "scaling with data", "computational complexity", "learning curves", "speech recognition", "image recognition", "machine translation", "language modeling" ]
https://openreview.net/pdf?id=B1g-X3RqKm
https://openreview.net/forum?id=B1g-X3RqKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJlyV2agg4", "S1gHjNOq3Q", "rJezimi_n7", "SJgUXa2QhQ" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544768551433, 1541207196991, 1541088153668, 1540767006348 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1329/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1329/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1329/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1329/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper attempts at ranking of tasks handled by deep learning methods based on learning curves. A main premise of the paper is \\\"fitting learning curves to a power law, and then sorting tasks by empirical estimates of exponents\\\". The idea of the paper is quite interesting.\\n\\nHowever, the paper makes some bold claims which are a bit distant from the empirical study it conducts. It is hard to line up the order in Table 2 with the Chomsky hierarchy. Also, for various tasks, various different deep models are used (ResNets for image classification, LSTMs for LM, and so on). I was not convinced that the beta parameter is model-agnostic.\\n\\nSimilar concerns are expressed by the reviewers, and they agree that the paper should address the criticism that they express in their feedback.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta Review\"}", "{\"title\": \"Interesting approach for a relatively unexplored issue\", \"review\": \"The paper proposes an empirical solution to coming up with a hierarchy of deep learning tasks or in general machine learning tasks. They propose a two-way analysis where power-law relations are assumed between (a) validation loss and training set size, and (b) the number of parameters of the best model and training set size. The first power-law exponent, \\\\beta_g, indicates how much can more training data be helpful for a given task and is used for ordering the hardness of problems. The second power-law exponent, \\\\beta_p, indicates how effectively does the model use extra parameters with increasing training set (can also be thought of as how good the model is at compression). From experiments across a range of domains, the authors find that indeed on tasks where much of the progress has been made tend to be ones with smaller \\\\beta_g (and \\\\beta_p). It's arguable as to how comparable these power-law exponents are across domains because of differences in losses and other factors, but it's definitely a good heuristic to start working in this direction.\", \"clarifications_needed\": \"(a) Why was full training data never used? The plots/analysis would have looked more complete if the whole training data was used, wondering why certain thresholds for data fraction were chosen.\\n(b) What exactly does dividing the training set into independent shards mean? Are training sets of different sizes created by random sampling without replacement from the whole training set?\\n(c) How exactly is the \\\"Trend\\\" line fitted? From the right sub-figure in Figure 1, it seems that fitting a straight line in only the power-law region makes sense. But that would require determining the start and end points of the power-law region. So some clarification on how exactly is this curve fitting done? 
For the record, I'm satisfied with the curve fitting done in the plots but just need the details.\", \"major_issues\": \"(a) Very difficult to know in Figure 3 (right) what s(m)'s are associated with which curve, except the one for RHNs maybe.\\n(b) Section 4.1: In the discussion around Figure 2, I found some numbers a little off. Firstly, using the left plot, I would say that at even 8 images the model starts doing better than random instead of <25 that's stated in the text. Secondly, the 99.9% classification error rate for top-5 is wrong, it's 99.5% for top-5 (\\\"99.9% classification error rate for top-1 and top-5\\\").\\n(c) Section 5: The authors use the phrase \\\"low dimensional natural language data\\\" which is quite debatable, to say the least. The number of possible sentences of K length with vocabulary |V| scale exponentially |V|^K where |V| is easily in 10K's most of the time. So to say that this is low dimensional is plain wrong. Just think about what is the (input, output) space of machine translation compared to image classification.\\n\\nTypos/Suggestions:\\n(a) Section 3: \\\"Depending the task\\\" -> \\\"Depending on the task\\\"\\n(b) Section 4.3: \\\"repeatably\\\" -> \\\"repeatability\\\"\\n(c) Figure 4: Specify number of params in millions. The plot also seems oddly big compared to other plots. Also, proper case the axis labels, like other plots. \\n(d) Section 4.2 and 4.2.1 can be merged because the character LM experiments are not discussed in the main text or at least not clearly enough for me. The values of \\\\beta_g seem to include the results of character LM experiments. So either mention the character LM experiments in more detail or just point to results being in appendix.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"ambitious goal and lack of approach\", \"review\": \"The paper provides empirical evidence that the generalization error scales inversely proportional to the log of number of training samples.\\n\\nThe motivation of the paper is well explained. A large amount of effort is put into experiments. The conclusion is consistent throughout.\\n\\nIt is a little unclear about the definition of s(m). From the definition at the end of Section 2, it is unclear what it means to fit the training data. It can mean reaching zero on the task loss (e.g., the zero-one loss) or reaching zero on the surrogate loss (e.g., the cross entropy). I assume models larger than a certain size should have no trouble fitting the training set, so I'm not sure if the curve, say in Figure 2., is really plotting the smallest model that can reach zero training error or something else.\\n\\nVarying the size of the network is also tricky. Most papers, including this one, seem to be confined by the concept of layers. Increasing the number of filters and increasing the number of hidden units are actually two very structured operations. We seldom investigate cases to break the symmetry. For example, what if the number of hidden units is increased in one layer while the number is decreased in another layer? What if the number of hidden units is increased for the forward LSTM but not the backward? Once we break the symmetry, it becomes unclear whether the size of the network is really the right measure.\\n\\nSuppose we agree on the measure of network size that the paper uses. It is nice to have a consistent theory about the network size and the generalization error. 
However, it does not provide any reason, or at least rule out any reason, as to why this is the case. For example, say if I have a newly proposed model, the paper does not tell me much about the potential curve I might get.\\n\\nThe paper spends most of the time discussing the relationship between the network size and the generalization error, but it does not have experiments supporting the hypothesis that harder problems are more difficult to fit or to generalize (in the paper's terminology, large beta_g and large beta_p). For example, a counter argument would be that the community hasn't found a good enough inductive bias for the tasks with large beta_g and beta_p. It is very hard to prove or disprove these statements from the results presented in the paper.\\n\\nThis paper also sends a dangerous message that image classification and speech recognition are inherently simpler than language modeling and machine translation. A counter argument for this might be that the speech and vision community has spent too much optimizing models on these popular data sets to the point that the models overfit to the data sets. Again these statements can be argued either way. It is hard to a scientific conclusion.\\n\\nAs a final note, here are the quotes from the first two paragraphs.\\n\\n\\\"In undergraduate classes on Algorithms, we are taught how to reduce one problem to another, so we can make claims about time and space complexity that generalize across a wide range of problems.\\\"\\n\\n\\\"It would be much easier to make sense of the deep learning literature if we could find ways to generalize more effectively across problems.\\\"\\n\\nAfter reading the paper, I still cannot see the relationships among language modeling, machine translation, speech recognition, and image classification.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting paper, but requires more work\", \"review\": \"The authors propose to measure the power-law exponent to sort natural language processing, speech and vision problems by the degree of their difficulty. The main idea is that, while in general model performance goes up for most tasks if more training data becomes available or for bigger model sizes, some tasks are more effective at leveraging more data. Those tasks are supposed to be easier on the proposed scale.\\n\\nThe main idea of the paper is to consider the bigger picture of deep learning research and to put different results on different tasks into an overall context. I think this is an exciting direction and I strongly encourage the authors to continue their research. However, the paper in its current state seems not quite ready to me. The write-up is repetitive at times, i.e., 'There is not data like more data.' appears 7 times in the paper. Also, some parts are very informal, e.g., the use of 'can't' instead of 'cannot'. Also, the sentence 'It would be nice if our particular proposal is adopted, but it is more important to us that the field agree on a satisfactory solution than that they adopt our particular proposal.', though probably correct, makes the reader wonder if the authors do not trust their proposal, and it would better be replaced by alternative suggestions or deleted. Also, the claim 'This may help explain why there is relatively more excitement about deep nets in speech and vision (vs. language modeling).' 
seems strange to me - deep nets are the most commonly used model type for language modeling at the moment.\\n\\nFurthermore, I believe that drawing conclusions about tasks with the proposed approach is an over-simplification. The authors should probably talk about difficulties of datasets, since even for the same task, datasets can be of varying difficulty. Similarly, it would have been nice to see more discussion on what conclusions can be drawn from the obtained results; the authors say that they hope that 'such a hierarchy can serve as a guide to the data and computational requirements of open problems', but, unless I missed this, it is unclear from the paper how this should be done.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
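Several of the reviews in this record ask how the power-law "trend" lines and the exponent beta_g are actually fitted to the measured learning curves. A standard way to do this, and a plausible reading of the procedure (not confirmed by the record itself), is ordinary least squares in log-log space over the presumed power-law region; the data points below are purely illustrative and are not taken from the paper.

```python
import numpy as np

# Illustrative learning-curve measurements (NOT the paper's numbers):
# training-set sizes and validation errors inside the power-law region.
train_sizes = np.array([1e4, 3e4, 1e5, 3e5, 1e6, 3e6])
val_errors = np.array([0.52, 0.41, 0.30, 0.23, 0.17, 0.13])

# If error(m) ~= alpha * m**beta_g, then log(error) is linear in log(m)
# with slope beta_g, so a straight-line fit in log-log space recovers it.
beta_g, log_alpha = np.polyfit(np.log(train_sizes), np.log(val_errors), deg=1)
alpha = np.exp(log_alpha)

# A steeper (more negative) beta_g means the task converts extra data into
# error reduction more effectively, i.e. it ranks as "easier" on this scale.
print(f"beta_g = {beta_g:.3f}, alpha = {alpha:.3f}")
```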
S1zlmnA5K7
Where Off-Policy Deep Reinforcement Learning Fails
[ "Scott Fujimoto", "David Meger", "Doina Precup" ]
This work examines batch reinforcement learning--the task of maximally exploiting a given batch of off-policy data, without further data collection. We demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are only capable of learning with data correlated to their current policy, making them ineffective for most off-policy applications. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space to force the agent towards behaving on-policy with respect to a subset of the given data. We extend this notion to deep reinforcement learning, and to the best of our knowledge, present the first continuous control deep reinforcement learning algorithm which can learn effectively from uncorrelated off-policy data.
[ "reinforcement learning", "off-policy", "imitation", "batch reinforcement learning" ]
https://openreview.net/pdf?id=S1zlmnA5K7
https://openreview.net/forum?id=S1zlmnA5K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJgnANUWeV", "HJe89GvsyE", "r1eYNjE9k4", "BkeBv0rmC7", "Hygun0mzAQ", "BJlW8CJGCm", "r1eG-1RlCQ", "BJeevRYjpX", "Hyglr0YiTm", "Bye92aKsTQ", "H1lXqTYiaQ", "SJxX5LHp3X", "rylyPcQ9h7", "HJeQ-p0F2Q", "SkgTBMocc7", "r1lpP08c5Q", "H1lHSV89q7", "rJxRRCmcqX", "Skxqda6YqX" ], "note_type": [ "meta_review", "official_comment", "comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment", "comment", "official_comment", "comment" ], "note_created": [ 1544803540440, 1544413838106, 1544338224917, 1542835805292, 1542762160165, 1542745672769, 1542672122002, 1542327895832, 1542327863848, 1542327730299, 1542327691138, 1541392011270, 1541188183488, 1541168379383, 1539121732917, 1539104356667, 1539101756950, 1539092181626, 1539067250050 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1328/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1328/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1328/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1328/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1328/Authors" ], [ "ICLR.cc/2019/Conference/Paper1328/Authors" ], [ "ICLR.cc/2019/Conference/Paper1328/Authors" ], [ "ICLR.cc/2019/Conference/Paper1328/Authors" ], [ "ICLR.cc/2019/Conference/Paper1328/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1328/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1328/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1328/Authors" ], [ "~David_Schneider1" ], [ "~David_Schneider1" ], [ "ICLR.cc/2019/Conference/Paper1328/Authors" ], [ "~David_Schneider1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes batch-constrained approach to batch RL, where the policy is optimized under the constrain that at a state only actions appearing in the training data are allowed. An extension to continuous cases is given.\\n\\nWhile the paper has some interesting idea and the problem of dealing with extrapolation in RL is important, the approach appears somewhat ad hoc and the contributions limited.\\n\\nFor example, the constraint is based on whether (s,a) is in B, but this condition can be quite delicate in a stochastic problem (seeing a in s *once* may still allow large extrapolation error if that only observed transition is not representative). Section 4.1 gives some nice insights for the special finite MDP case, but those results are a little weak (requiring strong assumption that may not hold in practice) --- an example being the requirement that s' be included in data if (s,a) is in data and P(s'|s,a)>0 [beginning of section 4.1].\\n\\nIn contrast, there are other more robust and principled ways, such as counterfactual risk minimization (CRM) for contextual bandits (http://www.jmlr.org/papers/v16/swaminathan15a.html). For MDPs, the Bayesian version of DQN (the cited Azizzadenesheli et al., as well as Lipton et al. at AAAI'18) can be used to constrain the learned policy as well, with a simple modification of using the CRM idea for bandits. 
Would these algorithms be reasonable baselines?\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Nice work with potential, but contributions need to be strengthened\"}", "{\"title\": \"Re: Relationship with previous work?\", \"comment\": \"It is true that specific/adversarial counter-examples exist for most forms of function approximation. However, in this work we examine environments where deep Q-learning algorithms have already been shown to perform well on. We show that given the same dataset, deep off-policy algorithms can perform very differently depending on how close their policy is to the policy which generated the dataset. This is an important insight because we show that off-policy algorithms, which would otherwise perform well, can now fail. Furthermore, we demonstrate how this issue can be corrected and introduce a practical algorithm.\"}", "{\"comment\": \"Off-policy Q learning is already known not to converge even with linear function approximation [1]. What is the new insight and the relationship between the example listed here, and the previous work?\\n\\nReference\\n[1] Residual Algorithms: Reinforcement Learning with Function Approximation\", \"title\": \"Relationship with previous work?\"}", "{\"title\": \"Re: Minor Clarification\", \"comment\": \"Hi again, glad the response was helpful!\\n\\n(1) The inverse weighting happens by taking the mean in the KL divergence term (line 150) rather than the sum. (2) Good catch! Yes, the clipping only occurs during inference, the VAE training itself is not modified. We have updated the supplementary to reflect this.\"}", "{\"comment\": \"Thanks a lot for posting the code snippet and the extremely detailed answer - I really appreciate and found it incredibly helpful!\\n\\nI had one minor follow-up question. When computing the VAE loss, in the code sample you provided you simply added the reconstruction loss with the KL loss, but in the paper, you mentioned that the weighting of the KL term is inversely proportional to the size of the latent dimension. Also, it seems like the latent vectors are only clipped when sampling from the decoder directly (not during training). Could you please confirm? Thanks again!\", \"title\": \"Minor Clarification\"}", "{\"title\": \"Re: Questions on implementation details\", \"comment\": \"Hi, thanks for your interest in the paper, and for catching a few typos! When the paper is de-anonymized we will include a GitHub repository in the paper. Until then, we have included the core algorithm code in an anonymous pastebin ( https://pastebin.com/UTaiR5ZZ ), which should speed up your implementation.\\n\\n> In the algorithm description (Algorithm 1 in the paper), could you explain how the perturbation model is updated in more detail. Equation 10 is equally confusing. I'm trying to understand whether the actions should come from a sampled mini-batch or if they should come from the VAE decoder when (1) passing them to the perturbation model to compute perturbations and (2) when actually perturbing the actions before passing them to the critic network.\\n\\nThe actions can come from anywhere. Either the mini-batch, randomly generated, or from the VAE. During the target update and during test time, the perturbations are applied to actions generated by the VAE, so it makes the most sense to train them on the same distribution. \\n\\nEquation 10 describes the deterministic policy gradient algorithm (DPG). 
The perturbation model is updated so that the perturbed actions maximize the critic. Normally with DPG, there is a policy pi(s) which is trained to maximize Q(s,pi(s)). In our method, instead of pi(s) which can output any action, we have xi(s,a) + a, where xi(s,a) can only output in a limited range. \\n\\n> Could you also give me some more intuition on why a perturbation model is trained? It seems as though we could just leave the perturbation model out and just do Q-learning using the VAE, critic, and value networks. \\n\\nYou could, but as discussed in the paper, using a perturbation model gives an increase in flexibility in the action selection. For example, if the VAE outputs 10 similar actions, allowing perturbations gives more diversity in actions that could be possibly selected. Another case is where the entire action space has been adequately covered, and any action could be viably generated by the VAE. To see the highest valued action would likely require sampling from the VAE many times. Allowing perturbations avoids this issue. \\n\\n> Finally, equation (11) implies that the value function is trained on a mini-batch of next states instead of current states. Is there a reason why the value function loss isn't just over the current states (s) instead of the next states (s')?\\n\\nIn principle it doesn't matter, but the code does in fact train over s, as shown in Algorithm 1. Thanks for catching this, along with the other typos. We have uploaded a corrected version.\"}", "{\"comment\": \"I really enjoyed reading this paper! I had a few questions about some implementation details (I wanted to give the algorithm a try myself), and I was hoping that you could help me out.\\n\\nIn the algorithm description (Algorithm 1 in the paper), could you explain how the perturbation model is updated in more detail. Equation 10 is equally confusing. I'm trying to understand whether the actions should come from a sampled mini-batch or if they should come from the VAE decoder when (1) passing them to the perturbation model to compute perturbations and (2) when actually perturbing the actions before passing them to the critic network.\\n\\nCould you also give me some more intuition on why a perturbation model is trained? It seems as though we could just leave the perturbation model out and just do Q-learning using the VAE, critic, and value networks. \\n\\nFinally, equation (11) implies that the value function is trained on a mini-batch of next states instead of current states. Is there a reason why the value function loss isn't just over the current states (s) instead of the next states (s')?\\n\\nAlso I wanted to point out a few typos (although I could be mistaken):\\n\\n-Equation (8) should have a (+) sign for the KL term. \\n\\n-Equation (16) should have a factor of 0.5 outside the sum and it should be the log variance, not the log standard deviation.\\n\\n-The line in Algorithm 1 right before the \\\"Update VAE\\\" line should have the VAE model take the mini-batch of actions as input along with the mini-batch of states.\", \"title\": \"Questions on implementation details\"}", "{\"title\": \"General Response and Overview\", \"comment\": \"We would like to take this opportunity to thank each reviewer again. We found that the quality of the reviews was high and the reviewers made insightful commentary, each with different flavors with respect to the paper. 
As this paper introduces the first analysis into the batch setting with deep function approximation, it makes sense that there would be small issues in clarity, and as displayed by our lengthy related works, there are many possible interpretations to where, and how, our work is insightful or significant. We have carefully responded to each reviewer and have made many small updates throughout the paper to improve the clarity and dispel any confusion about the task as well as our approach.\\n \\nAs mentioned in our response to Reviewer 2, almost all significant issues with clarity were from Section 4.2 which we have re-written to better justify the design choices made when approximating the batch-constraint in a continuous setting. We have expanded the introduction on extrapolation error to be more explicit on its origin. Additionally, although no reviewer took issue with the original title, we have taken the opportunity to modify the title to be more informative towards the contents of the paper. We believe these changes address the reviewers\\u2019 concerns and greatly improve the quality of the paper. We will continue to edit the paper before the deadline and our happy to respond to further comments, questions or concerns.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We would like to thank the reviewer for their time, feedback and thoughts. A concern presented by the reviewer was limited technical contribution. We would like to re-emphasize our contribution towards the introduction and analysis of extrapolation error in off-policy learning. Our paper provides important insight into the working of deep reinforcement learning with finite amounts of data, or the purely \\u201cexploitative\\u201d setting, as well as imitation learning with noisy demonstrations.\\n \\n> It makes intuitive sense that the proposed approach works well as long as we only encounter state-action pairs that are closed to one of the state-action pairs in the batch. However, I do not expect that this is always the case. The proposed method is to simply choose the closest action in the batch. Then why does the proposed approach perform well?\\n \\nIn regions with no data at all, there is no possible mechanism for recovery because the agent, and any possible agent, will not have trained in this region. Taking the action of the closest state-action pair is an oversimplification of our method, which likely stems from the lack of clarity in our original version of Section 4.2, which has been re-written. Our BCQ algorithm produces deep network policies that can be evaluated across the entire state space and considers both the similarity of the action to the batch as well as the expected value of the action. \\n\\nThat being said, it is important to take actions which are \\u201cclose\\u201d in the Bellman update to minimize the extrapolation error in the value estimate. Otherwise, as shown in Section 3.2, there can be deterioration in performance even in regions of certainty. That is, a non-batch-constrained off-policy reinforcement learning algorithm may fail if exposed to any uncertain regions during training. Our algorithm performs well by reducing the error into the system. Informally, our value estimates are more accurate. 
\\n \\nIn experiments where BCQ may take actions leading it to unseen states, such as in the experiments with an expert behavioral policy without exploration, we find that there is sufficient generalization to regions with less data to still perform well, while stabilizing the value function. \\n\\nFor future work, an interesting extension of the algorithm would be to bias it towards regions of certainty, through an optimism-under-certainty heuristic, the polar-opposite to many exploration algorithms. This occurs implicitly in our algorithm as mimicking previously taken actions is more likely to lead to regions of certainty, but could be enforced more strongly. \\n \\n> A key assumption in the discrete case is that whole episodes are in the batch. This is rather restricting, because in many applications, it is infeasible to collect a whole episode, and parts of many episodes are collected from many agents.\\n \\nThe data doesn't need to be collected in episodic fashion, rather, that there is sufficient coverage. Collecting data in episodes is one way to ensure this, but not specifically required. This is a weaker assumption than assumptions necessary for standard Q-learning, as we no longer require visitation over all possible state-action pairs.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We would like to thank the reviewer for thorough review and constructive feedback. The issues with clarity largely stemmed from Section 4.2, which we agree with the reviewer was not as clear as it could be. This section has been re-written and will hopefully satisfy the reviewer. We have removed superfluous details and simplified the presentation of Section 4.2. We believe these changes better streamlines the introduction of the (unchanged) algorithm, and better justifies some of the algorithmic choices. Other small adjustments to notation and clarity have been made throughout the paper, with regards to both your comments as well as the other reviewers.\", \"further_responses_to_comments\": \"> Page 6: \\u201cTheorem 1 implies with access to only a subset of state-action pairs in the MDP, the value function\\u2026 This suggests batch-constrained policies are a necessary tool for combating extrapolation bias.\\u201d This might be true, but it does not follow from the Theorem 1 as it only applies to the batch Bellman operator and not the standard one used in most methods.\\u201d\\n \\nThe claim we intended to make was not that batch-constrained policies are necessary, but rather suggest that they are likely, or potentially, necessary. We have clarified this in the paper.\\n\\n> Figure 4: where is \\u201cTrue value\\u201d curve on the plots?\\n\\nInitially we left out the true value curve to allow for a larger figure, putting more emphasis on the results. We have re-added the true value curve.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We would like to thank the reviewer for their helpful comments and positive feedback. We have added an experimental result to the supplementary material to distinguish ourselves further from imitation learning algorithms and made several clarifying statements and adjustments based on your recommendations.\\n \\nOne con listed was missing comparisons against other state of the art imitation learning algorithms which are robust to noisy demonstrations. 
However, to the best of our knowledge, we are not aware of any which satisfy the batch setting, where no further data is collected, while also setting no requirements on data being labelled expert vs. non-expert. One algorithm which does satisfy these conditions is [1], but only operates in with discrete actions, making it weak baseline in a continuous control benchmark, where independent discretization would be required. If there was a particular algorithm you had in mind when writing the review, we would be happy to include it in the final paper.\\n \\nWe also note the line between off-policy and robust imitation is fairly thin. For example, in the tabular setting, our approach can learn from the set of data that includes all state-action pairs, similarly to off-policy learning. All state-action pairs, of course, also includes expert actions and could be considered a robust imitation learning algorithm as well. An expert behavioral policy is necessary for the data collection process to be sufficiently interesting, as a purely randomly policy doesn\\u2019t cover enough of the state space for it to be possible to learn meaningful behavior. To further demonstrate the effectiveness of our algorithm as an off-policy algorithm, we included results with a purely random behavioral policy on a pendulum and reacher task in the supplementary material B, where the state space can be sufficiently covered by taking random actions. \\n \\nFurther Responses to Questions/Comments: \\n\\n> Could one obtain a similar effect to BCRL by simply initializing the value estimates pessimistically?\\n \\nEssentially yes, especially in the tabular setting, however, this would slow learning as it may take many updates to \\u201cwash away\\u201d initial negative bias. Furthermore, in a function approximation setting, maintaining an optimistic or pessimistic initialization over many timesteps is impractical and often implausible. Finally, for a fixed, non-batch-constrained policy, this also gives biased estimates. Introducing the notion of batch-constrained gives some understanding to when the policy would be biased vs. when it wouldn't.\\n \\n> Sec 4.1: Since B is a set of (s, a, s', r) tuples, what does it mean for a state s' to be \\\"in B\\\"? Similar question for state-action tuples (s, a).\\n \\ns' in B is shorthand for (s, a, s', r) in B for some s, a, r. We have added a clarifying sentence in the background.\\n \\n> As you note in the appendix, the construction in Sec 4.1 is essentially creating a new MDP that contains only the transitions that occur in the training data. I'd suggest stating as much in the main paper for intuition.\\u201d\\n \\nAt your recommendation we have added this to the main paper.\\n \\n> * Sec 4.2 / 5: The perturbation constraint \\\\Phi is set to 0.05 in the experiments. Since the actions in these control problems are vectors, what does a scalar constraint correspond to? How is the constraint enforced during learning?\\n \\nThis correspond to \\\\Phi * I * tanh() following the final layer. We have added a clarifying sentence in the supplementary.\", \"references\": \"[1] Gao, Yang, Ji Lin, Fisher Yu, Sergey Levine, and Trevor Darrell. \\\"Reinforcement learning from imperfect demonstrations.\\\" arXiv preprint arXiv:1802.05313 (2018).\"}", "{\"title\": \"Solid approach to applying RL algorithms to batch imitation learning from noisy demonstrations\", \"review\": \"The overall approach is sound. 
The problem of extrapolation is intuitively obvious, but not something I had thought about before. I think typically exploration would correct the problem since states with over-estimated values would become more likely to be reached, giving an opportunity to get a better estimate. \\n\\nThe learning setting is closer to imitation learning than to what I would call RL, since the BCRL approach essentially avoids extrapolation error by ignoring the parts of the problem that are not represented in the training data. The well-known problem with behavior cloning is compounding errors once the agent strays into areas of the state space that are far from the training data. To me \\\"off-policy RL\\\" implies that the goal is to learn a complete policy from off-policy data. I think the \\\"competitors\\\" to which BCRL should be compared are imitation learning algorithms address noisy demonstrations, and not so much off-policy RL algorithms. It would also be interesting to see the generalization performance of BCRL outside of its training data.\\n\\nThe BCRL idea might be applicable in a conventional RL setting as well, since the initial stages of learning could be subject to a similar extrapolation error until there has been enough exploration. A comparison to something like TRPO in this setting would be interesting.\\n\\nThe paper is well-written with good coverage of related literature. There are a few points where the technical content is imprecise, which I note below. \\n\\nComments / Questions:\\n* Could one obtain a similar effect to BCRL by simply initializing the value estimates pessimistically?\\n* Sec 4.1: Since B is a set of (s, a, s', r) tuples, what does it mean for a state s' to be \\\"in B\\\"? Similar question for state-action tuples (s, a).\\n* As you note in the appendix, the construction in Sec 4.1 is essentially creating a new MDP that contains only the transitions that occur in the training data. I'd suggest stating as much in the main paper for intuition.\\n* Sec 4.2 / 5: The perturbation constraint \\\\Phi is set to 0.05 in the experiments. Since the actions in these control problems are vectors, what does a scalar constraint correspond to? How is the constraint enforced during learning?\\n* What are the distance functions D_S and D_A?\", \"pros\": [\"A good approach to applying RL methods in the \\\"imitation-like\\\" setting. I've seen similar things attempted before, but this method makes more sense.\"], \"cons\": [\"The learning setting is more like \\\"fuzzy\\\" behavior cloning from noisy data than off-policy RL. 
Experimental comparison against more-sophisticated imitation learning approaches is missing.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting ideas, but clarity must be improved\", \"review\": \"Authors consider a problem of off-policy reinforcement learning in a setting explicitly constrained to a fixed batch of transitions.\\nThe argument is that popular RL methods underperform significantly in this setting because they fail to account for extrapolation error caused by inevitably sparse sampling of the possible action-state space.\\nTo address this problem, authors introduce the notion of batch-constrained RL which studies policies and associated value functions only on the state-space covered by the available training data.\\nFor practical applications a deep RL method is introduced which enables generalisation to the unseen states and actions by the means of function approximation.\\n\\nI find the problem studied in the paper very important. It is indeed strongly connected to the idea of imitation learning which has been studied previously, but I like the explicit point from which authors see the problem.\\nThe experimental results seem quite appealing to justify use of the proposed approach.\\n\\nHowever, on the clarity side the paper should be improved before publication.\\n\\nThe interplay between action generating VAE G_w(s) and \\\\pi is unclear to me.\\nFirst, what does it mean that G(s) is trained to minimise the distance D_A?\\n\\nIf G(s) is a VAE, then it is trained to minimise the corresponding variational lower bound, how is minimisation of the distance over actions is incorporated here? And what exactly is this distance?\\nSimilarly, what does \\u201cD_S will be defined by the implicit distance induced by the function approximation\\u201d exactly mean?\\n\\nOther comments / questions:\", \"page_6\": \"\\u201cTheorem 1 implies with access to only\\na subset of state-action pairs in the MDP, the value function\\u2026 This suggests batch-constrained policies\\nare a necessary tool for combating extrapolation bias.\\u201d\\nThis might be true, but it does not follow from the Theorem 1 as it only applies to the batch Bellman operator and not the standard one used in most methods.\", \"corollary_1_and_2\": \"What is Q^* here?\\n\\nPage 7, first sentence: should there be if A_s, e != \\\\emptyset?\", \"epsion_batch_constrained_policy_iteration\": \"would the beam search actually maximize Q function? This needs to be proven or at least discussed.\\n\\nI don\\u2019t see how is epsilon used in the iteration scheme. 
This needs to be clarified.\", \"equation_11\": \"the subscript of the max operator looks weird, should there be just a_i?\", \"figure_4\": \"where is \\u201cTrue value\\u201d curve on the plots?\\n\\nThe notation \\\\pi(s, a; \\\\Phi) used throughout the paper is confusing and can be interpreted as a joint distribution over states and actions.\\n\\nAs I said, currently the paper does not appear to be easy to follow to me and even if it does contain important ideas, I believe they must be communicated in a clearer way.\\nI am eager to revise my evaluation if authors make substantial effort to improve the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"How well and why it works at states that are far from any states in the batch?\", \"review\": \"This paper studies extrapolation error in off-policy batch reinforcement learning (RL), where the extrapolation error refers to the overestimation of the value for the state-action pairs that are not in the training data.\\n\\nThe authors propose batch-constrained RL, where the policy is optimize under the constraint that, at each state, only those actions that have been taken in that state in the training data are allowed. This is then extended to continuous space, where it allows only the state-action pairs that are close to a state-action pair in the training data. When there is no such action for a given state, the action that is closet to a feasible action at that state is selected.\\n\\nIt makes intuitive sense that the proposed approach works well as long as we only encounter state-action pairs that are closed to one of the state-action pairs in the batch. However, I do not expect that this is always the case. The proposed method is to simply choose the closest action in the batch. Then why does the proposed approach perform well? Is it because the experiments are performed under rather deterministic settings? How often are no state-action pairs found in the neighbor? Is there any mechanism for recovering from \\\"not in the batch\\\"?\\n\\nThe paper would be much stronger if it study this challenge of \\\"not in the batch\\\" more in depth. Technical contributions in the present paper are rather limited.\\n\\nA key assumption in the discrete case is that whole episodes are in the batch. This is rather restricting, because in many applications, it is infeasible to collect a whole episode, and parts of many episodes are collected from many agents. Although this assumption is stated, it would be nice to emphasize by also stating that the theorems do not hold when this assumption does not hold. The assumption becomes less important for continuous case, because of approximation. It might be interesting to study the performance of the proposed approach when the assumption does not hold in the continuous case.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Re: What about MDP's where the batch doesn't show all the next states you get to after an action?\", \"comment\": \"Thank you for your interest in our paper! From what is stated at a theoretical level, Lemma 1 and Theorem 1 are not broken by non-determinism.\\n \\nThe Bellman operator assumes access to the underlying MDP as it includes an expectation over the next state. 
The batch Bellman operator, which we introduce, is an extension which masks out any unseen state-action pairs by setting their value to infinity. As a result, the batch Bellman operator can still access the true expectation, even with only a sample of the possible transitions in the batch, allowing Theorem 1 and Lemma 1 to hold.\\n \\nThat being said, I believe the point you are making has to do with the more practical scenario, say with a batch-constrained tabular Q-learning-- what happens if we only have a sample of the possible transitions, without access to the true expectation? Our approach makes the same practical assumption as other off-policy algorithms such as Q-learning and KBRL, which is that the samples you do have for (s,a) are representative of the true MDP. In this case, as you have stated, there is no guaranteed convergence to the true Q^pi(s,a), for any off-policy algorithm, unless the samples you have are indeed representative, e.g. the environment is deterministic or there is infinite data. And of course, we demonstrate the effectiveness of our approach in more complex settings in Section 5.\"}", "{\"comment\": \"I really appreciate this paper. It clearly explains of a lot of issues with off-policy I've been realizing recently.\\n\\nI think there is an issue with Theorem 1 and the proof of Lemma 1 - in my mind, it is fixed by some assumption about the environment. The issue I see is that the batch of data, for the state/action pairs we get to observe in the batch, may mis-represent the underlying MDP restricted to those states. \\n\\nFor instance, suppose there is a (s,a) pair where the action sometimes leads to 'success' and sometimes 'failure' - the terminal state. Like the state 's' could be the robot faces a pit, and the action 'a' could be a jump to get over it. Most of the time perhaps, the action leads to failure - the terminal state, but we were unlucky in our batch and we saw success. \\n\\nIf you assume they next state is deterministic based on the action - which may be reasonable for continuous robot control - where there are some precise values that always lead to success with the jump - I think you could get around this, but if not - like in Theorem 1 - I don't see how Q^{\\\\pi}_{B}(s,a) = Q^{\\\\pi}(s,a), the first is the Q-value we learn from the batch, where we never see a failure after (s,a) so our Q will be high- but the latter is the true Q-value over the whole MDP - where we often see failure, and the Q would be low.\\n\\nLikewise in the proof of Lemma 1 - I think there is one natural new MDP M_{B} to define based on the data you observed, but you need to estimate the transitions from that data - and you might miss things. Like if you see\\n\\n(state, action, next_state):\\n(s0, a0, s1)\\n(s0, a0, s1)\\n(s0, a0, s2)\\n\\nyou'd naturally set the transition P(s1 | s0, a0) = 2/3 and P(s2 | s0, a0) = 1/3 - but this might not be the true transitions. The fixed point you converge to will depend on the MDP you derive from your observations.\", \"title\": \"What about MDP's where the batch doesn't show all the next states you get to after an action?\"}", "{\"comment\": \"I follow now\", \"title\": \"Thanks!\"}", "{\"title\": \"Re: question on example of extrapolation error in simple example\", \"comment\": \"Hi, thanks for your question. Indeed, the value is usually dependent on the kernel function, but this weighting is normalized over all examples of the corresponding action (equation 5). 
With only one example of a1 the kernel term will be reduced to 1.\"}", "{\"comment\": \"I'm trying to understand equation (6) in section 3.1. Why is Q(\\u00b7, a1) = 1 + \\u03b3Q(s1, a0), that is, why do both Q(s0, a1) and Q(s1,a1) have the same value? Doesn't equation (4), given the sample (s0,a1,r=1,s1) in the batch, define\\n \\nQ(s0, a1) = k(s0,s0) * (1 + \\u03b3 V(s1))\\n\\nand \\n\\nQ(s1, a1) = k(s1,s0) * (1 + \\u03b3 V(s1))\\n\\nso that it would depend on the kernel function k? It seems a natural kernel might be one where k(s0,s0)=1 but k(s1,s0) ~ 0.\", \"title\": \"question on example of extrapolation error in simple example\"}" ] }
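The exchange just above turns on how KBRL-style kernel estimates normalize their weights over the stored transitions for an action, so a single stored example of an action dominates the estimate at every query state. The toy sketch below illustrates that normalization effect; the Gaussian kernel, bandwidth, reward, and state values are all illustrative assumptions, not the paper's equations reproduced verbatim.

```python
import numpy as np

def gaussian_kernel(s, s_i, bandwidth=0.5):
    # Illustrative similarity kernel between a query state and a stored state.
    return np.exp(-np.sum((s - s_i) ** 2) / (2 * bandwidth ** 2))

# Tiny illustrative batch: a single transition that used action a1.
batch_a1 = [(np.array([0.0]), 1.0, np.array([1.0]))]   # (state s0, reward, next state s1)

def q_hat(s, transitions, value_fn, gamma=0.99):
    # Weights are normalized over all stored transitions for this action, so with
    # a single stored example the weight is 1 for every query state.
    weights = np.array([gaussian_kernel(s, s_i) for s_i, _, _ in transitions])
    weights = weights / weights.sum()
    returns = np.array([r + gamma * value_fn(s_next) for _, r, s_next in transitions])
    return float(weights @ returns)

value_fn = lambda s: 0.0   # placeholder state-value estimate
q_at_s0 = q_hat(np.array([0.0]), batch_a1, value_fn)
q_at_s1 = q_hat(np.array([1.0]), batch_a1, value_fn)
# q_at_s0 == q_at_s1: the lone sample of a1 propagates its value everywhere,
# which is the extrapolation behaviour discussed in the question and reply above.
```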
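The other implementation thread in this record concerns how BCQ selects actions at evaluation time: sample several candidate actions from a state-conditioned VAE decoder (with clipped latents, per the clarification exchange), nudge each by a perturbation model whose output is bounded by Phi (the authors mention Phi * tanh() on the final layer), and execute the candidate the critic values most. The sketch below illustrates only that selection step with untrained stand-in networks and an assumed clip range; it is an assumption-laden illustration, not the authors' released code, which they point to separately.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, LATENT_DIM = 8, 2, 4
PHI, N_CANDIDATES = 0.05, 10

# Untrained stand-ins for BCQ's trained generator, perturbation model, and critic.
decoder = nn.Sequential(nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.ReLU(),
                        nn.Linear(64, ACTION_DIM), nn.Tanh())
perturb = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                        nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                       nn.Linear(64, 1))

def select_action(state):
    """Batch-constrained greedy selection: candidates come from the generative
    model of the batch, so the argmax stays close to previously seen actions."""
    with torch.no_grad():
        s = state.unsqueeze(0).repeat(N_CANDIDATES, 1)
        # Clipped latent samples (clip range assumed here), decoded into actions
        # resembling those stored in the batch for this state.
        z = torch.randn(N_CANDIDATES, LATENT_DIM).clamp(-0.5, 0.5)
        candidates = decoder(torch.cat([s, z], dim=1))
        # Bounded adjustment xi(s, a) in [-PHI, PHI] adds a little flexibility.
        candidates = candidates + PHI * perturb(torch.cat([s, candidates], dim=1))
        # Pick the perturbed candidate with the highest estimated value.
        q_values = critic(torch.cat([s, candidates], dim=1)).squeeze(-1)
        return candidates[q_values.argmax()]

action = select_action(torch.zeros(STATE_DIM))
```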
BJeem3C9F7
Pix2Scene: Learning Implicit 3D Representations from Images
[ "Sai Rajeswar", "Fahim Mannan", "Florian Golemo", "David Vazquez", "Derek Nowrouzezahrai", "Aaron Courville" ]
Modelling 3D scenes from 2D images is a long-standing problem in computer vision with implications in, e.g., simulation and robotics. We propose pix2scene, a deep generative-based approach that implicitly models the geometric properties of a scene from images. Our method learns the depth and orientation of scene points visible in images. Our model can then predict the structure of a scene from various, previously unseen view points. It relies on a bi-directional adversarial learning mechanism to generate scene representations from a latent code, inferring the 3D representation of the underlying scene geometry. We showcase a novel differentiable renderer to train the 3D model in an end-to-end fashion, using only images. We demonstrate the generative ability of our model qualitatively on both a custom dataset and on ShapeNet. Finally, we evaluate the effectiveness of the learned 3D scene representation in supporting a 3D spatial reasoning.
[ "Representation learning", "generative model", "adversarial learning", "implicit 3D generation", "scene generation" ]
https://openreview.net/pdf?id=BJeem3C9F7
https://openreview.net/forum?id=BJeem3C9F7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1l17Q10J4", "r1gDLpRpkE", "HJxNUC4iJN", "HygY_sl9yE", "rye_zVPdyN", "Skg8XI2b14", "rJgcCZE1k4", "HkxuniHCRX", "H1lqQQBRRQ", "rylarlqaAX", "H1g9CKHqCQ", "B1l2AzuX0m", "HJxxaf_70Q", "BJxwzGBJ0X", "SJgja6-aTQ", "B1epOpZ6TQ", "S1gAUXuopQ", "Bkl_mZ_iTQ", "ByxFX5Yi3X", "HJejoMW9hX", "rygmHzvtnm" ], "note_type": [ "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544577814723, 1544576335297, 1544404556015, 1544321905323, 1544217616219, 1543779870127, 1543614929796, 1543556015769, 1543553826226, 1543508036665, 1543293394305, 1542845139960, 1542845111535, 1542570511403, 1542426050724, 1542425973070, 1542320981962, 1542320416024, 1541278241259, 1541178019241, 1541136955460 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1327/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1327/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1327/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1327/Authors" ], [ "ICLR.cc/2019/Conference/Paper1327/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1327/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1327/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Thanks. No misunderstanding here.\", \"comment\": \"Thanks for the note. I see your point and I don't think there is any misunderstanding here. As I said: your 'single unrelated images' are often different viewpoints of the same object. Though the rotation and translation of the object vary, because the objects are so simple, and there are so many training images, it's very likely that two images are essentially different views of almost-the-same scene. Theoretically every scene is unique; in practice, different scenes might be too similar to tell perceptually, given the small degrees of freedom. In this case, the problem setup is then very similar to the related work I listed.\"}", "{\"title\": \"Clarification regarding the concerns\", \"comment\": \"Dear Reviewer 3, thank you again for your time and comments.\\n\\nWe want to clarify what we interpret to be a fundamental misunderstanding regarding our contributions and those of prior art.\\n\\nWorks like those of Yan et al., Eslami et al. train on _multiple images_ of the same scene and Kanazawa et al. and others use various forms of weak supervision. We, on the other hand, only present a _single view_ per unique scene configuration in an unsupervised manner.\\n\\nDuring training, we present our model with multiple views of similar (but never identical!) 
scenes. Even for scenes with simple objects, object placement and orientation is always different (i.e., randomly rotated and translated) per image. This is a fundamental difference, and significantly changes the problem landscape and difficulty.\\n\\nMost prior works you referred to consider multiple views of each individual scene. We, on the other, learn to infer 3D structure from single samples of unique 3D scenes. We do so in a _completely unsupervised_ fashion.\\n\\nFrom this, we are able to then synthesize, e.g., many novel views of a fixed scene configuration. We also demonstrate the ability to (smoothly) explore the space of possible scene configurations. We also introduce mental rotation task as a benchmark to evaluate 3D understanding based models irrespective of the underlying 3D representation used.\\n\\nWe are happy to further discuss and contextualize our contribution with respect to these works you\\u2019ve raised across your replies during our review discussion, including in the case of techniques where reproducibility would be very challenging. Given our clarification, we feel the case for differentiation from prior art, as well as practicality and novelty, become clear.\\n\\nMoving forward we plan on scaling our method to datasets (like ImageNet) where, for every 3D scene, only a single image is available. Texture variation is one of the key directions of future development needed to make this jump, we feel.\\n\\nAs the other two reviewers agree, we feel that the challenge of this particular problem (single-view, many scenes; purely unsupervised) and our results are promising and exciting for the community. Moreover, we hope the direction we present will open up avenues of future work towards robust, unsupervised scene understanding and reconstruction.\", \"refs\": \"Eslami et al, Neural scene representation and rendering. Science, 2018.\\nKanazawa 2018 - Learning Category-Specific Mesh Reconstruction\\nYan et al, perspective transformer nets , 2016\"}", "{\"metareview\": \"This paper proposes an approach for learning to generate 3D views, using a surfel-based representation, trained entirely from 2D images. After the discussion phase, reviewers rate the paper close to the acceptance threshold.\\n\\nAnonReviewer3, who initially stated \\\"My second concern is the results are all on synthetic data, and most shapes are very simple\\\", remains concerned after the rebuttal, stating \\\"all results are on synthetic, simple scenes. In particular, these synthetic scenes don't have lighting, material, and texture variations, making them considerably easier than any types of real images.\\\"\\n\\nThe AC agrees with the concerns raised by AnonReviewer3, and believes that more extensive experimentation, either on more complex synthetic scenes or on real images, is needed to back the claims of the paper. Particularly relevant is the criticism that \\\"While the paper is called \\u2018pix2scene\\u2019, it\\u2019s really about \\u2018pix2object\\u2019 or \\u2018pix2shape\\u2019.\\\"\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview\"}", "{\"title\": \"Unchanged\", \"comment\": \"Thanks for the follow-up. I'm still not convinced. My main argument is the results are not strong, as the test scenes are very simple. I'd like to see any of the following:\\n\\n1) results on real data, similar to those used by Kanazawa et al, or simpler than those but still reasonably realistic. 
I understand they used supervision, but only in 2D. If your model can achieve comparable (or slightly worse) results without supervision, that'd be a clear contribution.\\n\\n2) clear explanation of why your model works better than Rezende et al. I understand the difficulty in reproducing their results without a released implementation, but as they have achieved good results without supervision more than two years ago, at least a clear explanation with some qualitative demonstration would be necessary.\\n\\n3) You said \\\"our scenes are simple, but to our best knowledge, we are the first to actually do this reconstruction from single unrelated images\\\". This is not really true, as your 'single unrelated images' are often different viewpoints of the same object. In this case, how would you compare with GQN (Eslami et al, Neural scene representation and rendering), which is also 'implicit'? I don't want to bring up new related work at this moment, but I'm concerned that your recent comment is claiming too much.\\n\\nEslami et al, Neural scene representation and rendering. Science, 2018.\\n\\nI think the concern on how the system works is very relevant and important, because your results are mainly on datasets of just a few objects, and they degrade noticeably on slightly more complicated scenes (Fig 16). With the current set of results, I don't see how this system can be practically useful.\"}", "{\"title\": \"Encouraging further discussion\", \"comment\": \"Dear Reviewer 3,\\n\\nWe were wondering if you had any time to consider our response (the one below this post, titled \\\"Details about Kanazawa (2018)\\\"). If we forgot to address any of your concerns, please let us know.\\n\\nWe would like to emphasize that the scope of this paper is the scene geometry reconstruction. Dealing with other scene properties like material and lights are part of our ongoing work. Our current model handles arbitrary geometric complexity as we demonstrated in the paper going from simple shape primitives to complex scenes (such as the ones in Figure 6 and 16).\"}", "{\"title\": \"Thank you and yes, will try to update\", \"comment\": \"Dear Reviewer 2,\\n\\nAwesome, thank you so much!\\n\\nAnd thanks for the correction. When we prepare the camera-ready version, we'll remove the extra bracket. And as for the depth map difference: yeah, good idea. How'd you imagine that - e.g. have an additional row of depth map images on the right where the value of each pixel = ground truth - reconstructions? If so, no problem at all. We just have to wait until after the review period because we currently can't edit the paper.\", \"and_could_we_kindly_ask\": \"did you change the rating somewhere in the reviewer console? Because for us, your original rating still stands at \\\"3\\\".\\n\\nCheers and good Sunday\"}", "{\"title\": \"Details about Kanazawa (2018)\", \"comment\": \"Dear Reviewer 3,\\n\\nThanks for your time again. We\\u2019d like to respond to your remark about other work. We don\\u2019t think the comparison to Kanazawa (2018) is fair. They don\\u2019t use ground truth mesh representation, but they have a very strong shape feedback from (a) the ground truth silhouette of the objects (b) the symmetry assumption (c) ground truth semantic object-specific keypoints. In more detail: They only experiment with birds and for each bird, they manually annotated the silhouette. Together with the mirror-symmetry assumption along one center plane, this covers most of the body\\u2019s geometry. 
But in addition to this, the training images were also annotated with 14 keypoints like the beak, legs, etc. and during training they minimize the distance between the 3D model keypoints and the annotations. These three factors in combination create a strong 3D ground truth signal.\\n\\nWhile this work is indeed impressive, we feel it\\u2019s inaccurate to call it \\u201cunsupervised\\u201d. The goal of our method is to infer structure purely from images. Without any kind of segmentation or manual annotation, purely unsupervised. There is only one other work that does something similar in Rezende (2016). But, as we outlined in the revised paper, (a) they only demonstrate this on perfectly illuminated single objects, whereas we do multiple objects with varying light conditions and (b) since they didn\\u2019t release any code, it\\u2019s hard for us and the community to reproduce their work.\\n\\nYes, our scenes are simple, but to our best knowledge, we are the first to actually do this reconstruction from single unrelated images, not multiple views of identical scenes and without any auxiliary loss or training data.\", \"refs\": \"(Kanazawa 2018) - Learning Category-Specific Mesh Reconstruction\\nfrom Image Collections, https://arxiv.org/pdf/1803.07549.pdf\\n\\n(Rezende 2016) - Unsupervised Learning of 3D Structure from Images, http://papers.nips.cc/paper/6600-unsupervised-learning-of-3d-structure-from-images.pdf\"}", "{\"title\": \"Addendum for human evaluation\", \"comment\": \"Dear Reviewer 1,\\n\\nThanks, that's really lovely to hear.\", \"as_for_the_human_evaluation\": \"oh yeah, that's a clear oversight and should be in the appendix. To answer your questions:\\n\\nWe posted the questionnaire to our lab-wide mailing list, where 41 participants followed the call. The questionnaire had 1 calibration question where, if answered incorrectly, we pointed out the correct answers. For all successive answers, we did not give any participant the correct answers and each participant had to answer all 20 questions to complete the quiz. \\n\\nWe also asked participants for their age range, gender, education, and for comments. While many commented that the questions were hard, nobody gave us a clear reason to discard their response. All participants were at least high school graduates currently pursuing a Bachelor's degree. The majority of submissions (78%) were male, whereas the others were female or unspecified. Most of our participants (73.2%) were between 18 and 29 years old, the others between 30 and 39. The resulting test scores are normally distributed according to the Shapiro-Wilk test (p<0.05) and significantly different from random choice according to 1-sample Student's t test (p<0.01).\\n\\nP.S.: If you are curious, you can take the anonymized test as well: https://goo.gl/forms/QzVxjh9XOzhlhiIr2\"}", "{\"title\": \"Still on the border\", \"comment\": \"The revision is helpful and, as R1 suggested, is clearly better than the original version, especially in presentation.\", \"my_major_concern_however_remains\": \"all results are on synthetic, simple scenes. In particular, these synthetic scenes don't have lighting, material, and texture variations, making them considerably easier than any types of real images. While the authors claimed prior work all used mesh supervision, this is not true for many papers that reconstruct 3D shapes using 2D supervision alone (e.g. 
Kanazawa 2018 as cited, which worked well on real data, reconstructing both shape and texture).\\n\\nI keep my original rating 5. I won't be too much against accepting this paper, if the AC decides to do so, but I cannot champion it.\"}", "{\"title\": \"Clear improvement\", \"comment\": \"I think the revised paper is a clear improvement over the original. The descriptions are more modest and more accurate, and the method is more clear than before. The added experiments are also helpful.\", \"one_new_complaint_is\": \"for the human evaluation on 3D-IQ, I think you need to specify how many participants there were, who they were, how they were recruited, and how long they had for each question, how many questions they answered, etc. Please just say everything, so that the mean and variance can be replicated.\\n\\nThe new Fig1 is good.\\n\\nI think my current score of \\\"above acceptance threshold\\\" is still appropriate. I encourage anonreviewers 2 and 3 to boost their scores, to help the paper get in.\"}", "{\"title\": \"New Revision Added\", \"comment\": \"Dear reviewers,\\n\\nThanks again for all comments and suggestions. We've significantly rewritten major parts of the paper according to your input. We think your feedback has improved the quality of the paper. We specifically clarified the message of our paper, our contributions, and our methods.\\n\\nThanks again for your time and for giving the new revision another look.\"}", "{\"title\": \"Reply to Reviewer 3's \\\"Interesting inverse graphics model\\\" (part 2 of 2).\", \"comment\": \"== RE: More realistic scenes and colors.\\n\\nMost previous approaches have focused on the reconstruction or generation of single objects with no background. In this work, we consider scenes formed by one or more objects in a simple room. We have added new results of more complicated scenes where the number of objects, as well as their shape, vary. Please see our new qualitative results ( https://ibb.co/cbL6AL ) and quantitative results ( https://ibb.co/nj73qL , right side of the table). Additional scene reconstructions can be found here: https://ibb.co/hjpBc0 . We included these new results in the paper, which we\\u2019ll update soon. We use diffuse materials with uniform reflectance for all our experiments. Learning the reflectance and color/texture properties (in addition to the surface depth and orientation) is significantly more challenging, but we are currently working towards that.\\nWe'll keep more realistic settings involving textures, shadows, and realistic lighting for future work.\\n\\n\\n== RE: Missing citations in 3D-IQTT and human performance.\\n\\nFor the 3D-IQTT we\\u2019ve carried out a human evaluation on over 40 students in the department, the results of which were included in the paper and can be seen at https://ibb.co/nhHUS0 . The results show that although our model performs better than the baselines, we are still lagging behind the human level. We also cited and discussed Shepard and Metzler's work.\\n\\n\\n== RE: Move related work to the supplementary material.\\n\\nWe completely rewrote the introduction to more clearly introduce the problem, challenges, assumptions, and the proposed method. We also moved unnecessary related work to the dedicated section.\\n\\n\\n== RE: Comparison to \\u201cUnsupervised Learning of 3D Structure from Images\\u201d [Rezende et al. 2016] and \\u201cPerspective Transformer Nets\\u201d [Xinchen et al. 2016]\\n\\nPerspective Transformer Nets can be considered weakly supervised. 
They render 24 images per object, which are all observed by the networks during training. In most contexts, one needs a 3D object in order to obtain 24 images of an object. By contrast, we learn on a single image per individual scene, i.e. no scene configuration is ever seen twice. Moreover, the metric used in this paper was intersection over union which works well for voxel-based representation. It's not clear how this evaluation tool can be extended for surfels. Figure 9 in our paper demonstrated our capacity to new, unseen viewing angles. In order to evaluate our model\\u2019s reconstruction of a scene from different novel viewpoints, we added MSE evaluations on the depth map and Hausdorff distance evaluation on the reconstructed surfels from different camera angles. Please see our modified table ( https://ibb.co/nj73qL ). In principle, we could compare our approach with the method of Rezende, but the information provided in the paper is not sufficient to accurately reconstruct their model.\"}", "{\"title\": \"Reply to Reviewer 3's \\\"Interesting inverse graphics model\\\" (part 1 of 2).\", \"comment\": \"Thanks for your time and continued feedback\\n\\nTL;DR: We reworked the intro from the ground, drawing a much clearer distinction between our method and related methods. We also added the missing citation and conducted some human evaluations for our mental rotation task.\\n\\n\\n== RE: The motivation of the paper is unclear.\\n\\nWe have re-written the introduction to more clearly reflect the extent of our contributions. This revision will be released shortly. \\n\\n\\n== RE: Advantages of Implicit representation over-explicit. \\n\\nImplicit and explicit 3D representations aren\\u2019t strictly better or worse but have significant advantages and disadvantages. \\nAn explicit representation stores all rendering-relevant information about all entities in a given 3D space. The main benefit of this method is that it\\u2019s easily transferable, in the sense that an explicit model can directly be loaded into a 3D modeling software and viewed from all angles without any inconsistencies. In the worst possible case, one would store all the properties of each voxel in 3D space, like opaqueness, illumination, color, reflectance, etc. For example, in a space of 512x512x512 points with 10 values per point which are float32-encoded, this would amount to 5GB of data for a single scene. The vast majority of these points aren\\u2019t relevant to most viewpoints, like the inside of objects. Therefore the common workaround is to use a sparse representation, like meshes. These do however come with their own drawbacks, like the question of how to discretize complex objects. This makes mesh representation difficult to deal with in neural networks. The current state of the art relies on deforming a pre-existing mesh.\\nOn the other hand, the implicit approach represents the 3D scene in a high-dimensional latent variable. The main drawback of this is the lack of interpretability: this vector on its own is meaningless and needs to be decoded into a viewpoint-dependent representation that can be rendered. Increasing a single value in this vector could, for example, illuminate an object more or morph a chair in the scene into a table, or both. The big benefit of this is scalability. This produces a compressed representation that fits the complexity of a given scene. The only remaining issue is the viewpoint-dependent consistency, i.e. 
making sure that when the scene is viewpoint-dependently decoded into a renderable model, the scene\\u2019s content is the same from all angles.\\nWe believe that neural networks can approximate this consistency-enforcing function and we think that in the long term when going to arbitrarily complex scenes, the advantages of this implicit representation outweigh the downsides.\\n\\nTo summarize, in the explicit approach dealing with sparsity and discrete representations remains a major challenge for deep learning, and the cubic scaling of dense approaches like voxel-based representations makes them impractical for large scenes. On the other hand, our implicit representation generates plausible 3D models that show viewpoint extrapolation. This can directly be observed in our video ( https://bit.ly/2zADuqG ) for example in the rotating chair video, where we only fed the model a single chair image, and then move the camera to generate new viewing angles. \\n\\n\\n== RE: Compare surfels with voxels, meshes and point clouds==\\n\\nBoth meshes and voxels don\\u2019t scale well to scenes with multiple objects - voxels due to their poor scaling in complexity and meshes due to difficulties representing them with neural networks, as mentioned above. Point clouds don\\u2019t provide any surfaces. Therefore no shadows, no reflectance, and no shading are possible. In terms of evaluation, most contemporary works use supervised training for their voxel/mesh-reconstruction where we use unsupervised/self-supervised training. A direct comparison with these would be unfair. The only other method we found with unsupervised reconstructions was [Rezende et al. 2016] and there is no source code available for their method. \\nWe did include a new benchmark in the paper, the 3D-IQTT, that should allow for more methods to compare their reconstruction accuracy independently of the underlying method.\\n\\n[Rezende et al. 2016] Unsupervised Learning of 3D Structure from Images.\"}", "{\"title\": \"Reply to Reviewer 2\", \"comment\": \"Dear reviewer, thank you for your time and effort.\\nTL;DR: We added more quantitative results. Our paper already includes examples generalizing to viewpoints that weren\\u2019t part of the training data, but we included additional samples. And we added significantly more details about the methods.\", \"here_are_our_responses_in_more_detail\": \"== RE: Most evaluations are qualitative.\\n\\nThere is no standard protocol to evaluate 3D reconstruction and generation. Most of the state-of-the-art methods (fully unsupervised methods learned on single images) just show qualitative results in their papers. \\nWe did quantitatively evaluate the surfel reconstruction against the ground truth via Hausdorff distance (HD) as described in Appendix B and the reconstructions of our model via mean squared error (MSE) on the depth map. We achieved near-zero MSE and reasonably lower HD (when reconstructed from the same view). We included a table showing the difference in these values for different, unseen camera views: https://ibb.co/nj73qL. On top of these metrics, we also created the 3D IQ test task (3D-IQTT) which is exclusively quantitative. We compared our method with two CNN baselines and we now also included human evaluation. The CNN baselines demonstrate that the task can only be solved with an understanding of the 3D geometry. 
A preview of the updated comparison table can be found here: ( https://ibb.co/nhHUS0) .\\n\\n\\n== RE: Add reference for the rendering equation.\\n\\nSorry for the oversight. We have added the reference for the rendering equation: it is an approximation of Kajiya\\u2019s rendering equation [Kajiya 1986].\\n\\n[Kajiya 1986] \\u201cThe Rendering Equation\\u201d\\n\\n\\n== RE: More details on experimental setup.\\n\\nWe have added more details on the experimental setup (camera, lights, and material properties used) in the appendix. All our images are of resolutions 128x128. \\nExcept for the 3D-IQTT, we didn\\u2019t store a fixed dataset but rather created the dataset on the fly. For example in the existing Figure 4, during the data generation process the rotation, translation, and object categories were randomized. The probability of seeing the same configuration from two different views is near zero.\\n\\n\\n== RE: Which parts of the GAN are more important.\\n\\nOur Pix2Scene architecture is a bidirectional adversarial model. It consists of an encoder, decoder, renderer, and discriminator. The encoder translates the input image into a latent representation. The decoder transforms a similar latent vector, sampled from noise, into our surfel representation, which is converted into a 2D image by the renderer. The discriminator\\u2019s purpose is to make sure the output images become the same distribution as the input images, and ascertain that the encoded latent representation corresponds to the latent input to the decoder. See our existing Figure 3 for an overview. The decoder-rendering part is important for generating new viewpoints for a given latent code and the encoder-decoder pipeline allows us to infer the 3D structure of a 2D image. Without the encoder, the model would be purely generative.\\n\\n\\n== RE: Novelty of the generated images.\\n\\nGAN-based models usually suffer from mode-collapse. We demonstrated in Figure 8 that our model overcame this issue and was able to interpolate between two given scenes. We\\u2019ve added another figure to further emphasize the interpolation capabilities of our model.\\n\\n\\n== RE: 3D-IQTT semisupervised learning.\\n\\nThanks for this feedback. We agree that this section wasn\\u2019t sufficiently clear. We\\u2019ve rewritten a part of this section and added the details on the interleaved training. It's similar to algorithm 2 from [Kingma et al. 2014], except instead of a randomized minibatch, we train a few iterations of unsupervised data followed by a few iterations of supervised data. We also extended Table 1 to include an entirely unsupervised case as well as human performance on the same task. A preview of the table can be found here: https://ibb.co/nhHUS0. In all cases, the model was trained with an unsupervised dataset of 100,000 lines of data, where each line contained the reference image, the 3 possible answers, but no information on which one was the correct answer.\\n\\n[Kingma et al. 2014] \\\"Semi-supervised Learning with Deep Generative Models\\\", 2014. \\n\\n\\n== RE: Missing reference Make-3D.\\n\\nSorry for the oversight. We will add the reference in our introduction.\"}", "{\"title\": \"Reply to Reviewer 1's \\\"Nice model but some details missing\\\" (part 2 of 2).\", \"comment\": \"== RE: How camera and class conditioning is done.\\n\\nFor the view conditioning, we used conditional batch-normalization which transforms a 3-dimensional vector (representing the camera coordinates) into affine batch-normalization parameters. 
For the class conditioning, we used the standard conditional GAN technique: We encoded the class labels as one-hot vectors and concatenated this vector to the inputs of the decoder, encoder, and discriminator. We added all of this to the paper.\\n\\n\\n== RE: Reconstruction error.\\n\\nWe added the exact formula for the reconstruction loss to the paper.\", \"it_is_a_weighted_sum_of_two_terms\": \"(a) the L2 loss of the input image given to the encoder and the rendered output on the decoder, and (b) the L2 loss of the noise given to the decoder and the inferred latent code of the rendered decoder output. This loss is similar to the bi-directional L2-loss used in [Li et al. 2017] and [Huang et al. 2018].\\n\\n[Huang et al. 2018] \\u201cMultimodal Unsupervised Image-to-Image Translation\\u201d, 2018\\n[Li et al. 2017] \\u201cALICE: Towards Understanding Adversarial Learning for Joint Distribution Matching\\u201d, 2017\\n\\n\\n== RE: 3D-IQTT loss function and training algorithm.\\n\\nIn these experiments, we removed the assumption of knowing the camera position. Our model had to learn to represent the scene geometry in a part of the latent vector (z_scene) and the camera position in another part (z_view).\", \"the_loss_term_for_the_supervised_part_of_the_training_enforced_this\": \"it rewards the z_scene of the correct answer to be close to the z_scene of the reference and it pushes z_scene of the reference and z_scene of wrong answers apart. We also minimized mutual information between z_scene and z_view in order to enforce distinct source of information captured by the latent dimensions. We also improved the explanation of this method in the paper.\\n\\nWe also added experiments where we did not add any supervised samples. In this case, z_scene and z_view get entangled and the task becomes significantly harder. Both CNN baselines were trained on the supervised data; therefore when comparing them in the unsupervised condition they perform according to the random initialization while our model was able to at least leverage the unsupervised data. Please see our updated Table-1 (https://ibb.co/nhHUS0 )\\n\\nWe also added the interleaved training algorithm for the semi-supervised task to the paper. It's similar to algorithm 2 from [Kingma et al. 2014], except instead of a randomized minibatch, we train a few iterations of unsupervised data followed by a few iterations of supervised data.\\n\\n[Kingma et al. 2014] \\\"Semi-supervised Learning with Deep Generative Models\\\", 2014.\"}", "{\"title\": \"Reply to Reviewer 1's \\\"Nice model but some details missing\\\" (part 1 of 2).\", \"comment\": \"Thanks so much for your time.\\nTL;DR: We\\u2019re completely rewriting the intro to focus on our actual contribution and not the long-term plan. We\\u2019re also working towards removing all the assumptions like camera position and light position knowledge. And we\\u2019ve added a lot more details about the 3D-IQTT methods.\\n\\n== RE: Providing clarification on the claims.\\n\\nWe agree that the extent of the contributions claimed in the introduction was unclear. We set out to present our long-term goal, to \\u201clearn the 3D structure of the real world just from single images\\u201d; however, in this paper, we have just made the first steps towards the goal and that was not apparent from the original introduction. 
We have reworked our claims accordingly in our new paper version (will be uploaded in the next few days) to reflect that our method \\u201creceives a monochrome image as input, estimates a depth map, and then shades this depth map using perfect knowledge of lighting and camera pose, which reconstructs the input\\u201d - as you suggested.\", \"our_model_makes_several_assumptions\": [\"(a) the camera pose is known, (b) the material properties are constant, (c) the light positions are known, and (d) the world is piece-wise smooth. In order to achieve our long-term goal, we have to eventually get rid of these. Therefore we have made some first steps to address each one:\", \"(Camera pose is known): In the 3D-IQTT experiments, we estimated the camera position in our latent representation while we kept the camera looking at the center of the object. We used the estimated camera parameters in the generator for the rendering process.\", \"(Material properties are constant): We used diffuse materials with uniform reflectance for all our experiments. The reflectance values were chosen arbitrarily but kept fixed for input-output pairs. In other words, we use fixed material which can be chromatic (reflects different wavelengths by different amount) or monochromatic (reflects all wavelengths the same amount). This is not the same as using \\\"monochromatic image\\\", it is just that material is constant and doesn't need to be inferred. We've added the details to the appendix.\", \"Learning the reflectance and color/texture properties (in addition to the surface depth and orientation) is significantly more challenging, but we are currently working towards that.\", \"(Lighting assumptions): In our work presented in the last version of the paper, we used multiple point light sources that were placed randomly on the surface of a spherical sector around the scene and colored randomly. For each pair of rendered input image and model-reconstructed output image, these light conditions were identical. We added more details about how we handled the lighting to the appendix.\", \"(The world is piece-wise smooth): This might not be perfectly accurate, but it\\u2019s a common assumption in 3D reconstruction. In an extreme case like when capturing cactus spikes, this might not work, but for example, when we were reconstructing a chair with a thin stretcher (see our video, https://bit.ly/2zADuqG), the reconstruction worked well.\", \"We agree that we are currently mainly recovering a depth map, but the surfel representation was picked with our long-term goal in mind, since gives us several advantages: (a) surfel representation allow us to represent only the visible surface of a complicated scene instead of explicitly representing the complete scene. Given an image we can infer its implicit 3D representation and then recreate novel surfel representations of the underlying scene from unobserved viewpoints. Moreover this representation fits well with current convolutional architectures (b) with our existing normal estimation and additional material estimation this allows for realistic shading.\"]}", "{\"title\": \"Response\", \"comment\": \"Thanks for asking. I realize 'implicit' means without supervision. In this case, how would the system compare with Rezende et al. [2016]? Their code might not be available. 
An alternative is perspective transformer net [Yan et al, 2016] and its follow-ups.\\n\\nA common problem with these unsupervised/self-supervised (or 'implicit') approach is that the learned scene representation is often incomplete and looks bad from a different view. How would the reconstructed scenes (objects) in Fig 4 look like from a different view? These results are important for a '3D' representation.\\n\\nRezende et al. Unsupervised Learning of 3D Structure from Images. NIPS'16.\"}", "{\"title\": \"Quick question regarding comparison to voxel/point clouds/primitives\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you so much for your review. We appreciate the time you put into this and your feedback.\\n\\nBefore we respond in full to all of your points, we have a quick question:\\nYou mentioned, \\\"It\\u2019d be good to compare with these [voxel/point clouds/primitives] scene representations\\\". We'd love to implement this but we're having some issues finding suitable code. Do you know of any code for methods that implicitly (i.e. without supervised training) learns object reconstruction using meshes/voxels/point clouds?\\n\\nThanks,\\nthe Pix2Scene team\"}", "{\"title\": \"Interesting inverse graphics model. Motivation and experiments are lacking.\", \"review\": \"This paper explored explaining scenes with surfels in a neural recognition model. The authors demonstrated results on image reconstruction, synthesis, and mental shape rotation.\\n\\nThe paper has many strengths. The model is clearly presented, the implementation is neat, the results on synthetic images are good. In particular, the results on the mental rotation task are interesting and new; I feel we should include more studies like these for scene and object representation learning. \\n\\nA few concerns remain. First, the motivation of the paper is unclear. The main advantage of the proposed representation, according to the intro, is its `implicitness\\u2019, which enables viewpoint extrapolation. I\\u2019d like to see more explanation on why \\u2018explicit\\u2019 representations don\\u2019t support that. A lot of the intro is currently talking about related work, which can be moved to later sections or to the supp material.\\n\\nThe paper then moves on to discuss surfels. While it\\u2019s new combine surfels with deep nets, I\\u2019m not sure how much benefits it brings over voxels, point clouds, or primitives. It\\u2019d be good to compare with these scene representations. \\n\\nMy second concern is the results are all on synthetic data, and most shapes are very simple. While the paper is called \\u2018pix2scene\\u2019, it\\u2019s really about \\u2018pix2object\\u2019 or \\u2018pix2shape\\u2019. I\\u2019d like to see results on more realistic scenes, where the number of objects as well as their shape and material varies.\\n\\nFor the mental rotation task, the authors should cite and discuss the classic work from Shepard and Metzler and include human performance for calibration.\\n\\nI\\u2019m on the border for this paper. Happy to adjust my rating based on the discussion and revision.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Nice model but some details missing\", \"review\": [\"This paper introduces a method to create a 3D scene model given a 2D image and a camera pose. 
The method is: (1) an \\\"encoder\\\" network maps the image to some latent code vector, (2) a \\\"decoder\\\" network uses the code and the camera pose to create a depthmap, (3) surface normals are computed from the depthmap, and (4) these outputs are fed to a differentiable renderer which reconstructs the input image. At training time, a discriminator provides feedback to (and simultaneously trains on) the latent code and the reconstructions. The model is self-supervised by the reconstruction error and the GAN setup. Experiments show compelling results in 3D scene generation for simple monochromatic synthetic scenes composed of an empty room corner and floating ShapeNet shapes.\", \"This is a nice problem, and if the approach ever works in the real world, it will be useful. On synthetic environments, the results are impressive.\", \"The paper seems to claim more ground than it actually covers. The abstract says \\\"Our method learns the depth and orientation of scene points visible in images\\\", but really only the depth is learned, and the \\\"orientation\\\" is an automatically-computed surface normal, which is a free byproduct of any depth estimate. The \\\"surfel\\\" description includes a reflectance vector, but this is never estimated or further described in the paper, so my guess is that it is simply treated as a scalar (which equals 1). Taking this reflectance issue together with the orientation issue, the model is not really estimating surfels at all, but rather just a depthmap, which makes the method seem considerably less novel. Furthermore, the differentiable rendering (eq. 1) appears to assume that all light sources are known exactly -- this is not a trivial assumption, and yet it is never mentioned in the paper. The text suggests that only an image is required to run the model, but Figure 3 shows that the networks are conditioned on the camera pose -- exact knowledge of the camera pose is difficult to obtain precisely in real settings, so this again is not an assumption to ignore.\", \"To rewrite the paper more plainly, one might say that it receives a monochrome image as input, estimates a depthmap, and then shades this depthmap using perfect knowledge of lighting and camera pose, which reconstructs the input. This may sound less appealing, but it also seems more accurate.\", \"The paper is also missing some details of the method and evaluation, which I hope can be cleared up easily.\", \"What is happening with the light source? This is critical in the shading equation (eq. 1), and yet no information is given on it -- we need the color and the position of every light in the scene.\", \"How is the camera pose represented? Section 3.3.3 says conditional normalization is used, but what exactly is fed to the network that estimates these conditional normalization parameters?\", \"What is the exact form of the reconstruction error? An equation would be great.\", \"How is the class-conditioning done in 4.2?\", \"In Eq. 4, the first usage of D_\\\\theta should use only the object part of the vectors, and the second usage should use only the geometric part, right? Maybe this can be cleared up with a second D_subscript.\", \"I do not understand the \\\"interleaved\\\" training setup in 4.4.1. Please explain that more.\", \"It is not clear to me why the task in 4.4.2 needs any supervised training at all, if the classification is just done by computing L2 distances in the latent space. 
What happens with \\\"0 sampled labels\\\"?\", \"Overall, I like the paper, and I can imagine others in my group liking it. I hope it gets in, assuming the technical details get cleaned up and the language gets softer.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"learning 3D or depth images from 2D images\", \"review\": \"The paper deals with creating 3D representations or depth maps from 2D image data using adversarial training methods.\\nThe flow makes the paper readable.\\n\\nOne main concern is that most of the experiments seem to have results as visual inspections of figures provided. It is really hard to judge the correctness or how well the algorithms do.\\n\\nIt would be useful to provide references of equation 1 if used from previous text.\\n\\nIn the experiments, it is usually not clear how many training images were used, how many test. How different were the objects used in the training data vs test? Were all the test objects novel? How useful were the GAN techniques? Which part of the GAN did the most work i.e. the usefulness and accuracy of the different parts of the net? Even in 4.2, though it mentions use of 6 object types for both training and testing, using the figures is hard to estimate how well the model does compared to a reference baseline.\\n\\nIn 4.4.1, the discussion on how much improvement there is due to use of unlabeled images is missing? Do they even help? It is not quite clear from table 1. How many unlabeled images were used? How many iterations in total are used of the unlabeled ones (given there is 1 in 100 update of labeled ones).\", \"missing_reference\": \"http://www.cs.cornell.edu/~asaxena/reconstruction3d/saxena_make3d_learning3dstructure.pdf\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
BJlxm30cKm
An Empirical Study of Example Forgetting during Deep Neural Network Learning
[ "Mariya Toneva*", "Alessandro Sordoni*", "Remi Tachet des Combes*", "Adam Trischler", "Yoshua Bengio", "Geoffrey J. Gordon" ]
Inspired by the phenomenon of catastrophic forgetting, we investigate the learning dynamics of neural networks as they train on single classification tasks. Our goal is to understand whether a related phenomenon occurs when data does not undergo a clear distributional shift. We define a ``forgetting event'' to have occurred when an individual training example transitions from being classified correctly to incorrectly over the course of learning. Across several benchmark data sets, we find that: (i) certain examples are forgotten with high frequency, and some not at all; (ii) a data set's (un)forgettable examples generalize across neural architectures; and (iii) based on forgetting dynamics, a significant fraction of examples can be omitted from the training data set while still maintaining state-of-the-art generalization performance.
[ "catastrophic forgetting", "sample weighting", "deep generalization" ]
https://openreview.net/pdf?id=BJlxm30cKm
https://openreview.net/forum?id=BJlxm30cKm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1xFB1Y4A4", "HJxcbOkJxN", "S1gAKB7QC7", "rklya7Xm0Q", "rkl2xZmXCQ", "SkgKResg0X", "S1lWe2ceRQ", "ryeU8m5xRQ", "Skx4CMsha7", "BJgqy28cTQ", "ryxipBQcpX", "B1eemyXqp7", "rygcHkH53m", "SyxkXUThiQ" ], "note_type": [ "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review" ], "note_created": [ 1559691073171, 1544644609525, 1542825350072, 1542824886792, 1542824179820, 1542660304723, 1542659048736, 1542656845628, 1542398667847, 1542249441572, 1542235587287, 1542233879770, 1541193538488, 1540310551405 ], "note_signatures": [ [ "~Xinshao_Wang1" ], [ "ICLR.cc/2019/Conference/Paper1326/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1326/Authors" ], [ "ICLR.cc/2019/Conference/Paper1326/Authors" ], [ "ICLR.cc/2019/Conference/Paper1326/Authors" ], [ "ICLR.cc/2019/Conference/Paper1326/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1326/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1326/Authors" ], [ "ICLR.cc/2019/Conference/Paper1326/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1326/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1326/Authors" ], [ "ICLR.cc/2019/Conference/Paper1326/Authors" ], [ "ICLR.cc/2019/Conference/Paper1326/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1326/AnonReviewer2" ] ], "structured_content_str": [ "{\"comment\": \"Dear authors, it is a great work and I am interested in it a lot.\\n\\nAs one offical reviewer mentioned, I am also concerned about the the measurement of forgetting itself. ''Simply due to chance, some examples will be correctly labeled at some point in training, which makes it difficult to distinguish whether a \\u201cforgotten\\u201d example was actually ever learned in the first place. \\\"\", \"i_suppose_many_factors_cause_this_phenomenon_happening\": \"the random sampling of SGD, the batch size, the learning rate, initialisation, etc.\\n\\nI noticed that the paramters' update order has been studied and added. \\nCould you please share more information about other factors, e.g., batch size, learning rate, or initialised by pretrained models etc. \\n\\n\\nI am looking forward to your sharing. Thanks very much.\", \"title\": \"The measurement of forgetting itself\"}", "{\"metareview\": \"This paper is an analysis of the phenomenon of example forgetting in deep neural net training. The empirical study is the first of its kind and features convincing experiments with architectures that achieve near state-of-the-art results. It shows that a portion of the training set can be seen as support examples. The reviewers noted weaknesses such as in the measurement of the forgetting itself and the training regiment. However, they agreed that their concerns we addressed by the rebuttal. They also noted that the paper is not forthcoming with insights, but found enough value in the systematic empirical study it provides.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-review\"}", "{\"title\": \"Alleviating Catastrophic Forgetting as One of Several Exciting Future Steps\", \"comment\": \"Thanks for your review and suggestions, your suggested additional experiments have strengthened the paper and we will acknowledge them in the paper, if accepted. 
Applying some of our results towards solving catastrophic forgetting is one of the promising directions we hope to investigate in the future. One of the paths we are currently investigating is whether we can build focused memories of representative examples from previous tasks. Nonetheless, we believe our current analysis to be general, and, as such we keep the hope that our results could potentially be helpful in an even larger set of problems.\"}", "{\"title\": \"Additional Experiments Introducing Pixel Noise\", \"comment\": \"Thank you for your review and suggestions.\\n\\nWe performed two additional experiments in CIFAR-10 and have presented the results in the updated supplementary. We are happy to include any parts that the reviewer finds helpful in the main paper.\\n\\n1. We corrupt all training images with additive Gaussian noise with mean 0 and increasing standard deviation (std 0.5, 1, 2, 10), and track the forgetting events during training as usual. Note that we add the noise after a channel-wise standard normalization step of the training images (zero mean, unit variance). Therefore, noise with standard deviation of 2 has twice the standard deviation of the unperturbed training data.\\n\\nWe present the results in Figure 11 in Appendix 10. We observe that adding increasing amount of noise decreases the amount of unforgettable examples and increases the amount of examples in the second mode of the forgetting distribution.\\n\\n2. We follow the label noise experiments presented in Figure 3, and augment only random 20% of the training data with additive Gaussian noise (mean 0, std 10). We present the results of comparing the forgetting distribution of the 20% of examples before and after pixel noise was added in Figure 12 (Left) in Appendix 10. We observe that the forgetting distribution under pixel noise resembles the one under label noise. It is a very interesting observation that we plan to investigate in the future.\\n\\nWe agree that it is important to follow-up with a dataset like Imagenet and will pursue this direction in our future work.\"}", "{\"title\": \"Acknowledging Remarks\", \"comment\": \"Thank you for all of your important remarks -- they have substantially contributed to improving the paper and we will make sure to acknowledge it in the final version of the paper, if accepted.\"}", "{\"title\": \"Response to Author\", \"comment\": \"Thank you for the clarification and extra experiments on CIFAR-100.\\nOverall, this is a paper with high quality, the experiments are complete and the paper is well written. I'm increasing the score to 7.\\nI'm not giving a higher score because I think the impact of this paper on solving the catastrophic forgetting problem seems limited.\"}", "{\"title\": \"Second Response to Authors\", \"comment\": \"Thank you for providing the additional experiments and updating the text. The new section on \\\"Forgetting by chance\\\" is very nice and the multiple runs for Figure 4 make the point much more convincingly.\\n\\nOverall, the paper has improved dramatically since the initial submission, and I appreciate the authors' effort to provide additional controls to clarify and provide additional substantiation for the claims made in the paper. The observations in this work are significant and novel, and as such, I am raising my score to an 8, and clearly recommend acceptance to ICLR.\"}", "{\"title\": \"Updated Figures and Descriptions\", \"comment\": \"Thank you for the response and for the additional comments. 
Please find our responses below:\\n\\n1) We\\u2019ve included the histogram of forgetting events under the true gradient steps in Figure 12 in the updated Appendix 11. We also included a discussion about confidence bounds in the paragraph \\u201cStability across seeds\\u201d in Section 4 and we created a new paragraph \\u201cForgetting by chance\\u201d to discuss the new results.\\n\\n3) We\\u2019ve updated Figure 4 with the mean and standard errors over 5 runs of the experiments. In each run, we randomly sample the \\u2018never forgotten\\u2019 set and the \\u2018forgotten at least once\\u2019 set from all of the examples of their respective kind. The initial stability of the forgotten set in the first half of c.3 is reproducible. This is an interesting observation that we plan to investigate in the future.\\n\\n4) The examples in the supplemental figure are the least and most forgotten examples of each class, when all examples are sorted by number of forgetting events (ties are broken randomly). We clarified this in Appendix 13 in the updated paper.\"}", "{\"title\": \"Response to Authors\", \"comment\": \"Thank you for your response and the additional experiments provided. Please find my comments below:\\n\\n1) Both of the additional experiments (Appendices 11 and 12) are quite nice and provide clear evidence that the results observed are not merely due to chance forgetting. For Figure 12, please include a comparison to the histogram of forgetting events under true gradient steps as well. In addition, I could not find discussion of chance forgetting in the manuscript itself. Please include several sentences discussing both of these experiments in the main text (it's fine to leave the figures and details in the appendix).\\n\\n2) Thank you for the clarification.\\n\\n3) Thank you for including the additional ordering in Figure 4. While these experiments definitely show that the degradation in section 2 is greater for the forgotten set than the never forgotten set, it's interesting that the forgotten set is relatively stable for the first half of c.3, such that the difference between c.3 and b.3 is only present between epochs 50 and 60. I wonder if this is simply due to chance in the training run. It would be helpful to redo this experiment once more with multiple runs and error bars to assess whether this is real or simply an artifact.\\n\\n4) Thanks for including additional examples in the supplemental figure. Just to clarify, were these examples chosen randomly or hand-selected? \\n\\nIn light of the updated results, I have increased my score to a 6. Should the authors include a new version of Figure 4 with multiple runs and address the other post-rebuttal comments, I would be happy to further increase my score.\"}", "{\"title\": \"Extra Review: Excellent paper which thoroughly explores a very interesting question\", \"review\": \"This is an excellent analysis paper of a very interesting phenomenon in deep neural networks.\\n\\nQuality, Clarity, Originality:\\nAs far as I know, the paper explores a very relevant and original question -- studying how the learning process of different examples in the dataset varies. In particular, the authors study whether some examples are harder to learn than others (examples that are forgotten and relearned multiple times through learning.) We can imagine that such examples are \\\"support vectors\\\" for neural networks, helping define the decision boundary.\\n\\nThe paper is very clear and the experiments are of very high quality. 
I particularly appreciated the effort of the authors to use architectures that achieve close to SOTA on all datasets to ensure conclusions are valid in this setting. I also thought the multiple repetitions and analysing rank correlation over different random seeds was a good additional test.\\n\\nSignificance\\nThis paper has some very interesting and significant takeaways.\\nSome of the other experiments I thought were particularly insightful were the effect on test error of removing examples that aren't forgotten to examples that are forgotten more. In summary, the \\\"harder\\\" examples are more crucial to define the right decision boundaries. I also liked the experiment with noisy labels, showing that this results in networks forgetting faster.\\n\\nMy one suggestion would be to try this experiment with noisy *data* instead of noisy labels, as we are especially curious about the effect of the data (as opposed to a different labelling task.)\\n\\nI encourage the authors to followup with a larger scaled version of their experiments. It's possible that for a harder task like Imagenet, a combination of \\\"easy\\\" and \\\"hard\\\" examples might be needed to enable learning and define good decision boundaries.\\n\\nI argue strongly for this paper to be accepted to ICLR, I think it will be of great interest to the community.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Clarifications and Additional Investigation on CIFAR-100\", \"comment\": \"Thanks for your interesting review. We try to address your remarks below:\\n\\n1) We randomly shuffle all datasets at the start of each epoch.\\n\\n2) As suggested, we investigated forgetting in CIFAR-100. We show the detailed results in Appendix 14 of the updated paper. In short, we observe that about 8% of examples in CIFAR-100 are unforgettable, which is the lowest percentage out of all investigated datasets: CIFAR-100 contains 10 times fewer examples per class (500 examples per class) than CIFAR-10 or the MNIST datasets, making each image all the more useful for the learning problem.\\n\\nUnexpectedly, we observed that the distribution of forgetting events in CIFAR-100 resembles the distribution of forgetting events in the noisy CIFAR-10 (with 20% randomly changed labels). This led us to suspect that a portion of CIFAR-100 examples could have noisy labels. Upon visualization of the most forgotten examples in CIFAR-100, we discovered that there are several images that appear under multiple labels, introducing noise to the dataset and possibly diminishing the proportion of unforgettable examples.\\n\\nFor completeness, we added the removal experiments from Figure 5 (Left) for CIFAR-100 to Appendix 14. The results align with those from the other datasets -- we are able to remove all unforgettable examples and maintain generalization performance, while outperforming a random removal baseline.\\n\\n3) We have included the experiment in the main paper in Figure 4 (right). Note that the \\\"never forgotten\\\" set continues to suffer from less degradation when training on the \\\"forgotten at least once\\\" set.\"}", "{\"title\": \"Additional Experiments and Clarifications\", \"comment\": \"Thanks for your detailed review. 
We tried to improve the paper according to your comments:\\n\\n-- Major points:\\n\\n1) We do acknowledge the importance of considering the possibility of forgetting occurring by chance, suggesting the need for confidence bounds on the number of forgetting events. Before addressing it with additional experiments, we wish to point out that the paper in its current form suggests that it is highly unlikely for the ordering produced by the metric to be the by-product of another unrelated random cause:\\n\\n1/ The correlation between the ordering obtained from two sets of 5 random seeds is 97.6%. We will highlight this fact more prominently in the paper (according to your minor point 2).\\n2/ Removing unforgettable examples has a stronger effect than removing randomly chosen examples, suggesting that the vast majority of removed examples with low forgetting events are not picked due to some unrelated random phenomenon.\\n \\nWe followed your interesting suggestion and applied random steps to collect chance forgetting events on CIFAR-10. The results are shown in Appendix 11 of the updated paper. We report the histogram of ``chance forgetting events (please, see text in the paper for more details) averaged over 5 seeds. This gives an idea of the chance forgetting rate across examples. In this setting, examples are being forgotten \\u201cby chance\\u201d at most twice and most of the time less than once. We are happy to include parts of that section in the main text if it answers your concerns, as we believe it makes the paper stronger.\\nWe also ran the original experiment on 100 seeds to devise 95% confidence bounds on the average (over 5 seeds) number of forgetting events per example (see Appendix 12). The confidence interval of the least forgotten examples is tight, confirming that examples with a small number of forgetting events can be ranked confidently.\\n\\n2) We trained on all datasets for the same number of epochs (200) to study the number of forgetting events. We\\u2019ll clarify this in the paper.\\n\\n3) Not including the figure with the opposite alternating sequence of tasks was an oversight (we intended to include it in the supplementary). We have now included it in the main paper in Figure 4 (right). Note that the \\u201cnever forgotten\\u201d set continues to suffer from less degradation when training on the \\u201cforgotten at least once\\u201d set.\\n\\n4) We have updated Figure 2 to include a forgettable and unforgettable example from each class, and have included 12 more examples per class in the supplementary (Figure 14). Our main claim is that the unforgettable examples are supported by other examples in the training set, and thus can be removed without impacting generalization. The visualization shows that the unforgettable examples indeed are prototypical of their class (e.g. unobstructed full view of the entire object, commonly observed background), especially when compared to the forgettable examples, which contain more peculiar features (e.g. 
obstructed view of object or only parts of the object, uncommon color or context).\\n\\n-- Minor points\\n\\n1) We thank the reviewer for pointing us to this work and have included it in the discussion (Section 2 / Paragraph 1)\\n2) We have moved this discussion to Section 4 where we mention experimental results and mentioned the finding at the end of the Introduction.\\n3) We have updated Figure 1 to include the full histograms.\\n4) We\\u2019ve updated all numbers to improve readability.\"}", "{\"title\": \"Review of \\\"An Empirical Study of Example Forgetting during Deep Neural Network Learning\\\"\", \"review\": \"UPDATE 2 (Nov 19, 2018): The paper has improved very substantially since the initial submission, and the authors have addressed almost all of my comments. I have therefore increased my score to an 8 and recommend acceptance.\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nUPDATE (Nov 16, 2018) : In light of the author response, I have increased my score to a 6.\\n------------------------------------------------------------------------------------------------------------------------------\\n\\nThis paper aims to analyze the extent to which networks learn to correctly classify specific examples and then \\u201cforget\\u201d these examples over the course of training. The authors provide several examples of forgettable and unforgettable examples, demonstrating, among other things, that examples with noisy examples are more forgettable and that a reasonable fraction of unforgettable examples can be removed from the training set without harming performance. \\n\\nThe paper is clearly written, and the work is novel -- to my knowledge, this is the first investigation of example forgetting over training. There are an interesting and likely important set of ideas here, and portions of the paper are quite strong -- in particular, the experiment demonstrating that examples with noisy examples are more forgettable is quite nice. However, there are several experimental oversights which make this paper difficult to recommend for publication in its current form.\", \"major_points\": \"1) The most critical issue is with the measurement of forgetting itself: the authors do not take into account the chance forgetting rate in any of their experiments. Simply due to chance, some examples will be correctly labeled at some point in training (especially in the datasets analyzed, which only contain 10 classes). This makes it difficult to distinguish whether a \\u201cforgotten\\u201d example was actually ever learned in the first place. In order to properly ground this metric, measurements of chance forgetting rates will be necessary (for example, what are the forgetting rates when random steps are taken at each update step?). \\n\\n2) Were the networks trained on MNIST, permutedMNIST, and CIFAR-10 trained for the same number of epochs? Related to point 1, the forgetting rate should increase with the number of epochs used in training as the probability of each example being correctly classified should increase. If the CIFAR-10 models were trained for more epochs, this would explain the observation that more CIFAR-10 examples were \\u201cforgettable.\\u201d\\n\\n3) In the experiment presented in Figure 4b, it is difficult to tell whether the never forgotten set suffers less degradation in the third training regime because the examples were never forgotten or because the model had twice has much prior experience. 
Please include a control where the order is flipped (e.g., forgotten, never forgotten, forgotten in addition to the included never forgotten, forgotten, never forgotten order currently present).\\n\\n4) The visual inspection of forgettable and unforgettable examples in Figure 2 is extremely anecdotal, and moreover, do not even appear to clearly support the claims made in the paper.\", \"minor_points\": \"1) In the discussion of previous studies which attempted to assess the importance of particular examples to classification decisions, a citation to [1] should be added. \\n\\n2) The point regarding similarity across seeds is absolutely critical (especially wrt major comment 1) , and should be included earlier in the paper and more prominently.\\n\\n3) The histograms in Figure 1 are misleading in the cropped state. While I appreciate that the authors included the full histogram in the supplement, these full histograms should be included in the main figure as well, perhaps as an inset.\\n\\n4) The inclusion of a space after the commas in numbers (e.g., 50, 245) is quite confusing, especially when multiple numbers are listed as in the first line on page 4.\\n\\n[1] Koh, Pang Wei and Percy Liang. \\u201cUnderstanding Black-box Predictions via Influence Functions.\\u201d ICML (2017).\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Thorough experiments which prove there exist \\\"support examples\\\" in neural network training.\", \"review\": \"This paper studies the forgetting behavior of the training examples during SGD. Empirically it shows there are forgettable and unforgettable examples, unforgettable examples are like \\\"support examples\\\", one can achieve similar performance by training only on these \\\"support examples\\\". The paper also shows this phenomenon is consistent across different network architectures.\", \"pros\": \"This paper is written in high quality, clearly presented. It is original in the sense that this is the first empirical study on the forgettability of examples in during neural network training.\", \"comments_and_questions_on_the_experiment_details\": \"1. Is the dataset randomly shuffled after every epoch? One concern is that if the order is fixed, some of the examples will be unforgettable simply because the previous batches have similar examples , and training the model on the previous batches makes it good on some examples in the current batch.\\n2. It would be more interesting to also include datasets like cifar100, which has more labels. The current datasets all have only 10 categories.\\n3. An addition figure can be provided which switches the order of training in figure 4b. Namely, start with training on b.2.\", \"cons\": \"Lack of insight. Subjectively, I usually expect empirical analysis papers to either come up with unexpected observations or provide guidance for practice. In my opinion, the findings of this work is within expectation, and there is a gap for practice.\\n\\nOverall this paper is worth publishing for the systematic experiments which empirically verifies that there are support examples in neural networks.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
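The forgetting statistic debated in the exchange above reduces to simple bookkeeping over per-epoch predictions. A minimal Python sketch, not the authors' code: the accuracy matrix, the never-learned convention, and the toy history are illustrative assumptions; an example accrues one forgetting event each time it flips from correctly to incorrectly classified between consecutive epochs.

import numpy as np

def forgetting_counts(correct):
    # correct: bool array of shape (epochs, n_examples);
    # correct[t, i] is True if example i is classified correctly after epoch t.
    correct = np.asarray(correct, dtype=bool)
    # A forgetting event is a correct -> incorrect transition between consecutive epochs.
    events = (correct[:-1] & ~correct[1:]).sum(axis=0)
    never_learned = ~correct.any(axis=0)
    return events, never_learned

# Toy history for 4 examples over 5 epochs (assumed data).
acc = np.array([[0, 1, 0, 1],
                [1, 1, 0, 0],   # example 3 is forgotten here
                [1, 1, 0, 1],
                [0, 1, 0, 1],   # example 0 is forgotten here
                [1, 1, 0, 1]], dtype=bool)
events, never = forgetting_counts(acc)
print(events)  # [1 0 0 1]; examples with 0 events are the "unforgettable" ones
print(never)   # example 2 is never learned

Averaging such counts over several seeds, as the rebuttal describes, gives the per-example ranking whose stability and chance level are discussed above.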
r1lgm3C5t7
Universal discriminative quantum neural networks
[ "Hongxiang Chen", "Leonard Wossnig", "Hartmut Neven", "Simone Severini", "Masoud Mohseni" ]
Quantum mechanics fundamentally forbids deterministic discrimination of quantum states and processes. However, the ability to optimally distinguish various classes of quantum data is an important primitive in quantum information science. In this work, we trained near-term quantum circuits to classify data represented by quantum states using the Adam stochastic optimization algorithm. This is achieved by iterative interactions of a classical device with a quantum processor to discover the parameters of an unknown non-unitary quantum circuit. This circuit learns to simulate the unknown structure of a generalized quantum measurement, or positive-operator valued measure (POVM), that is required to optimally distinguish possible distributions of quantum inputs. Notably we used universal circuit topologies, with a theoretically motivated circuit design which guaranteed that our circuits can perform arbitrary input-output mappings. Our numerical simulations showed that quantum circuits could be trained to discriminate among various pure and mixed quantum states, exhibiting a trade-off between minimizing erroneous and inconclusive outcomes with comparable performance to theoretically optimal POVMs. We trained the circuit on different classes of quantum data and evaluated the generalization error on unseen quantum data. This generalization power hence distinguishes our work from standard circuit optimization and provides an example of quantum machine learning for a task that has inherently no classical analogue.
[ "quantum machine learning", "quantum data classification" ]
https://openreview.net/pdf?id=r1lgm3C5t7
https://openreview.net/forum?id=r1lgm3C5t7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyxnXjJJeV", "Hyl6l37i3Q", "BJl60PYchX", "rJgSi_Eq2Q" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544645412091, 1541254132675, 1541212117514, 1541191837338 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1325/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1325/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1325/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1325/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper needs work to improve clarity and strengthen the technical message. Also, the authors broke the policy of anonymous submission which disqualifies the paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Paper needs improvement\"}", "{\"title\": \"Review TLDR: Ok good fit for ICLR maybe even better for QIP\", \"review\": \"Authors give a method to perform a full quantum problem of classifying unknown mixed quantum states. This is an important topic but the paper is ok and I think the test case is a bit lacking.\\n\\nThe theory is sound and the math is good. The only question I have is how does this hold on a real quantum computer such as IBMQ/rigetti quantum computing etc.. or even under a noisy simulator \\n\\nAlthough the paper is sounds and it is a good idea, the presentation is a bit lacking. There are several typos and formatting problems, such as excess spaces and some sort of hex code (9b8d) in the abstract which I am guessing is left over from the NIPS template.\\nTwo other things is that usually in double blind review one should not leave the emails with affiliation and one should anonymize the Acknowledgements as well.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Classifying a quantum state with a neural network--Not sure ICLR has the right audience for this\", \"review\": \"Summary of paper:\\nThe authors partially integrate a neural network into classical approaches to classify the state of a quantum circuit. The model is not actually clear in what it is doing, but there are some trained weights somewhere. They allow for an \\\"uncertainty\\\" prediction by giving one more node than there are classification targets, corresponding to a less-penalized uncertain prediction. They evaluate their model on numerical simulations.\", \"strengths\": \"-\", \"weaknesses\": [\"The neural network architecture is entirely standard with nothing new.\", \"The paper is poorly written and very hard to follow.\", \"The focus is almost exclusively on the application, and yet the application is not explained effectively.\", \"The implication of the results and usefulness is not elaborated.\", \"The particular contributions are not clear.\"], \"suggested_revisions\": [\"What is the 9b8d in the first sentence of the abstract?\", \"\\\"...been developed to address the question [of] whether quantum mechanics...\\\"\", \"\\\"...for all the dataset[s] in Table 1...\\\"\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Not double blind\", \"review\": \"Unfortunately, while this is interesting work, the authors emails are listed on the first page and the acknowledgments are very revealing. 
I am a big fan of Google, UCL, and the Royal Society, and this strongly biases my view of the work.\", \"my_biased_review\": [\"the paper is interesting, and should go to another venue. I do not think the authors will get benefit from presenting this work at ICLR (there is a tiny quantum focus).\", \"how is the cost function justified? I'd be curious to see how the authors derived it. Right now above Eq 2.4 it seems like it is heuristic to balance successful/erroneous/inconclusive rates. If it is a heuristic, the paper should clearly state this.\", \"using simple examples of quantum data and quantum states would go a long way towards helping me understand the problem setup (Eq 2.1). It took me a while to grok this.\", \"The acronym POVM is never defined.\"], \"rating\": \"2: Strong rejection\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}" ] }
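The discrimination objective questioned in these reviews (balancing erroneous against inconclusive outcomes) can be illustrated without any quantum hardware. Below is a minimal classical Python sketch, not the paper's circuit or cost function: the POVM parameterization, the random mixed states, and the 0.5 trade-off weight are assumptions made only to show which quantities a three-outcome measurement trades off.

import numpy as np
from scipy.linalg import sqrtm

def random_density(dim, rng):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho).real

def povm_from_params(mats):
    # Build a valid POVM {E_i} from arbitrary matrices A_i:
    # E_i = S^(-1/2) A_i^† A_i S^(-1/2) with S = sum_i A_i^† A_i,
    # so every element is positive semidefinite and they sum to the identity.
    es = [a.conj().T @ a for a in mats]
    s_inv_sqrt = np.linalg.inv(sqrtm(sum(es)))
    return [s_inv_sqrt @ e @ s_inv_sqrt for e in es]

rng = np.random.default_rng(0)
dim = 2
rho_a, rho_b = random_density(dim, rng), random_density(dim, rng)

# Three outcomes: "state a", "state b", "inconclusive".
params = [rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim)) for _ in range(3)]
E_a, E_b, E_inc = povm_from_params(params)

prob = lambda E, rho: np.trace(E @ rho).real
error = 0.5 * (prob(E_b, rho_a) + prob(E_a, rho_b))            # wrong-label rate
inconclusive = 0.5 * (prob(E_inc, rho_a) + prob(E_inc, rho_b))  # abstention rate
loss = error + 0.5 * inconclusive  # assumed heuristic weighting of the two failure modes
print(error, inconclusive, loss)

In the paper's setting the generalized measurement is realized by a parameterized circuit acting on the input plus ancilla qubits, and parameters playing the role of those above would be tuned with a gradient-based optimizer such as Adam rather than drawn at random.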
r1gl7hC5Km
Adapting Auxiliary Losses Using Gradient Similarity
[ "Yunshu Du", "Wojciech M. Czarnecki", "Siddhant M. Jayakumar", "Razvan Pascanu", "Balaji Lakshminarayanan" ]
One approach to deal with the statistical inefficiency of neural networks is to rely on auxiliary losses that help to build useful representations. However, it is not always trivial to know if an auxiliary task will be helpful for the main task and when it could start hurting. We propose to use the cosine similarity between gradients of tasks as an adaptive weight to detect when an auxiliary loss is helpful to the main loss. We show that our approach is guaranteed to converge to critical points of the main task and demonstrate the practical usefulness of the proposed algorithm in a few domains: multi-task supervised learning on subsets of ImageNet, reinforcement learning on gridworld, and reinforcement learning on Atari games.
[ "auxiliary losses", "transfer learning", "task similarity", "deep learning", "deep reinforcement learning" ]
https://openreview.net/pdf?id=r1gl7hC5Km
https://openreview.net/forum?id=r1gl7hC5Km
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJxbHGYxeE", "Bklzqu6sRX", "HylMgwajC7", "HylMDMj5Am", "r1er2gjqAX", "SyxdCRc5RQ", "rkxamqUqAX", "HkxmPvI9AX", "Ske2aSLqCQ", "BkxRqSI9C7", "Hkx9DB85Cm", "Syg2_4L9A7", "r1eyMy-92X", "BklWUQ1OhX", "Bklbw0cw2X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544749624603, 1543391369859, 1543390954512, 1543316057557, 1543315628546, 1543315151749, 1543297573368, 1543296858756, 1543296451735, 1543296406339, 1543296354300, 1543296115915, 1541177095395, 1541038921112, 1541021273053 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1324/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1324/Authors" ], [ "ICLR.cc/2019/Conference/Paper1324/Authors" ], [ "ICLR.cc/2019/Conference/Paper1324/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1324/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1324/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1324/Authors" ], [ "ICLR.cc/2019/Conference/Paper1324/Authors" ], [ "ICLR.cc/2019/Conference/Paper1324/Authors" ], [ "ICLR.cc/2019/Conference/Paper1324/Authors" ], [ "ICLR.cc/2019/Conference/Paper1324/Authors" ], [ "ICLR.cc/2019/Conference/Paper1324/Authors" ], [ "ICLR.cc/2019/Conference/Paper1324/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1324/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1324/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper tackles the problem of using auxiliary losses to help regularize and aid the learning of a \\\"goal\\\" task. The approach proposes avoiding the learning of irrelevant or contradictory details from the auxiliary task at the expense of the \\\"goal\\\" tasks by observing cosine similarity between the auxiliary and main tasks and ignore those gradients which are too dissimilar.\\n\\nTo justify such a setup one must first show that such negative interference occurs in practice, warranting explicit attention. Then one must show that their algorithm effectively mitigates this interference and at the same time provides some useful signal in combination with the main learning objective. \\n\\nDuring the review process there was a significant discussion as to whether the proposed approach sufficiently justified its need and usefulness as defined above. One major point of contention is whether to compare against the multi-task literature. The authors claim that prior multi-task learning literature is out of scope of this work since their goal is not to measure performance on all tasks used during learning. However, this claim does not invalidate the reviewer's request for comparison against multi-task learning work. In fact, the authors *should* verify that their method outperforms state-of-the-art multi-task learning methods. Not because they too are studying performance across all tasks, but because their method which knows to prioritize one task during training should certainly outperform the learning paradigms which have no special preference to one of the tasks. \\n\\nA main issue with the current draft centers around the usefulness of the proposed algorithm. First, whether the gradient co-sine similarity is a necessary condition to avoid negative interference and 2) to show at least empirically that auxiliary losses do offer improved performance over optimizing the goal task alone. 
Based on the experiments now available the answers to these questions remains unclear and thus the paper is not yet recommended for publication.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Further validation of algorithm needed\"}", "{\"title\": \"Clarifying what we mean by \\\"multi-task\\\"\", \"comment\": \"We suspect that this disagreement is primarily caused by what we mean by \\u201cmulti-task learning\\u201d versus what the reviewer means by \\u201cmulti-task learning\\u201d.\\n\\nTo be more specific, we further clarify the differences between [Chen et al,. ] & [Kendall et al.,] and our paper, assume we have two tasks, L_{main} and L_{aux}: \\n\\n[Chen et al,. ] & [Kendall et al.,]: \\n- Problem setup: achieve good performance on both L_{main} and L_{aux}\\n- They optimize [ w_{main} * L_{main} + w_{aux} * L_{aux} ], where w_{main} is tuned based on L_{main} only; w_{aux} is tuned based on L_{aux} only.\", \"our_paper\": \"- Problem setup: achieve good performance on L_{main} only\\n- We optimize [ L_{main} + \\\\lambda * (L_{aux} ], where \\\\lambda = 2*sign(cosine( \\\\nabla_L_{main}, \\\\nabla_L_{aux} ))-1, we refer to this as \\\"our method\\\"\\n- NOTE: *\\\\lambda depends on both L_{main} and L_{aux}. This is the crucial difference*\\n\\nSince \\\\lambda is a binary weight in our \\u201cunweighted\\u201d version, we compare to:\\n(i) L_{main}, which is equivalent to always set \\\\lambda=0, we refer to this as \\u201csingle task\\u201d\\n(ii) L_{main} + L_{aux}, which is equivalent to always set \\\\lambda=1, we refer to this as \\u201cmulti-task\\u201d \\nFor this reason (that the weight \\\\lambda is binary), we believe this is a fair comparison to test the effectiveness of our method without additional confounding factors. We did not intend to claim our method is better than all variants of multi-task learning papers such as those in [Chen et al,. ] & [Kendall et al.,] (they are solving a different problem, see also the earlier comment). Apologies if this was unclear, we will be happy to rephrase the claims/text in our paper. \\n\\nAs we mentioned earlier, the work of [Chen et al,. ] & [Kendall et al.,] is complementary as they address the problem of scaling losses individually, while our method focuses on capturing the alignment between losses, therefore, deciding if an auxiliary task will be helpful for the main task and for how long. One way to combine the strengths of our method and their work would be to optimize the following: \\nw_{main} * L_{main} + \\\\lambda * w_{aux} * L_{aux} \\nWe leave this for future work.\\n\\nDirectly comparing our method to L_{main} + w_{aux} * L_{aux} would not be appropriate as \\n(i) it is a bit unfair to those methods as w_{aux} only depends on L_{aux} \\n(ii) due to the confounding factors (aux loss is \\u201cweighted\\u201d in one vs \\u201cunweighted\\u201d in the other), it is hard to draw any meaningful conclusions about whether the performance difference is coming due to individually scaling the loss or scaling the loss by the alignment.\"}", "{\"title\": \"Clarification\", \"comment\": \"Apologies for not being clearer: we meant the statement \\u201cestablished methods \\u2026 \\u201d in the context of usual auxiliary tasks in RL, i.e., [Jaderberg et al.,] and [Mirowski et al.,]. As we have pointed out in the previous rebuttal, \\u201cthe auxiliary task in Mirowski et al. 
is a depth prediction task which predicts an extremely low dimensional version of the depth map, instead of trying to construct the full depth map\\u201d. Similarly in Jaderberg et al., 2017, an example is the use of an \\u201cimmediate reward prediction\\u201d as the auxiliary task, which explicitly helps the agent in shaping its internal feature learning thus more efficient in terms of the amount of experience needed to learn.\\n\\nThis is very different from the setup in [Chen et al], [Kendall et al.], so we agree that our statement wouldn\\u2019t be applicable in those contexts. While it is true that in their papers equal weight hurts the performance in one task or another, it is not clear that the results in [Chen et al., Table 2] and [Kendall et al., Table 1] are caused necessarily by negative interference. A confounding factor in their paper is that the scales/units of their losses are very different. The fact that their proposed solutions, which take into account only the scale of the losses without looking at their interaction, can still achieve better performance, shows that part of the issue is ensuring that the individual losses are on the same scale (through the course of training) and not that one necessarily hurts the performance of the other. Quoting [Kendall et al.], their method \\u201callows us to simultaneously learn various quantities with different units or scales in both classification and regression settings\\u201d, quoting [Chen et al] their goal is to \\u201cplace gradient norms for different tasks on a common scale through which we can reason about their relative magnitudes\\u201d. These methods are primarily addressing the scaling problem and not directly addressing negative interference as we are (see also below where we further clarify the differences).\\t\\n\\nWe disagree with the reviewer that the \\\"paper uses set of problems which are not realistic and practical\\\". For supervised learning, we use (subsets of) ImageNet dataset which is more challenging than say MNIST/SVHN used in publications. For RL, Atari games have been extensively used as the benchmark in many publications. From both perspectives, we believe our experiment design is realistic and valid, and comparable with other benchmarks used in the literature.\\n\\nRegarding the comment that our experiments were \\u201cdesigned to satisfy the assumptions of the setup in the paper\\u201d. We picked these experiments as we know the ground truth of whether an auxiliary task helps or not, and we can scientifically validate whether our experiments agree with our hypothesis. Note that our methods are not given any kind of privileged information (whether the auxiliary loss helps or hurts) during training, the ground truth is just for us to verify if our method is working as intended. We believe it is completely reasonable to define particular tasks and in fact preferable to make such assumptions explicit, as it allows us to show that the method indeed works due to the reasons cited in the paper.\\n\\nAs mentioned earlier, we\\u2019re happy to additionally include experiments using the multi-task setup of [Kendall et al.] or [Chen et al] in the final version. To the best of our knowledge, the ground truth task similarity is not available and the code for neither of those papers is publicly available (we\\u2019d appreciate pointers if the reviewer is aware of an implementation!), so we need to reproduce their experimental setup from scratch, which was not possible in the short rebuttal time. 
We\\u2019ll include this experiment in the final version.\\n\\n\\\"Although they are scientifically valuable, they are not relevant to the practitioners.\\\": We believe our work is relevant to practitioners and valuable to the community. See also Reviewer 2\\u2018s comments which said \\u201cthe proposed method is a welcome addition to the tool box of practitioners.\\u201d\"}", "{\"title\": \"Partially agree with the response\", \"comment\": \"If the main purpose of the experiments is showing that no negative interference happens during training, I partially agree with the authors since it is showing better performance than naive multi-task learning with equal weighting. On the other hand, in ImageNet the single task outperforms both methods; hence, some negative interference happens. Claiming it as a success is little bit far fetched. Only conclusion I see is that the method has less negative interference than most naive multi-task baseline.\\n\\nFor learning Breakout and Ms. Pacman simultaneously comment, I misunderstood that part of the paper. Thanks for the clarification.\"}", "{\"title\": \"Disagree with the response\", \"comment\": \"(i) I strongly disagree with the authors that they are not fair baselines. Authors already compare with multi-task learning. However, they choose the most naive multi-task learning algorithm in the literature (equal weighting). A fair comparison to multi-task learning requires comparison to strong multi-task learning baselines. All the cited papers are multi-task learning papers and authors need to choose a strong multi-task learning baseline. Authors use sentences like \\\"...we compare single-task training, multi-task training...\\\". These claims require strong multi-task baselines not naive baselines.\\n\\n(ii)/(iii) Yes, they are doing something completely different since they are doing multi-task learning. But, paper is claiming the proposed method is better than multi-task learning. Hence, paper need a strong multi-task learning baselines.\"}", "{\"title\": \"Misunderstandings/Errors\", \"comment\": \"The statement that \\\"In contrast, the established methods choose their auxiliary tasks to be the ones that always help\\\" is simply wrong. All the loss scaling methods actually show that naive implementation of the multi-task learning actually hurts some times. For example, [Chen et al., Table 2]: naive multi-task learning hurts normal estimation error. [Kendall et al., Table 1]: naive multi-task learning hurts segmentation performance. In summary, these experiments are still very interesting for the practitioners and proposed method is applicable. And, the statement that \\\"there is no significant negative interference\\\" is NOT correct.\\n\\nI think authors misunderstood my definition of toy problem. I did not mean an easy problem. The paper uses set of problems which are not realistic and practical. They are designed to specifically satisfy the assumptions of the setup in the paper. Although they are scientifically valuable, they are not relevant to the practitioners. As I said, the existing problems studied in the literature already shows significant negative interference and relevant to the practitioners. 
So, I see no reason for them to be included in the paper.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank Reviewer 2 for the feedback!\\n\\nRegarding the reviewer\\u2019s concern on computation expenses of our method, our method does not add expensive computation because the only additional step is to compute a one-step cosine similarity between gradients. Since computing cosine similarity is of O(D) time complexity where D is the dimensionality of the shared parameters, it is just a constant multiplier, as already noted by the reviewer. \\n\\nRegarding the reviewer\\u2019s suggestion on running more experiments: we believe that our experiments support our main claims and as the reviewer noted, we already report a diverse set of experiments. For the suggestion of \\u201cexperiments where auxiliary tasks have been used before would be interesting to test with the only addition being introducing the method proposed.\\u201d, note that this may not be necessarily the best setup to illustrate our method as some of the prior work could have selected tasks such that the auxiliary task always helps (i.e. the cosine similarity is always greater than 0). Also the problem setup and motivations are slightly different in some of the prior work. For example, in Mirowski et al., the use of depth prediction task as the auxiliary task was meant to assist \\u201crepresentation learning\\u201d, which is different from the goal of our method that the auxiliary task seeks \\u201calignment\\u201d with the main task in their gradient space (e.g., in the RL GridWorld task, our setting has no representation learning at all since policies are tabular). \\n\\nWe believe that this method is a useful addition to the tool box (as the reviewer also observed) and opens up the possibilities of using auxiliary tasks which may help initially but hurt in the end. We hope that the simplicity of our method will encourage the community to try these on other domains and investigate extensions that further improve performance.\\n\\nThanks for pointing out the writing mistakes. We have fixed them in the revised version.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank Reviewer 1 for the feedback! We have revised the paper based on your suggestions and we also address your concerns individually below.\\n\\nIn the first review point, the reviewer asked for a theoretical guarantee of our method. We have shown in section 2 that our method has a convergence guarantee (although there is no guarantee of an improvement of the speed of convergence). Proof details were presented in Appendix A.1. We agree that defining the usefulness of an auxiliary task beforehand is an interesting future direction. We believe that the level of usefulness has to be conditioned on the model (or at least a family of models) and potentially on the initialization of parameters. In some regard, our proposed approach relies on conditioning on all of this information (e.g. the value of theta) and it would be interesting if some of this can be efficiently marginalized away.\\n\\nFor the second review point, it is unclear to us that another metric (and definitely not true for L2) can guarantee convergence on the task of interest. Based on our theoretical proof in section 2, cosine similarity is the metric for which we can prove the convergence and hence the only one that is applicable for our approach. 
In particular, we believe that instead of thinking of cosine as a metric, a fruitful perspective is to see it as a projection of the gradient of the auxiliary loss in the subspace of descent directions of the main loss.\\n\\nFor the third review point, we have revised our paper and included Algorithm 1 in the main text. We will move some more content from the appendix to the main text for the final camera ready.\\n\\nFor the fourth review point, we did not identify the near or far class pairs in ImageNet based on the semantic similarity of class names. Instead, we used two semantic quantitative measures, the lowest common ancestor (LCA) in the tree hierarchy and the Frechet Inception Distance (FID) of pre-trained image embeddings. The procedure is described briefly in section 3.1 and detailed in Appendix E. We believe they have quantitatively defined a good similarity metric for our setting. In Figure 2b, our cosine method did not perform worse before step 5000. In fact, the cosine method has shown a jumpstart at the beginning of training compared to the other methods. We used RMSProp with momentum as the optimizer for image classification tasks.\\n\\nFor the fifth review point, we are not sure what the reviewer means by \\u201cother relatedness method in reinforcement learning\\u201d. As mentioned in section 4, to the best of our knowledge, so far there has not been a working method that could quantify task relatedness in RL (Carroll and Seppi, 2005; Ammar et al., 2014). We would appreciate more suggestions from the reviewer on this. Regarding the reviewer\\u2019s question about noise in the gradient, we have explained this in footnote 4 of our paper: \\u201c...we compute cosine similarity between a distillation gradient and a single sample of the policy gradient estimator, meaning that we are using a high variance estimate of the similarity...\\u201d. Specifically, in this RL experiment of distilling and then transferring within the *same* task, the noise of the gradient in a single sample makes the tasks less similar---but they are actually the same (e.g., their cosine similarity should be very close to 1, but due to the noise, it could be much lower than 1 for a single sample).\"}", "{\"title\": \"Response to Reviewer 3: part 3\", \"comment\": \"3. final major issue \\u201cexperimental results are suggesting the method is not effective.\\u201d: Our goal was to show that our method can successfully detect and block negative transfer, while recovering the performance of multi-task learning on helpful auxiliary tasks; our experiments on supervised learning and reinforcement learning support our claims that our method is effective at this.\\n\\nIn the ImageNet experiment with the far class pair [48, 920], the results show exactly this property---when class 48 detects that the loss from class 920 is negatively affecting its learning, it ignores class 920 and continues its learning, thus in the end matching the single task performance (the performance boost after around 15k steps). In contrast, in the naive multi-task setting, class 48 is not able to gain this boost because of the consistent negative effect from class 920. Similarly, for the near class pair [871, 484], our method is comparable to multi-task learning because both losses share a similar gradient direction and no dropping occurs. We have added a description in the caption of Figure 2 to clarify this in our paper. \\n\\nIn the Atari experiment of Breakout and Ms.
Pacman, we\\u2019d like to clarify what the reviewer said about \\u201c...I do not get why the performance of breakout is relevant...\\u201d. The performance of Breakout is relevant because in this setting, our main task is to learn Breakout and Ms. Pacman simultaneously (i.e. the main task is multi-task). We chose the main task itself to be multi-task to illustrate a complex scenario where the auxiliary task helps with only part of the main task, and that too only initially. Therefore, we are not comparing \\u201csingle task vs. multi-task\\u201d, but rather using an auxiliary task, a teacher agent with a policy distilled from Breakout only, to bootstrap the learning of both Breakout and Ms. Pacman. In Figure 5 of our paper we show that, when always distilling from the teacher (Multitask RL + distillation), the agent learns only Breakout but not Ms. Pacman, because the teacher only knows about Breakout; when never distilling (Multitask), the reward from one game saturates the learning of the other. As shown in the \\u201cNormalized Average Score\\u201d plot of Figure 5, our method performs better than all these methods because the agent is able to learn both games well. \\n\\nRegarding the reviewer\\u2019s concern that Ms. Pacman\\u2019s learning curve had not plateaued yet, we\\u2019d like to clarify that we followed a typical setting on Atari where one usually sets a fixed \\u201cbudget\\u201d of steps and compares performance at that point in time. This is a reasonable setting since 1) our method cares about bootstrap learning and 2) early stopping is not well defined in RL. We could increase the budget and train longer, but that will be more resource-consuming and will not change the currently presented results in any meaningful way.\\n\\n---\\n\\nTo summarize, our goal was to propose a method that automatically (i) leverages the auxiliary task when it is helpful (e.g. learn faster) and (ii) blocks negative transfer when the auxiliary task is not helpful. Our experiments support that our method achieves this and show that it can recover multi-task performance when the auxiliary task helps and gracefully recover single-task performance when the auxiliary task hurts. We have demonstrated that our method works well on a variety of supervised learning as well as reinforcement learning experiments (which are more challenging). In addition, we have noted and shown in Appendix D an example of where our method could slow down convergence and discussed a few drawbacks. \\n\\nWe hope we have addressed your major concerns.\\n\\n\\u201cMINOR NITPICKS\\u201d\\n- We have revised our paper to include Algorithm 1 in the main text. \\n- We have added a description for each class ID in the main text. \\n- Scaling our method to many tasks should be relatively straightforward since the treatment of each auxiliary loss is done independently (as the gradient of each auxiliary loss is projected on the gradient of the task of interest). We have added this as future work in the discussion. \\n- Regarding \\u201cDoes the theory still hold for loss functions which are not Lipschitz as the Cauchy's gradient method requires that for convergence\\u201d, this is a good point. We will look into this in future work.\"}", "{\"title\": \"Response to Reviewer 3: part 2\", \"comment\": \"2. \\u201cAnother major issue is the weak multi-task learning baseline used in the paper.\\u201d We thank the reviewer for pointing out the recent developments in the related literature.
However, in our opinion, our method is not directly comparable with the papers mentioned by the reviewer due to the following reasons (we have also cited these papers and added a discussion in the related work):\\n\\n(i) as the reviewer already noted, these papers aim to solve the multi-objective optimization problem (i.e., all tasks are of interest) which is not the same problem setup as our method. In particular for the GridWorld experiment, simply adding losses is actually the optimal thing to do in the multi-task setting since there is no representation learning (policies are tabular). However, the simple adding method fails completely in our problem setup because of the fact that a multi-task solution is not what we are after---our goal is to extract useful information for a single important task. Similarly, for the example functions in Figure 1, any multi-task oriented adaptation scheme will fail because the minimum of the multi-task solution is nowhere near the solution of the main task. To summarize, multi-task techniques are not capable of making the right tradeoffs under the scenarios where one is only interested in the performance of the main task while all other tasks are just for the purpose of bootstrapping learning, and can potentially hurt performance on the main task. \\n\\n(ii) these papers proposed different ways of scaling the multi-loss function and they follow the intuition of finding a weight for each task individually (e.g., GradNorm scales the weight based on gradient magnitude of each task) without looking at their interaction. However, our method is not a \\u201cgradient scaling\\u201d method like in these papers; it is a vector field projection technique and this distinction is crucial to understand. Fundamentally, our proposed approach is a technique that allows adding additional gradient fields (or losses) in a way that guarantees converges to the local optima of the main task. The auxiliary vector fields, or gradients, can only affect the convergence speed (which we have empirically shown to be better), and we prove this convergence property in proposition 1 and 2 (see Appendix A.1 for details). The papers mentioned are trying to optimize a different goal. We will try to add experiments using the scaling methodologies as in these papers in the camera-ready version, but we emphasize that this would not be a fair comparison due to the distinct objectives between these methods and ours. Moreover, their mechanisms do not explicitly look for the relatedness of each task, while ours measures task relatedness using gradient cosine similarity. \\n\\n(iii) one should also note that the overall setting is different in these papers compared to ours. In the papers mentioned, they all have a multi-task setting where a neural network takes image inputs and produces multiple labels based on the *same input data*. In contrast, our setting considers *different input data* for each task. In ImageNet, we input images from two different classes; in RL GridWorld, the environments are of different distribution; in Atari games, Breakout and Ms. Pacman are two very distinct tasks. This difference is critical for the usefulness of our method versus their methods. For example, using gradient magnitude to scale loss function between Breakout and Ms. Pacman (as was done in GradNorm) would not work since their magnitude would be very different due to distinct input distributions and different environment dynamics in RL tasks. Furthermore, the tasks used in their papers (e.g. 
predicting segmentation map and depth from a single image) could inherently be helpful to each other (cosine similarity always greater than 0), so these tasks may not be the best to illustrate the problem our method is trying to solve. \\n\\nTo summarize, these papers solve the problem of weighing individual losses for multi-task learning without looking at their interaction, whereas we solve the problem of deciding when to use auxiliary task when we ultimately only care about main task performance, by looking at gradient cosine similarity. Nonetheless, we agree that our approaches are complementary and could potentially be combined to further improve performance. For example, as the reviewer said, instead of using simply \\u201cbinary decision of using both gradients or only the main one\\u201d, we could adapt ideas from existing literature to find a better weighing mechanism.\"}", "{\"title\": \"Response to Reviewer 3: part 1\", \"comment\": \"We thank Reviewer 3 for their detailed feedback!\\n\\nWe have clarified our problem setup in Section 1.1: The goal is to devise an algorithm that can automatically (i) leverage the auxiliary task when it is helpful (e.g. learn faster) and (ii) block negative transfer when the auxiliary task is not helpful (i.e. recover the performance of training only on the main task). We have updated captions of figures to clarify our experimental setup. \\n\\nWe address the major issues one by one.\\n\\n1. \\u201cOne major issue for me is the experimental setup.\\u201d We handpicked these tasks because they show rich behaviors in a way that *the auxiliary task is helpful to the main task initially, but hurt later on*---this is the setting we care about and our adaptive mechanism is reasonable under this scenario. In contrast, the established methods choose their auxiliary tasks to be the ones that always help. These tasks are usually simple, converge fast, and there is no significant negative interference that can happen during the training phase. For example, the auxiliary task in Mirowski et al. is a depth prediction task which predicts an extremely low dimensional version of the depth map, instead of trying to construct the full depth map---even though the latter might seems to be a more natural task. Our approach is meant to reduce the pressure on finding a suitable auxiliary task; if a task always helps, then simple multi-task could already work and there is no need for using our method to compute their gradient cosine similarities. We respectfully disagree that all of our experiments are \\u201ctoy\\u201d. We experimented with ImageNet dataset, which even in the form of our modified binary task version is not an easy task. We also tested in the Atari domain, which is a difficult task for RL agents. Given the variety of our experimental designs (cross-domain and cross-task, supervised learning and reinforcement learning), we think it\\u2019s unfair to say that \\u201cpaper uses set of toy experiments\\u201d.\"}", "{\"title\": \"General Response\", \"comment\": [\"We thank all reviewers for their detailed feedback. We have uploaded a revised version addressing all of the concerns raised by the reviewers. 
Briefly, highlights of the major changes we made include:\", \"Moved Algorithm 1 from Appendix C to Section 3;\", \"Added ImageNet class names in Section 3.1;\", \"Cited and discussed the papers mentioned by Reviewer 3 in Section 4;\", \"Text revisions to improve readability.\", \"We also address individual reviewer comments below.\"]}", "{\"title\": \"Interesting idea but weak experimental setup.\", \"review\": \"The paper is addressing the problem of a specific multi-task learning setup such that there are two tasks namely main task and auxiliary task. Auxiliary task is used for the sole purpose of helping the main one. In other words, auxiliary task performance is not of interest. The simple and sensible approach proposed in the paper is using cosine similarity between the gradients of two loss functions and incorporating the auxiliary one if it is positively aligned with the main gradient. Authors suggest to further scale loss functions using the cosine similarity but it only experiments with the simpler case of binary decision of using both gradients or only the main one. Authors provide a convergence guarantee (without any convergence rate) by simply extending the convergence of gradient method.\\n\\nThe paper is definitely addressing an important problem as the authors cite many previous work which uses the setup of set of auxiliary tasks helping a main one. The method is simple and easy to implement. Hence, it has a potential to be useful for the community.\\n\\nOne major issue for me is the experimental setup. The authors cite many interesting, realistic and practical setups (Zhang et al., 2016; Jaderberg et al., 2017; Mirowski et al., 2017; Papoudakis et al., 2018), but do not use any of these setups in their experiments. Instead, paper uses set of toy experiments. This is very puzzling to me as all these papers set existing baselines for interesting problems which authors can easily compare. I think the paper needs to be experimented and compared with these established methods.\\n\\nAnother major issue is the weak multi-task learning baseline used in the paper. There have been many interesting developments in adaptive scaling of multiple loss functions in the literature. However, paper does not compare with them. Example of these methods are: [GradNorm: Gradient Normalization for Adaptive Loss Balancing in Deep Multitask Networks, ICML 2018] and [Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics, CVPR 2018]. Although these methods addresses the case of all tasks being important, it is a valid baseline and need to be compared. Similar to my first points, these papers also use very realistic and interesting experiments which would fit better than the toy experiments in the paper.\\n\\nFinal major issue is the fact that experimental results are suggesting the method is not effective. In ImageNet experiment, auxiliary tasks actually hurt the final performance as the single task is better than all methods including the proposed one. Proposed method does not guarantee that auxiliary tasks will have no harm. The GridWorld experiment is sort of a sanity check to me as it is very hand-crafted. For Breakout experiment, single task actually outperforms all baselines and this means the proposed method results in a harm similar to ImageNet case. For Breakout+MSPacMan experiment, multi task and the proposed method performs almost exactly same. I do not get why the performance on Breakout is relevant for this case since it is not a main task. 
The paper clearly states that only performance of an interest is the main one which is MSPacMan in this case. Also, in this experiment clearly all methods are still learning as the curve did not plateau yet. I am curious, why the learning is stop there. I do not think we need the method to be effective to be published; but, the negative result should be explained properly.\\n\\nMINOR NITPICKS\\n- Algorithm 1&2 are crucial to understand the paper, they should be in main text\\n- ImageNet class IDs change between years. So, actual wordnet IDs or class names is a better thing to state\\n- What happens if there are multiple auxiliary tasks?\\n- Does the theory still hold for loss functions which are not Lipschitz as the Cauchy's gradient method requires that for convergence\\nIn summary, the paper is proposing a sensible method for an important problem. However, it is only tested for toy problems although there are interesting existing setups which would be ideal for the method to be tested. Moreover, it is only compared with the most-naive multi task learning baselines. Even this limited experimental setup does not confirm what the paper is claiming (using auxiliary tasks only when they help). And the paper fails to explain this failure cases. The method needs to be experimented with a more realistic setup with more realistic baselines.\\n\\n------\", \"after_rebuttal\": \"I gave detailed responses to each part of the rebuttal below. Here is the summary:\\n\\nAlthough the response addresses some of my concerns. There are still major issues with the experimental study. 1) there are existing, relevant and well-studied multi-task setups with negative interference. Method should be experimented with some of those setups. 2) Multi-task baseline in the paper is naive and far from state-of-the-art. Paper need strong baselines as discussed. Hence, I am keeping my score. Paper needs to be improved with a stronger experimental study and need to be re-submitted.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Used grandent similarity to decide whether an auxiliary task is useful or hurting the main task. Showed improved results in supervised learning and reinforcement learning domains.\", \"review\": \"The paper studies the problem of how to measure the similarity between an auxiliary task and the target tasks, and further decide when to use the auxiliary loss in the training epoches. The proposed cosine simiarity based soft gradient update scheme seems reasonable. The author(s) also experiment the proposed method on three tasks, one supervised learning image classification task, two reinforcement learning tasks, and show improved results respectively.\\n\\nThe paper is in generally well-written. However it would be great if the concerns below could be addressed or discussed in the paper.\\n\\n1) The proposed method is based on the intuition: if the gradients of the target and auxiliary loss are in the same direction, the auxiliary loss will help the main/target task. Some examples are showed in the paper to support this argument, however it would be helpful if there is some theoritical gurantee on this. 
So a more general question would be: rather than define the similarity measure to measure the gradient similarity of the target and auxiliary loss, it would be more useful to try to learn or define whether the auxiliary task is good for the target task beforehand.\\n\\n2) In proposition 1, if the concerns in 1) are reasonable, the equation would be doubtful. For example, one can simply try (g(target task)-g(auxiliary task)) in the equation. Besides, more similarity metrics are expected to be compared here to show why cosine is the optimal choice. For example, L2.\\n\\n3) Too much content is embedded in appendix, for example, it would be helpful to move the two algorithms or at least discussed the two variants of the gradient updates in the experimental section. Since it is not clear to me whether hard cosine mixing or soft cosine mixing is used to produce the results in the image classification task.\\n\\n4) In the image classification task, a quantitative analysis would be more convincing since the semantics of the near and far is really hard to define. Even the authors can show a vague definition, it will be helpful. In figure 2b), why the cosine method performs worse compared the other methods before 5000 in x-axis? Is this because of the noise of the gradient? Plus, what is the optimizer used in this experiment?\\n\\n5) In the first reinforcement learning task, since cosine similarity is the only method used to measure the similarity between auxiliary task and the target task, it would be useful to show the comparison among other task relatedness method in reinforcement learning. For 'This is expected as the noise in the gradients make it hard to measure if the two tasks are a good fit or not', why is this? Since cosine similarity would be zero if the two tasks are not good fit.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Simple method for using gradient information of auxiliary task when it agrees with gradient of main task\", \"review\": \"The paper proposes a method for using auxiliary tasks to support the optimization with respect to a main task. In particular, the method assumes the existence of a loss function for the main task that we are interested in, and a loss function for an auxiliary task that shares at least some of the parameters with the main loss function. When optimizing for the main loss function, the gradient of the auxiliary loss function is also used to update the shared parameters in cases of high cosine similarity with the main task. The method is demonstrated on image classification and a few reinforcement learning settings.\\n\\nThe idea of the paper is simple, and the method has a nice property of (if ignoring some caveats) guaranteeing steps that are directionally correct with respect to the main task. In that sense it is useful in practice, as it limits the potential damage the auxiliary task does to the optimization of the main task.\\n\\nAs the authors also note, the method suffers from some drawbacks. Although the method limits the negative effect of the auxiliary task on the optimization of the main loss function, it can still slow down optimization if the auxiliary task is not well chosen. In that sense, the method is no silver bullet. 
In addition, the method seems fairly computationally expensive (it would be interesting to understand how much it slows down an update, I would assume the added complexity is roughly a constant multiplier). However, as an alternative to naively adding an auxiliary task, the proposed method is a welcome addition to the tool box of practitioners.\\n\\nAlthough the experiments presented in the paper are quite different from each other, I would have wished for even more experiments. The reason is that as the method does not guarantee faster convergence, its applicability is mainly an empirical question. Especially experiments where auxiliary tasks have been used before would be interesting to test with the only addition being introducing the method proposed.\\n\\nThe paper is generally well written and the results are fairly clearly presented. As a minor comment, the authors might want to check that articles (such as \\\"the\\\") are not missing in the text.\\n\\nAll in all, the main merit of the proposed method is its conceptual simplicity and easy to understand value in practical applications where an auxiliary loss function is available. The method also seems to work well enough in the experiments presented.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
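The update rule discussed throughout this thread is easy to state in code. Below is a minimal PyTorch sketch of the unweighted variant (keep the auxiliary gradient only when its cosine similarity with the main gradient is positive), with the cosine-weighted variant from the abstract available as an option; the toy model, learning rate, and plain SGD update are illustrative assumptions, not the authors' implementation.

import torch

def cosine_gated_step(params, loss_main, loss_aux, lr=0.1, weighted=False):
    # One update on the shared parameters: the auxiliary gradient is used only
    # when it points in a descent direction for the main loss (positive cosine).
    g_main = torch.autograd.grad(loss_main, params, retain_graph=True)
    g_aux = torch.autograd.grad(loss_aux, params)
    flat_main = torch.cat([g.reshape(-1) for g in g_main])
    flat_aux = torch.cat([g.reshape(-1) for g in g_aux])
    cos = torch.nn.functional.cosine_similarity(flat_main, flat_aux, dim=0)
    gate = cos.clamp(min=0.0) if weighted else (cos > 0).float()
    with torch.no_grad():
        for p, gm, ga in zip(params, g_main, g_aux):
            p -= lr * (gm + gate * ga)
    return cos.item()

# Toy usage: one shared weight vector and two scalar losses (assumed setup).
torch.manual_seed(0)
w = torch.randn(3, requires_grad=True)
x_main, x_aux = torch.randn(3), torch.randn(3)
loss_main = (w @ x_main - 1.0) ** 2
loss_aux = (w @ x_aux + 0.5) ** 2
print(cosine_gated_step([w], loss_main, loss_aux))

Fixing the gate to 1 everywhere recovers the "multi-task" baseline and fixing it to 0 recovers the "single task" baseline that the rebuttal compares against.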
rJ4km2R5t7
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
[ "Alex Wang", "Amanpreet Singh", "Julian Michael", "Felix Hill", "Omer Levy", "Samuel R. Bowman" ]
For natural language understanding (NLU) technology to be maximally useful, it must be able to process language in a way that is not exclusive to a single task, genre, or dataset. In pursuit of this objective, we introduce the General Language Understanding Evaluation (GLUE) benchmark, a collection of tools for evaluating the performance of models across a diverse set of existing NLU tasks. By including tasks with limited training data, GLUE is designed to favor and encourage models that share general linguistic knowledge across tasks. GLUE also includes a hand-crafted diagnostic test suite that enables detailed linguistic analysis of models. We evaluate baselines based on current methods for transfer and representation learning and find that multi-task training on all tasks performs better than training a separate model per task. However, the low absolute performance of our best model indicates the need for improved general NLU systems.
[ "natural language understanding", "multi-task learning", "evaluation" ]
https://openreview.net/pdf?id=rJ4km2R5t7
https://openreview.net/forum?id=rJ4km2R5t7
ICLR.cc/2019/Conference
2019
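The benchmark reports per-task metrics together with an overall macro-average, and the diagnostic set is scored with a correlation-style statistic (the R_3 figure mentioned in the discussion below). A minimal Python sketch of that aggregation; the task names, scores, and the use of scikit-learn's multiclass Matthews correlation for the three-way diagnostic labels are illustrative assumptions, not the benchmark's official scoring code.

import numpy as np
from sklearn.metrics import matthews_corrcoef

# Hypothetical per-task scores already computed on the hidden test sets.
task_scores = {"CoLA": 0.18, "SST-2": 0.90, "MRPC": 0.80, "MNLI": 0.72, "QNLI": 0.79}
glue_score = np.mean(list(task_scores.values()))  # macro-average over tasks
print(round(glue_score, 3))

# Diagnostic set: entailment / neutral / contradiction predictions vs. gold labels.
gold = [0, 1, 2, 2, 1, 0, 0, 2]
pred = [0, 1, 2, 1, 1, 0, 2, 2]
print(matthews_corrcoef(gold, pred))  # three-class correlation-style score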
{ "note_id": [ "rklUcATSxN", "HJlblc2e0Q", "B1xRaF2lR7", "HklEfYnxR7", "S1gHMCM0nX", "ryeyISNYhQ", "S1g4cw9Ohm", "S1eZR9Jhq7", "BklRvvUFcQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1545096845705, 1542666728965, 1542666693646, 1542666508452, 1541447181332, 1541125447334, 1541085068404, 1539205832825, 1539037030402 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1323/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1323/Authors" ], [ "ICLR.cc/2019/Conference/Paper1323/Authors" ], [ "ICLR.cc/2019/Conference/Paper1323/Authors" ], [ "ICLR.cc/2019/Conference/Paper1323/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1323/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1323/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1323/Authors" ], [ "~quan_vuong1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper provides an interesting benchmark for multitask learning in NLP.\\nI wish the dataset included language generation tasks instead of just classification but it's still a step in the right direction.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Multitask learning is one of the most important problems in AI\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for your review!\\n\\nWe agree that the diagnostic data is a key contribution of our work. We wanted to not only have an application-driven measure of progress, but also a targeted measure of performance on specific natural language phenomena that we would expect a general-purpose NLU model to handle well. \\n\\nRegarding parameter sharing, our intent was to include tasks with very little training data such that automated systems could not do well learning on just those tasks\\u2019 data. Competitive systems, then, would need to include some form of knowledge-sharing from outside data. In only requiring model predictions to evaluate on test, we wanted to avoid restricting future research to any particular paradigm of knowledge sharing. We use multi-task learning and parameter sharing because it is a straightforward baseline with lots of precedent (GenSen, Collobert and Weston, etc.), so we thought it necessary to include.\\n\\nCould you please clarify how a small *test* set would encourage few-shot learning? To the best of our knowledge, few-shot learning is when you have a small *training* set for the target task.\", \"re\": \"table 5, we agree! We\\u2019ll post an updated version of the paper shortly.\"}", "{\"title\": \"Author response\", \"comment\": \"Thanks for your thoughtful review!\\n\\nWhile we agree that more analysis would be nice, the central contribution of our paper is to motivate and introduce the benchmark. Thus, our experiments are designed to give baseline numbers for a broad range of models and to highlight the benefits of our design decisions, in order to make the case that our benchmark offers useful improvements over previous evaluation standards like SentEval. \\n\\nRegarding the diagnostic data, we do believe much of the information you mention is present in the paper. 
For example, we explicitly give the class distribution for the entire dataset (including statistics by category is a good point - we will soon add the class distribution per coarse-grained category), expert annotator agreement (high, at \\\\kappa = 0.73), and human performance (R_3 = 0.8 versus 0.28 for the best model). The last statistic in particular indicates that these examples are understandable and solvable by humans and challenging for existing models. These numbers are in-line with other semantic datasets that have been productively used by the community, for example SQuADv2 (humans get ~87 EM); SimLex-999 (0.67 correlation); WordSimilarity-353 (.61 correlation). We agree that human annotations are not perfect, but perfect annotations don\\u2019t exist, and datasets can still be useful even when their human annotations are a little noisy, as in the previously mentioned examples.\"}", "{\"title\": \"Author response\", \"comment\": \"Thank you for your review!\\n\\nTo clarify, GLUE is not a benchmark for language modeling (the task of modeling the probability of a piece of text) but rather (classification or regression based) natural language understanding. \\n\\nRegarding previous work in this space, we mention in the paper what we believe are the two major comparable works: SentEval and DecaNLP, and highlight the benefits over these neighbors afforded by our design decisions. As your review hints at, we believe that a major benefit of a well-designed benchmark is that it can more clearly distinguish models that make significant improvements and thereby incentivize researchers to work on it. Early results on GLUE seem to suggest that we have been successful in that regard.\"}", "{\"title\": \"Interesting new benchmark\", \"review\": \"Summary:\\nGLUE is a benchmark consisting of multiple natural language understanding tasks\\nthat functions via uploading to a website and receiving a score based on\\nprivately held test set labels.\\nTasks include acceptability judgement, sentiment prediction, semantic equivalence\\ndetection, judgement of premise hypothesis entailment, question paragraph pair\\nmatching, etc..\\nThe benchmark also includes a diagnostic dataset with logical tasks such as\\nlexical entailment and understanding quantifiers.\\n\\nIn addition to presenting the benchmark itself, the paper also presents models\\nfor performance baselines.\\nThere is some brief analysis of the ability of Sentence2Vector vs. more complex\\nmodels with e.g. attention mechanisms and of single-task vs. multi-task training.\", \"evaluation\": \"The GLUE benchmark seems like a well designed benchmark that could potentially\\nignite new progress in the area of NLU.\\nBut since I'm not an expert in the area of language modeling and know almost\\nnothing about existing benchmarks I cannot validate the added benefit over\\nexisting benchmarks and the novelty of the suggested benchmarking approach.\", \"details\": \"The paper is well written, clear and easy to follow.\\n\\nThe proposed benchmarks seem reasonable and illustrate the difficulty of\\nbenchmark tasks that involve logical structure.\", \"page_5\": \"showing showing (Typo)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}", "{\"title\": \"Weak reject\", \"review\": \"The paper proposes a new benchmark for natural language understanding: GLUE. 
Models will be evaluated based on a diverse set of existing language understanding tasks which encourages the models to learn shared knowledge across different tasks. The authors empirically show that models trained with multiple tasks in the dataset perform better than models that focused on one specific task. They also point out existing methods are not able achieve good performance in this dataset and request for more general natural language understanding system. The work also collects an expert evaluated diagnostic evaluation dataset for further examination for the models.\", \"quality\": \"borderline, clarity:good, originality: borderline, significance: good,\", \"pros\": [\"The benchmark is set up in a online platform with leaderboard which can be easily accessible to people.\", \"The benchmark comes with a diagnostic evaluation dataset with coarse-grained and fine-grained categories that examine different aspect of language understanding abilities.\", \"Baseline results for major existing models are provided\"], \"cons\": \"- The author should provide more detailed analysis and interpretable explanations for the results as opposed to simply stating that the overall performance is better.\\nFor example, why attention hurts performance in single task training? Why multi-tasks training actually leads to worse performance on some of the dataset? Do these phenomenons still exist if you train on a different subset of the dataset?\\nWhat are the samples that the models failed to perform well? It would be nice to get some more insights and conclusions based on the results obtained from this benchmark to shed some lights on how to improve these models. The results section should be seriously revised.\\n\\n- The diagnostic evaluation dataset seems to be a way to better understand the model, however, it is hard to see the scope of the data (are the samples under each categories balanced?). Besides, the examples in the dataset seems very confusing even for humans (Table 2). The evaluation with NLP expert is also far from perfect. I wonder how accurate is this dataset annotated (or even the sentences make sense or not), and how suitable it is for evaluating model\\u2019s language understanding abilities. It would be nice if the authors can include some statistics about the dataset.\\n\\nThe paper proposes a useful benchmark that measures different aspects of language understanding abilities which would be helpful to the community. However, I feel the novelty or take away messages from the experiment section is limited.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"A timely and useful resource\", \"review\": [\"This paper introduces the General Language Understanding Evaluation (GLUE) benchmark and platform, which aims to evaluate representations of language with an emphasis on generalizability. This is a timely contribution and GLUE will be an impactful resource for the NLP community. This is mitigated, perhaps, somewhat by the recent release of decaNLP. 
But, as discussed the authors, this has a different focus (re-framing all tasks as QQ) and further does not feature the practical tools released here (leaderboard, error analysis) that will help drive progress.\", \"Some comments below.\", \"The inclusion of the small diagnostic dataset was a nice addition and it would be nice if future corpora included similar.\", \"Implicit in this and related efforts is the assumption that parameter sharing ought to be possible and fruitful across even quite diverse tasks. While I do not object to this, it would be nice if the authors could make an explicit case here as to why should we believe this to be the case.\", \"The proposed platform is touted as one of the main contributions here, but not pointed to -- I assume for anonymity preserving reasons, but still would have been nice for this to be made explicit.\", \"I would consider pushing Table 5 (Appendix) into the main text.\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Clarification Answers\", \"comment\": \"Hi Quan,\\nThanks for using GLUE! Regarding your questions:\\n\\n- SST-2, QNLI, RTE, and WNLI are all roughly balanced, so using accuracy is reasonable here (and has been used as the evaluation metric in the past). We use F1 for datasets with class imbalances, and also to maintain comparability with previous work on those datasets.\\n\\n- For QNLI, each example consists of the original question and a single sentence from the context paragraph that was originally paired with that question. The task is to determine if that context sentence contains an answering span to the question. If I understand correctly, it's the latter of the two options you mentioned, but we do some additional filtering of question, context sentence pairs that are too easy (due to low lexical overlap).\\n\\nLet us know if you have any additional questions.\"}", "{\"comment\": \"Thank you for the paper and the dataset!\\n\\nI'm using GLUE in my own research and would like to ask a few clarification questions. \\n\\nIn Table 1, why is it that only accuracy (and not F1) is used to measure performance on SST-2, QNLI, RTE and WNLI?\\n\\nFor QNLI, in the sentence, \\\"The task is to determine whether the context sentence contains the answer to the question.\\\", is the task is to then:\\n\\n- Determine whether the answer is contained in any context sentence. The label for a (context, question) pair would then be binary, where 1 indicates at least one sentence in the context contains the answer to the question and 0 otherwise.\\nOR\\n- Determine the sentence that contains the answer out of all the sentences in the context passage. The label for each sentence in the context passage would then be binary, i.e. a sentence is assigned a gold label of 0 if the answer is not part of the sentence and 1 if it is.\", \"title\": \"Clarification Questions\"}" ] }
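The QNLI construction described in the clarification thread above (pairing each SQuAD question with individual context sentences and labeling whether that sentence contains the answer) can be illustrated with a short sketch. This is a hedged illustration under assumed field names, using NLTK's sent_tokenize; the benchmark's actual filtering of overly easy pairs is only approximated here by a crude lexical-overlap check, and make_qnli_pairs is a hypothetical helper, not code released with the benchmark.

```python
# Illustrative sketch (not the benchmark's released code): derive QNLI-style
# (question, sentence) pairs from a SQuAD-like record, labeling a pair 1 when
# the answer span falls inside that context sentence. Field names and the
# lexical-overlap filter are assumptions.
from nltk.tokenize import sent_tokenize

def make_qnli_pairs(record, min_overlap=1):
    question = record["question"]
    context = record["context"]
    answer_start = record["answer_start"]  # character offset of the answer span

    q_tokens = set(question.lower().split())
    pairs, offset = [], 0
    for sentence in sent_tokenize(context):
        start = context.find(sentence, offset)
        if start == -1:              # tokenizer normalization mismatch; skip in this rough sketch
            continue
        offset = start + len(sentence)
        label = int(start <= answer_start < offset)   # 1 = sentence contains the answer
        overlap = len(q_tokens & set(sentence.lower().split()))
        if label == 1 or overlap >= min_overlap:      # crude stand-in for the real filtering
            pairs.append({"question": question, "sentence": sentence, "label": label})
    return pairs
```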
Hk41X2AqtQ
Hierarchically-Structured Variational Autoencoders for Long Text Generation
[ "Dinghan Shen", "Asli Celikyilmaz", "Yizhe Zhang", "Liqun Chen", "Xin Wang", "Lawrence Carin" ]
Variational autoencoders (VAEs) have received much attention recently as an end-to-end architecture for text generation. Existing methods primarily focus on synthesizing relatively short sentences (with less than twenty words). In this paper, we propose a novel framework, hierarchically-structured variational autoencoder (hier-VAE), for generating long and coherent units of text. To enhance the model’s plan-ahead ability, intermediate sentence representations are introduced into the generative networks to guide the word-level predictions. To alleviate the typical optimization challenges associated with textual VAEs, we further employ a hierarchy of stochastic layers between the encoder and decoder networks. Extensive experiments are conducted to evaluate the proposed method, where hier-VAE is shown to make effective use of the latent codes and achieve lower perplexity relative to language models. Moreover, the generated samples from hier-VAE also exhibit superior quality according to both automatic and human evaluations.
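A minimal PyTorch sketch of the kind of two-level decoder the abstract describes: a paragraph-level latent code drives a sentence-level LSTM whose states act as per-sentence plan vectors, and each plan vector conditions a word-level LSTM. The dimensions, the use of LSTMCell, and feeding the plan vector at every word step are illustrative assumptions rather than the authors' exact configuration.

```python
import torch
import torch.nn as nn

class HierDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid_dim=512, z_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.sent_rnn = nn.LSTMCell(z_dim, hid_dim)               # sentence-level LSTM
        self.word_rnn = nn.LSTMCell(emb_dim + hid_dim, hid_dim)   # word-level LSTM
        self.init_state = nn.Linear(z_dim, hid_dim)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, z, sentences):
        # z: (batch, z_dim); sentences: list of (batch, seq_len) token tensors
        h_s = torch.tanh(self.init_state(z))
        c_s = torch.zeros_like(h_s)
        logits_per_sentence = []
        for tokens in sentences:
            h_s, c_s = self.sent_rnn(z, (h_s, c_s))   # z is re-fed at every sentence step
            plan = h_s                                # per-sentence "plan" vector
            h_w, c_w = torch.zeros_like(h_s), torch.zeros_like(h_s)
            step_logits = []
            for t in range(tokens.size(1)):
                inp = torch.cat([self.embed(tokens[:, t]), plan], dim=-1)
                h_w, c_w = self.word_rnn(inp, (h_w, c_w))
                step_logits.append(self.out(h_w))
            logits_per_sentence.append(torch.stack(step_logits, dim=1))
        return logits_per_sentence
```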
[ "Natural Language Processing", "Text Generation", "Variational Autoencoders" ]
https://openreview.net/pdf?id=Hk41X2AqtQ
https://openreview.net/forum?id=Hk41X2AqtQ
ICLR.cc/2019/Conference
2019
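As a point of reference for the KL terms discussed in the reviews below, here is a hedged sketch of a two-level objective consistent with the abstract's "hierarchy of stochastic layers": a standard Gaussian prior on the top latent and a learned Gaussian prior p(z_1|z_2) on the bottom latent. The posterior factorization q(z_2|x) q(z_1|x, z_2) is an assumption made for illustration; the paper's exact choice is not reproduced here.

```python
# Hedged sketch of a two-level ELBO with a learned bottom-level prior p(z1|z2),
# all distributions diagonal Gaussians. The posterior factorization is assumed.
import torch

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    # KL( N(mu_q, exp(logvar_q)) || N(mu_p, exp(logvar_p)) ), summed over the last dim
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    return 0.5 * torch.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0, dim=-1)

def two_level_elbo(log_px_given_z1, q1, q2, p1):
    # q2: (mu, logvar) of q(z2|x); q1: (mu, logvar) of q(z1|x, z2) at a sampled z2;
    # p1: (mu, logvar) of the learned prior p(z1|z2) evaluated at the same z2.
    zeros = torch.zeros_like(q2[0])
    kl_top = gaussian_kl(q2[0], q2[1], zeros, zeros)     # p(z2) = N(0, I)
    kl_bottom = gaussian_kl(q1[0], q1[1], p1[0], p1[1])
    return log_px_given_z1 - kl_top - kl_bottom          # maximize this lower bound
```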
{ "note_id": [ "SkeDko7xlE", "HkeE8jS_RX", "HJxrbUCURX", "rJxbKjRrCm", "B1goPoCSCX", "BJxL95kZAQ", "SJxiPwaapm", "BJgPXD66am", "SklM0La66m", "HJlvgTYkpm", "H1lZieB2n7", "HyehKP0uhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544727262629, 1543162700442, 1543067132672, 1543003000590, 1543002979426, 1542679181836, 1542473571284, 1542473502986, 1542473417811, 1541541103513, 1541324952654, 1541101443533 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1322/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1322/Authors" ], [ "ICLR.cc/2019/Conference/Paper1322/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1322/Authors" ], [ "ICLR.cc/2019/Conference/Paper1322/Authors" ], [ "ICLR.cc/2019/Conference/Paper1322/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1322/Authors" ], [ "ICLR.cc/2019/Conference/Paper1322/Authors" ], [ "ICLR.cc/2019/Conference/Paper1322/Authors" ], [ "ICLR.cc/2019/Conference/Paper1322/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1322/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1322/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"Strengths: Interesting work on using latent variables for generating long text sequences.\\nThe paper shows convincing results on perplexity, N-gram based and human qualitative evaluation.\", \"weaknesses\": \"More extensive comparisons with hierarchical VAEs and the approach in Serban et. al in terms of language generation quality and perplexity would have been helpful. Another point of reference for which additional comparisons were desired was: \\\"A Hierarchical Latent Structure for Variational Conversation Modeling\\\" by Park et al. Some additional substantive experiments were added during the discussion period.\", \"contention\": \"Authors differentiated their work from Park et al. and the reviewer bringing this work up ended up upgrading their score to a 7. The other reviewers kept their scores at 5.\", \"consensus\": \"The positive reviewer raised their score to a 7 through the author rebuttal and discussion period. One negative reviewer was not responsive, but the other reviewer giving a 5 asserts that they maintain their position. The AC recommends rejection. Situating this work with respect to other prior work and properly comparing with it seems to be the contentious issue. Authors are encouraged to revise and re-submit elsewhere.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting latent variable model, borderline paper due to experimental execution and novelty\"}", "{\"title\": \"Re: Re: \\u201cAttribute vector arithmetic\\u201d results & comparison with related work\", \"comment\": \"Thanks for your consideration of our efforts to improve the manuscript. We agree that this claim (i.e., \\\"the general contents of the original sentences are mostly retained\\\") is not rigorous enough. 
In this regard, we have refined this part and updated a revised manuscript.\"}", "{\"title\": \"Re: \\u201cAttribute vector arithmetic\\u201d results & comparison with related work\", \"comment\": \"Thanks for doing the attribute vector experiments and expanding your discussion of VHCR.\\n\\n> As shown above, the original sentences have been successfully manipulated to positive sentiment with the simple attribute vector operation. Notably, the general contents of the original sentences are mostly retained, even though their sentiments are altered.\\n\\nTo be frank, I totally disagree. Most of the sequences change both in content and in sentiment. For example, in the first example the original talks about being vegetarian/vegan friendly, chips and salsa, and sangria; the transferred about pork belly and salad. The only real instance of keeping content similar is \\\"my boyfriend and i\\\" appearing in the second-to-last example, but otherwise the content is totally different - the original is talking about buying game cards; the transferred is talking about a restaurant. So I don't think you can claim your model has decoupled style and content in any meaningful way. That being said, I think it's important to include this kind of result in your paper, mostly to motivate future work.\\n\\nI appreciate the work you've done to improve the paper. I've raised my score to a 7.\"}", "{\"title\": \"Continue due to the character limit\", \"comment\": \"2) We agree with the reviewer that a clear comparison with the VHCR model in Park et al. is important. In the revised manuscript, we particularly discuss the differences between our method and the VHCR model in the \\u201cHierarchical structures in NLP\\u201d part of section 2. Specifically, our approach is unique from the following perspectives: (i) both latent variables in our hier-VAE are designed to contain global information. More importantly, although the local/utterance variables are generated from the global latent variable in VHCR, the priors for the two sets of latent variables are both fixed as standard diagonal-covariance Gaussians. In contrast, the prior of the bottom-level latent variable in our model is learned from the data (thus more flexible relative to a fixed prior), which exhibits promising results in terms of mitigating the \\u201cposterior collapse\\u201d issue (see Table 2); (ii) in hier-VAE, the underlying data distribution of the entire paragraph is captured in the bottom-level latent variable. While in the setup of VHCR, the responses are modeled/generated conditioned on both the latent variables and the contexts. Therefore, the (global) latent variable learned by our model should contain more information.\"}", "{\"title\": \"\\u201cAttribute vector arithmetic\\u201d results & comparison with related work\", \"comment\": \"Thanks for your detailed response. According to your valuable feedback, we have made two further revisions on our manuscript:\\n\\n1) The \\u2018\\u2019attribute vector arithmetic\\u2019\\u2019 experiment: on the Yelp Review dataset, we first obtain the sampled latent codes for all reviews with positive sentiment (among the entire training set), and calculate the corresponding mean latent vector. The mean latent vector for all negative reviews are computed in the same manner. The two vectors are then subtracted (i.e., positive mean vector minus negative mean vector) to obtain the \\u2018\\u2019sentiment attribute vector\\u2019\\u2019. 
For evaluation, we randomly sample 1000 reviews with negative sentiment and add the \\u2018\\u2019sentiment attribute vector\\u2019\\u2019 to their latent codes. The manipulated latent vectors are then fed to the hierarchical decoder to produce the transferred sentences (which should hopefully convey positive sentiment). Below are some transferred results:\\n\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\", \"original\": \"a friend recommended this to me , and i can t figure out why , the food was underwhelming and pricey , the service was fine, and the place looked nice .\", \"transferred\": \"a friend of mine recommended this place , and i was so glad that i did try it , the service was great , and the food was delicious .\\n>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\\n\\nAs shown above, the original sentences have been successfully manipulated to positive sentiment with the simple attribute vector operation. Notably, the general contents of the original sentences are mostly retained, even though their sentiments are altered. We further employed a CNN sentiment classifier to evaluate the sentiment of manipulated sentences. The classifier is trained on the entire training set and achieves a test accuracy of 94.2%. With this pre-trained classifier, 83.4% of the transferred reviews are judged to be positive-sentiment, indicating that \\u2018\\u2019attribute vector arithmetic\\u2019\\u2019 method consistently produces the intended manipulation of sentiment. We have included the results and discussion in our revised manuscript.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for your detailed consideration of my comments. While I agree that a big difference between your work and that of Park et al. is their use of \\\"utterance drop\\\", I'm not sure I agree that the hierarchical structure of the two models is significantly different. You wrote\\n\\n> Specifically, we propose to improve the flexibility and expressiveness of the prior distribution, by leveraging a hierarchy of latent variables. This setup endows the encoder/inference networks with stronger ability to extract high-level (global) features of a paragraph. Notably, the model proposed in [1] shares the same high-level idea of making the inference networks more expressive to resolve the \\u2018\\u2019posterior collapse\\u2019\\u2019 problem.\\n\\nIt appears they also do this - see equations (24) and (25) in Park et al., where the utterance variables are treated as stochastic and are dependent on the global conversation variable. If there is indeed an important different in the structure of your latent variables, you need to make that absolutely clear in your updated draft. Otherwise, it seems that the only difference is their use of \\\"utterance drop\\\" and the difference in applications, neither of which are terribly significant. Looking at your updated draft, it appears you have only mentioned Park et al. in pointing out their use of utterance drop. 
I think you need to flesh out this comparison.\\n\\nWith the revisions you made, experiments with \\\"attribute vector arithmetic\\\", and some additional words comparing your work to Park et al., I would be happy to raise my score.\"}", "{\"title\": \"Thanks for the valuable, critical feedback\", \"comment\": \"We would like to thank the reviewer for the detailed comments and suggestions for the manuscript.\\n\\n> The paper is well written though some parts are confusing. For example, equation 4 refers to q as the prior distribution but this seems like it's the posterior distribution as it is described just below equation 5. p(z_1|z_2) is also not well defined. It would be clearer to specify the full algorithm in the paper. \\n\\nThanks for pointing out this confusing typo. In the revised version, we have corrected \\u2018prior\\u2019 to \\u2018posterior\\u2019 while describing equation 4. As to the specific configuration of p(z_1|z_2), it is actually described in the supplementary materials (due to the space limit). To avoid any confusions here, we added a 'Model specification\\u2019 section, at the beginning of the 'Experiments' part in the revised version, to illustrate these details and hyperparameter choices.\\n\\n> The work also mentions that words are generated for each sentence until the _END token is generated. Is this token always generated? What happens to a sentence if that token is not generated?\\n\\nThe reviewer brings up a good point about the end_token generation. In our training data, all sentences have an _END token (at the end of each sentence), and the word-level LSTM is able to learn that there will be an _END token after each sentence and generate that token. We should also note that, we constrained the model to generate a maximum number of words for each sentence. We use maximumly 20 words for the Yelp reviews dataset and 25 for the arXiv dataset. This part has been made more clear in the revised version. \\n\\n> It's also unclear how this work is novel with regards to the works below.\\n\\nWe agree with the reviewer that there are similarities between our model and the VHRED model proposed by Serban et. al., however our model is quite different in terms of the architecture and motivation. Firstly, the VHRED model infers a latent variable for each context/utterance, and the context is provided to the decoder while generating each response. In contrast, our model leverages a global latent variable to model the entire paragraph without any additional inputs (no contexts are provided). As a result, the latent variables in our model are designed to abstract globally meaningful features from the entire paragraph. \\n\\nSecondly, we have leveraged a hierarchy of latent variables to mitigate the \\u2018posterior collapse\\u2019 issue, which has given rise to promising empirical results. Specifically, the generative network has made better use of the latent variable information, indicated by a larger KL term (please refer to our response to Reviewer 2 for additional information). This strategy has not been employed and discussed in the \\u2018Hierarchical Variational Autoencoders for Music\\u2019 paper either.\\n\\nTo further prove that our model can extract globally informative features, we conducted an additional experiment, in the revised paper, to visualize the learned latent variable. Specifically, from the arXiv dataset, we select the most frequent four classes/topics and re-train our hier-VAE-D model on the corresponding abstracts. 
We plot the bottom-level latent variable for our hier-VAE-D with t-SNE (which is supposed to contain the global features). The result is shown in Figure 2 of the revised manuscript. It can be observed that the latent codes of paragraphs from the same topic are indeed grouped together in the embedding space, indicating that the latent variables has encoded high-level features from the input paragraph. \\n\\nAs to the factorized hierarchical VAE paper, although they utilize the hierarchical nature of sequential data, their VAE model only takes a sub-sequence (segment) of the entire sequence as the input (as illustrated in Figure 3 of the factorized hierarchical VAE paper). Therefore, their model can only be used to modify the style/attribute of a sub-sequence (short sentence), rather than generate/sample an entire sequence (long-form document).\\n\\n> Minor comments\\n\\nThanks for pointing the typos out. We have corrected them accordingly in the revised paper.\"}", "{\"title\": \"Continue due to the character limit\", \"comment\": \"> Finally, I know that space is tight, but other papers on global-latent-variable models tend to include more demonstrations that the global variable is capturing meaningful information, e.g. with attribute vector arithmetic.\\n\\nWe share the same intuition with the reviewer that exploring what information has been meaningfully abstracted in the (global) latent variable is valuable to understand the strengths of our proposed model. We would like to mention that in our original submission, we included a section in the experiments where we reported results to measure the continuity of the learned latent space. This measure is commonly used in latent-variable models to evaluate how smooth the learned VAE latent space is. Our results demonstrated that the generated samples are syntactically and semantically reasonable. We even found that the generated sentences, along a linear trajectory, gradually transit from positive to negative sentiment (as shown in Table 4). Having said that, we followed reviewer's suggestion and are currently running additional experiments to manipulate the review sentiment via attribute vector arithmetic. We will update with the corresponding results shortly.\\n\\nFurthermore, we\\u2019ve just reported the results of our new experiment to visualize the learned global latent variable by plotting the corresponding t-SNE embeddings on the arXiv dataset (please refer to our response to Reviewer 1 for additional information).\\n\\nBelow, please also find our responses to the specific comments of the reviewer:\\n\\n[VAE prior] Thanks for pointing this out. We have revised this sentence according to your suggestion.\\n\\n[Feeding z to LSTM] We apologize for the confusion. As to feeding the latent codes z to LSTM, we follow [2] and use z to infer the initial state of LSTM and feed it to every step of LSTM as well. That said, z is employed as the input to the sentence-level LSTM (higher level decoder) at every time step. For the word-level LSTM, the plan vectors are utilized by the decoder in a similar manner. Different from the higher-level LSTM, the input to the word-level LSTM at each step is the concatenation of z and the word embedding of the previous token. We have revised the description of this part in Section 3.2 to avoid any confusion.\\n\\n[CNN encoder structure] Due to space limit, the specific structure of CNN encoder is described in the supplementary materials. 
To make it more clear, we added a ''Model specification\\u2018\\u2019 section in the revised version, in the beginning of ''Experiments\\u2018\\u2019 part, detailing the encoder structure..\\n\\n[Notation] \\\\theta is defined as the parameters of the whole generative network. The individual network that parametrize p(x|z_1) and p(z_1|z_2) are both part of the generative networks. Therefore, we use \\\\theta to denote the parameters of both distributions.\\n\\n[AAE and ARAE baselines] Yes, the setups of the two baseline methods are the same as flat-VAE, except that adversarial divergence, instead of the KL divergence, is employed to match the prior and posterior distributions. \\n \\n[1] Kim, Y., Wiseman, S., Miller, A.C., Sontag, D.A., & Rush, A.M. (2018). Semi-Amortized Variational Autoencoders. ICML.\\n[2] Bowman, S.R., Vilnis, L., Vinyals, O., Dai, A.M., J\\u00f3zefowicz, R., & Bengio, S. (2016). Generating Sentences from a Continuous Space. CoNLL.\"}", "{\"title\": \"Thanks for your kind and helpful comments\", \"comment\": \"Thanks for taking the time to provide such thorough comments and detailed feedback.\\n\\n> my main criticism is that the authors fail to compare their approach to \\\"A Hierarchical Latent Structure for Variational Conversation Modeling\\\" by Park et al.\\n\\nThanks for referring to this interesting paper. The VHCR model (as denoted in their paper) indeed bears close resemblance to our proposed method. However, there are two key differences that make our work unique: \\n\\n1) To allow the model to make more use of the latent variable, this reference proposes to employ an \\u2018utterance drop\\u2019 strategy. Their goal is to weaken the autoregressive power of hierarchical RNNs by dropping the utterance encoder vector with a certain probability. Although the KL divergence term tends to get larger with this modification, the modeling capacity of the LSTM decoder may be sacrificed. Instead, we try to resolve the same issue from a different perspective (without weakening the decoder during training). Specifically, we propose to improve the flexibility and expressiveness of the prior distribution, by leveraging a hierarchy of latent variables. This setup endows the encoder/inference networks with stronger ability to extract high-level (global) features of a paragraph. Notably, the model proposed in [1] shares the same high-level idea of making the inference networks more expressive to resolve the \\u2018\\u2019posterior collapse\\u2019\\u2019 problem.\\n\\nTo compare the effectiveness of the two different strategies (to mitigate the `posterior collapse\\u2019 issue), we experiment the \\u2018utterance drop\\u2019 (u.d) method based upon our hier-VAE-S model on the Yelp dataset (note that to allow fair comparison, we use hier-VAE-S as the baseline to evaluate the two strategies). The corresponding language modeling results are shown as below:\\n\\n\\t\\t\\t NLL KL PPL\", \"hier_vae_s\": \"160.8 3.6 46.6\\nhier-VAE-S (with u.d): 161.3 \\t 5.6 47.1\", \"hier_vae_d\": \"160.2 6.8 45.8\\n\\nAs shown above, their u.d strategy allows better usage of the latent variable (indicated by a larger KL divergence value). However, the NLL of the language model becomes even worse with the u.d method, possibly due to the weakening of the decoder during training (similar observations have also been shown in Table 2 of the VHCR paper). 
In contrast, our \\u2018hierarchical prior\\u2019 strategy yields larger KL terms as well as lower NNL value, indicating the advantage of our strategy to mitigate the \\u2018posterior collapse\\u2019 issue.\\n\\n2) The previous VHCR model considers a multi-turn dialogue generation scenario, where the dialog context is provided at each time step (in terms of the higher-level LSTM) and the model seeks to generate the corresponding response conditioned on the context. What is different in our hier-VAE model is that, we are interested in generating long and coherent units merely conditioned on the latent variable (no additional context information is provided to the decoder). As a result, the underlying data distribution of the entire paragraph is captured in the bottom-level latent variable. While in their setup, the responses are modeled/generated conditioned on both the latent variables and the contexts. In this sense, the problem we are trying to tackle is relatively more challenging. \\n\\n> More generally, it would have been nice to see more ablation experiments (e.g. convolutional vs. LSTM encoder)\\n\\nAt the initial stage of this project, we found that the hierarchical CNN encoder employed here is important for the VAE model to work well (relative to a flat CNN encoder). Based on reviewer\\u2019s suggestion, we re-ran the language modeling experiments with a flat CNN encoder and a hierarchical LSTM encoder (on the Yelp dataset). The results are shown below:\\n\\n\\t\\t\\t\\t\\t NLL KL PPL\", \"flat_cnn_encoder\": \"164.6 2.3 50.2\", \"hierarchical_lstm_encoder\": \"161.3\\t 5.7 46.9\", \"hierarchical_cnn_encoder\": \"160.2 6.8 45.8\\n\\nIt can be observed that the model with a flat CNN encoder yields worst (largest) perplexity, suggesting that it is beneficial to make the encoder hierarchical. Additionally, hierarchical CNN encoder exhibits slightly better results than hierarchical LSTM encoder according to our experiments.\"}", "{\"title\": \"Nice experiments, but limited novelty\", \"review\": \"This paper proposed a hierarchical generative model for generating long text. The authors use a hierarchical LSTM decoder to first generate sentence-level representations; then based on the representation of each sentence, a word-level LSTM decoder is utilized to generate a sequence of words in this sentence. In addition, they use multiple layers of latent variables to address the posterior collapse issue.\\nThe paper studies an important problem and the authors performed extensive experiments.\\n\\n\\nMy major concern is about the novelty of this paper.\\nHierarchical LSTM for generating long txt has been widely studied. For example, in the following works:\\nLi, Jiwei, Minh-Thang Luong, and Dan Jurafsky. \\\"A hierarchical neural autoencoder for paragraphs and documents.\\\" arXiv preprint arXiv:1506.01057 (2015).\\nHierarchical LSTM for Sign Language Translation, AAAI, 2018.\\n\\nPlacing hierarchical latent variables in VAE is also investigated before.\\nFor example, in \\nZhao, Shengjia, Jiaming Song, and Stefano Ermon. \\\"Infovae: Information maximizing variational autoencoders.\\\" arXiv preprint arXiv:1706.02262 (2017). With some adaption from image domain to text domain\\nSerban, Iulian Vlad, et al. \\\"A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues.\\\" AAAI. 
2017.\\n\\nThe author combines this two ideas together, which is incremental in terms of novelty.\\n\\n\\nIn the writing of section 3.2, the authors should clearly cite the previous works on hierarchical LSTM and acknowledge that this is not the contributions of this paper. Under the current writing, for unfamiliarized readers, it sounds like this is proposed by the authors of this paper, which is not the case.\\n\\nThe notations of this paper is confusing, which hinders its readbility.\\nFor example, in equation 5, the distribution is parameterized by theta.\\nIn equation 6, p(x|z) is also parametrized by theta.\\n\\nIn the experiments, I'd like to see a comparison with the following works.\\nI suggest the authors to compare with the following works.\\n\\nFan, Angela, Mike Lewis, and Yann Dauphin. \\\"Hierarchical Neural Story Generation.\\\" ACL (2018).\\n\\nGhosh, S., Vinyals, O., Strope, B., Roy, S., Dean, T., & Heck, L. (2016). Contextual LSTM: A Step towards Hierarchical Language Modeling.\\n\\nZhao, Shengjia, Jiaming Song, and Stefano Ermon. \\\"Infovae: Information maximizing variational autoencoders.\\\" arXiv preprint arXiv:1706.02262 (2017). With some adaption from image domain to text domain\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Nice samples but lack of comparison to existing hierarchical VAEs\", \"review\": \"This paper proposes using a hierarchical VAE to text generation to solve the two problems of long text generation and mode collapse where diversity in generated text is lost.\\n\\nThe paper does this by decoding the latent variable into sentence level latent codes that are then decoded into each sentence. The paper shows convincing results on perplexity, N-gram based and human qualitative evaluation.\\n\\nThe paper is well written though some parts are confusing. For example, equation 4 refers to q as the prior distribution but this seems like it's the posterior distribution as it is described just below equation 5. p(z_1|z_2) is also not well defined. It would be clearer to specify the full algorithm in the paper.\\n\\nThe work also mentions that words are generated for each sentence until the _END token is generated. Is this token always generated? What happens to a sentence if that token is not generated?\\n\\nThe novelty of this paper is questionable given the significant amount of existing work in hierarchical VAEs. It's also unclear why a more direct comparison can't be made with Serban et. al in terms of language generation quality and perplexity. If a downstream model is only able to make use of one latent variable, can't multiple variables simply be averaged?\\n\\nIt's also unclear how this work is novel with regards to the works below.\\n\\nHierarchical Variational Autoencoders for Music\\nRoberts, et. al\\nNIPS 2017 creativity workshop\\nThis seems to have a similar hierarchical structure where there is an initial 16 step decoder that decodes the latent code for the lower level note level LSTMs to use during generation.\\n\\nUnsupervised Learning of Disentangled and Interpretable Representations from Sequential Data\\nHsu, et. 
al\\nNIPS 2017\\nThis proposes a factorized hierarchical variational autoencoder which also has a double latent variable hierarchical structure, one that is conditional on the other.\\n\\nMinor comments\\n- Typo in page 3 under Hierarchical structures in NLP: characters \\\"from\\\" a word\\n- Typo above section 4.3: hierarhical\\n\\n=== After rebuttal ===\\nThanks for the response.\\n\\nI believe that Reviewer2's criticism about the similarity to Park et. al isn't sufficiently addressed by the authors. Even if the hierarchical structure is different it's unclear whether this alternative structure is superior to Park et. al. There appears to be no evidence that the latent variables contain more global information relative to VHCR (Park et. al). These claims aren't tested and the results in the paper aren't comparable since the authors don't evaluate on the same datasets as Park et. al.\\n\\nIn general, I think the claims of a superior hierarchical structure to models such as the factorized hierarchical VAE paper needed to be tested to show evidence of a more powerful representation for hier-VAE.\\n\\nI will keep my score.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"This paper proposes a hierarchical variational autoencoder for modeling paragraphs. The model creates two levels of latent variables; one level of sentence-level latent variables and another single global latent. This avoids posterior collapse issues and the authors show convincing results on a few different applications to two datasets.\\n\\nOverall, it is an impressive result to be able to convincingly model paragraphs with a useful global latent variable. Apart from some issues with confusing/incomplete notation (see below), my main criticism is that the authors fail to compare their approach to \\\"A Hierarchical Latent Structure for Variational Conversation Modeling\\\" by Park et al. As far as I can tell, the approaches are extremely similar, except that Park et al. may not learn the prior parameters and also use a hierarchical RNN encoder rather than a CNN (which may be irrelevant). They also are primarily interested in dialog generation, so the lower-level of their hierarchy models utterances in a conversation rather than sentences in general, but I don't see this as a major difference. I'd encourage the authors to compare to this and potentially use it as a baseline. More generally, it would have been nice to see more ablation experiments (e.g. convolutional vs. LSTM encoder). Finally, I know that space is tight, but other papers on global-latent-variable models tend to include more demonstrations that teh global variable is capturing meaningful information, e.g. with attribute vector arithmetic. The authors could include results of manipulating review sentiment via attribute vector arithmetic, for example.\", \"specific_comments\": [\"\\\"The Kullback-Leibler (KL) divergence term ... which can be written in closed-form (Kingma & Welling, 2013), encourages the approximate posterior distribution q\\u03c6(z|x) to be close to the multivariate Gaussian prior p(z).\\\" The prior is not always taken to be a multivariate Gaussian. You should add a sentence stating that the VAE prior is often taken to be a diagonal-covariance Gaussian for convenience.\", \"3.2 has a few things which are unclear. 
In the second paragraph, you define z as the sampled latent code which is fed through an MLP \\\"to obtain the starting state of the sentence-level LSTM decoder\\\". But then LSTM^{sent} appears to be fed z at every timestep. LSTM^{sent} is also not defined - am I to assume that its arguments are the previous state and current input, so that z is the input at every timestep? Also, you write \\\"where h^s_0 is a vector of zeros\\\" which makes it sound like the starting state of the sentence-level LSTM decoder is a vector of zeros, not the output of the MLP which takes z as input. In contrast, LSTM^{word} takes three arguments as input. Which are the \\\"state\\\" and which are the \\\"input\\\" to the LSTM?\", \"I don't see any description of your CNN encoder (only the LSTM decoder in section 3.2, 3.3 only covers the hierarchy of latent variables, not the CNN architecture). What is its structure? Figure 1 shows a CNN encoder generating lower-level sentence embeddings and a high-level global embedding. How are those computed? It is briefly mentioned in 4.1 under \\\"Datasets\\\" but this seems insufficient.\", \"p_\\\\theta(x | z) is defined as the generating distribution, but also as a joint distribution of z_1 and z_2. Unless I am missing something I think you are overloading the notation for p_\\\\theta.\", \"I don't think enough information is given about the AAE and ARAE baselines. Are they the same as the flat-VAE, except with the KL term replaced by the an adversarial divergence between the prior and approximate posterior?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
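The "attribute vector arithmetic" experiment described in the author responses above (mean latent code of positive reviews minus mean latent code of negative reviews, added to the latent codes of held-out negative reviews before decoding) reduces to a few lines. The encode and decode callables and the scale knob are assumed stand-ins for the model's actual interfaces, not names from the paper.

```python
# Sketch of the attribute-vector manipulation: the sentiment vector is the
# difference of mean latent codes of positive and negative training reviews,
# and is added to the latent code of a negative review before decoding.
import numpy as np

def sentiment_attribute_vector(encode, pos_reviews, neg_reviews):
    z_pos = np.stack([encode(r) for r in pos_reviews])   # (N_pos, z_dim)
    z_neg = np.stack([encode(r) for r in neg_reviews])   # (N_neg, z_dim)
    return z_pos.mean(axis=0) - z_neg.mean(axis=0)       # positive mean minus negative mean

def transfer_to_positive(encode, decode, attr_vec, review, scale=1.0):
    z = encode(review)                                   # latent code of a negative review
    return decode(z + scale * attr_vec)                  # decode the shifted latent code
```

Evaluating the transfer with a separately trained sentiment classifier, as the authors describe, then amounts to classifying the decoded outputs and reporting the fraction judged positive.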
HkzyX3CcFQ
Contextual Recurrent Convolutional Model for Robust Visual Learning
[ "Siming Yan*", "Bowen Xiao*", "Yimeng Zhang", "Tai Sing Lee" ]
Feedforward convolutional neural networks have achieved great success in many computer vision tasks. While they validly imitate the hierarchical structure of the biological visual system, they still lack one essential architectural feature: contextual recurrent connections with feedback, which are widespread in biological vision. In this work, we designed a Contextual Recurrent Convolutional Network with this feature embedded in a standard CNN structure. We found that such feedback connections enable lower layers to ``rethink" their representations given top-down contextual information. We carefully studied the components of this network and showed its robustness and superiority over feedforward baselines on tasks such as noisy image classification, partially occluded object recognition, and fine-grained image classification. We believe this work is an important step toward bridging the gap between computer vision models and the real biological visual system.
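A minimal PyTorch sketch of the adjacent-stage feedback loop with gated contextual modulation that the abstract describes: the higher stage's output is upsampled and combined with the lower stage's features through a learned multiplicative gate, and the two stages are re-run for a fixed number of unroll steps. Layer sizes, the sigmoid gate form, and the unroll count of 2 are assumptions for illustration, not the paper's exact circuit.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextualLoop(nn.Module):
    def __init__(self, c1=64, c2=128, unroll=2):
        super().__init__()
        self.unroll = unroll
        self.stage1 = nn.Sequential(nn.Conv2d(3, c1, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.stage2 = nn.Sequential(nn.Conv2d(c1, c2, 3, padding=1), nn.ReLU())
        # gate computed from concatenated bottom-up and top-down context
        self.gate = nn.Conv2d(c1 + c2, c1, 3, padding=1)

    def forward(self, x):
        f1 = self.stage1(x)
        f2 = self.stage2(self.pool(f1))
        for _ in range(self.unroll):
            top_down = F.interpolate(f2, size=f1.shape[-2:], mode="nearest")
            g = torch.sigmoid(self.gate(torch.cat([f1, top_down], dim=1)))
            f1 = self.stage1(x) * g          # lower stage "rethinks" under contextual gating
            f2 = self.stage2(self.pool(f1))
        return f2
```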
[ "contextual modulation", "recurrent convolutional network", "robust visual learning" ]
https://openreview.net/pdf?id=HkzyX3CcFQ
https://openreview.net/forum?id=HkzyX3CcFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SkgzCLNiyV", "H1ewdKtsR7", "HkgMSYFsCQ", "BkgWrpdiR7", "H1gYpXvjR7", "BJeQtfvjAX", "B1lRdulhnQ", "HygOWC_937", "rye4DnoY27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544402634184, 1543375215319, 1543375162166, 1543372088921, 1543365568529, 1543365243292, 1541306486211, 1541209599757, 1541155931831 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1320/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1320/Authors" ], [ "ICLR.cc/2019/Conference/Paper1320/Authors" ], [ "ICLR.cc/2019/Conference/Paper1320/Authors" ], [ "ICLR.cc/2019/Conference/Paper1320/Authors" ], [ "ICLR.cc/2019/Conference/Paper1320/Authors" ], [ "ICLR.cc/2019/Conference/Paper1320/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1320/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1320/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper explores the addition of feedback connections to popular CNN architectures. All three reviewers suggest rejecting the paper, pointing to limited novelty with respect to other recent publications, and unconvincing experiments. The AC agrees with the reviewers.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"metareview: limited novelty, unconvincing experiments\"}", "{\"title\": \"Point-by-point address to reviewer's concerns [part 2]\", \"comment\": \"6. Then robustness to noise and adversarial attacks tested on ImageNet and with a modification of the architecture. According to the caption of Fig. 4, this is done with 5 timesteps this time!\\n\\nWe apologize that we actually used 2 unroll times model for ImageNet classification instead of 5 unroll times. We have corrected this mistake. \\n\\n7. Accuracy on ImageNet needs to be reported ** especially ** if classification accuracy is not improved (as I expect). \\n\\nWe have reported the top-1 accuracy of ImageNet classification in Table 7 for your reference.\\n\\n8. Then experiments on fine-grained with ResNet-34! What architecture is this? Is this yet another number of loops and feedback iterations? When reporting that \\\"Our model can get a top-1 error of 25.1, while that of the ResNet-34 model is 26.5.\\u201d Please provide published accuracy for the baseline algorithm. \\n\\nWe mentioned in the paper that \\u201cNotice that our proposed models are based on VGG16 with 2 recurrent connection (loop1+loop2 in Figure 2) in all the tasks.\\u201d\\nAnd we have systematically compared our model against three baseline models (VGG, VGG-ATT, VGG-LR) on fine-grained image classification dataset on Table 2.\\n\\n9. For the experiment on occlusions, the authors report using \\u201ca multi-recurrent model which is similar to the model mentioned in the Imagenet task\\u201d. Sorry but this is not good enough. \\n\\nNow, we used the same version of our model in all tasks.\\nWe have systematically compared our model against three baseline models (VGG, VGG-ATT, VGG-LR) on occlusion dataset on Table 3. \\n\\n10. Table 4 has literally no explanation. What is FF? What are unroll times? As a side note, VGG-GAP does not seem to be defined anywhere. \\n\\nWe apologize for not explaining everything clearly. FF here means feed-forward which is equal to VGG16 feed-forward model. 
\\n\\n11.When stating \\\"We investigated VGG16 (Simonyan & Zisserman, 2014), a standard CNN that closely approximate the ventral visual hierarchical stream, and its recurrent variants for comparison.\\u201d, the authors probably meant \\u201ccoarsely\\u201d not \\u201cclosely\\\".\\n\\nWe agree with you and have changed it to \\u201ccoarsely\\u201d. But VGG is probably the best among the various deep networks to the ventral stream, in terms of the gradual growth of the receptive field size, the number of stages. A number of studies showed that V1 neurons in monkeys are closest to conv2.3 in some datasets (Kohn and Schwartz dataset) and conv3.1 of VGG (Tolias dataset, and Tang dataset).\"}", "{\"title\": \"Point-by-point address to reviewer's concerns [part 1]\", \"comment\": \"Thank you for the valuable feedback and comments. Below we address your comments point by point.\\n\\n1. Architectures and hyper-parameters are casually changed from experiment to experiment (please refer to Do CIFAR-10 Classifiers Generalize to CIFAR-10? By Recht et al 2018 to understand why this is a serious problem.) \\n\\nTo address your concern, we now choose one model VGG models with loop1+loop2 (Figure 2) and test it against the other models in all the five tasks.\\n \\n2. This study is not the first to look into recurrent / feedback processes. Indeed some (but not all) prior work is cited in the introduction. Some of these should be used as baselines as opposed to just feedforward networks. \\n\\nYes, now we implemented Li et al\\u2019s VGG-LR (learning to rethink), and Jetley et al\\u2019s VGG-ATT (attention), and test them in addition to our model and VGG in the different tasks. (see general response). We demonstrated in all these tasks, our model provides the best performance.\\n\\n3.Overall the improvements are relatively modest (e.g., see Fig. 4 right panel where the improvements are a fraction of a % or left panel with a couple % fooling rates improvements) \\n\\nWe have corrected this mistake.\\nGenerally, in all tasks, our model outperformed the best of the other models by one or two percentage point. But in fine-grained recognition and noisy image recognition, our performance improvement reached 8% and 25% more than the best among the other models.\\n\\n4. The experiments are all over the place. What is the SOA on CIFAR-10 and CIFAR-100? If different from VGG please provide a strong rationale for testing the circuit on VGG and not SOA. In general, the experimental validation would be much stronger if consistent improvements were shown across architectures.\\n\\nWe now limit the comparison between one version of our model (the optimal one) against VGG and the other two VGG models with loops VGG-ATT and VGG-LR. VGG is used because it resembled the primate visual system the most in many aspects \\u2013 gradual increase in receptive fields, convolution, and pooling. We consider each VGG stage, which includes multiple convolution layers followed by a pooling layer, to be equivalent to one visual area, hence, although VGG16 has 16 layers, it roughly has 6-7 stages, approximating the primate hierarchical visual system. \\n\\n5. Accuracy is reported for CIFAR-10 and CIFAR-100 for 1 and 2 feedback iterations and presumably with the architecture shown in Fig. 1. \\n\\nYes. 
The architecture used for CIFAR-10 and CIFAR-100 is VGG16 model with loop1+loop2, which is shown in Fig 1, Fig 2 and Fig 3.\"}", "{\"title\": \"Point-by-point address to reviewer's concerns\", \"comment\": \"Thank you for the valuable feedback and comments. Below we address your comments point by point.\\n\\n1. -I think there are lots of related literature that shares a similar motivation to the current work. Just list a few that I know of: \\nRonneberger, Olaf, Philipp Fischer, and Thomas Brox. \\\"U-net: Convolutional networks for biomedical image segmentation.\\\" International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015. \\nLin, Tsung-Yi, et al. \\\"Feature Pyramid Networks for Object Detection.\\\" CVPR. Vol. 1. No. 2. 2017. \\nNewell, Alejandro, Kaiyu Yang, and Jia Deng. \\\"Stacked hourglass networks for human pose estimation.\\\" European Conference on Computer Vision. Springer, Cham, 2016. \\nYu, Fisher, et al. \\\"Deep layer aggregation.\\\" arXiv preprint arXiv:1707.06484 (2017). The current work is very similar to such works, in the sense that it tries to combine the higher-level features with the lower-level features. Compared to such works, it lacks both novelty and insights about what works and why it works.\\n\\n**Novelty**: Yes. Indeed, it has been long recognized that deep neural network is missing a key feature in cortical processing and there are a variety of feedback mechanisms used in the literature (1) concatenation of feedforward and feedback responses (e.g in U-net and others listed by reviewer 2); (2) unfolding feedback into a feedforward networks (U-net, autoencoder); (3) gating, or dot product (multiplication) as in Attentional network which use high-level semantic information to gate object segmentation and localization at lower layer; (4) recurrent with gating as in LSTM and GRU, for keeping and using internal state and memory to gate sequence processing; (5) scaling, addition and subtraction typical in neuroscience. (6) Gated Boltzmann machine and transformer network. (7) capsule network. What we investigated, however, are neuroscience motivated questions: Why loops are predominantly between adjacent visual areas? What kind of contextual information would be useful and for what tasks? What is contextual modulation? These questions have not been answered by those technology-driven innovations. \\nThe gating circuit we proposed are also neurally motivated, and designed to answer neuroscience question. For that reasons, we might also be the first to introduce loops between convolution layers and successfully train on real complicated image dataset(ImageNet) at the same time. \\n\\n**Insights**: Our work did provide a number of insights discussed in our general comments. We show loops are good, tight loops are better, more loops are better, top-down gating is important, context information should include both top-down and bottom information, and feedback improves feedforward connections. \\n\\n2. - The performance gain is pretty marginal, especially given that the proposed network has an iterative nature and can incur a lot of FLOPs. It would be great to show the FLOPs when comparing models to previous works. \\n\\nThe performance improvement is a means rather than an end to our research. We use it to gauge the importance of certain designs, to assess the importance of feedback in a certain task. So we are not particularly concerning with the number of FLOPs. 
Our primary motivation is for understanding the computational logic of brain structures, developing a new deep learning module for technology is secondary. We are afraid that we have not made these objectives clear and the paper was evaluated as a technology paper rather than a brain science paper. \\n\\nIn the earlier submission, we mainly provided results on the noisy image object recognition, though we mentioned results in adversary attacks and occlusion and fine-grained recognition, and we only compared our model against VGG. Now, we have implemented the other VGG-with-loops models such as VGG-ATT and VGG-LR and tested them as well. These models have similar FLOP to our model. We demonstrated our model is better.\\n\\n3.- It is interesting observation that the recurrent network has a better tolerance to noise and adversarial attacks, but I am not convinced giving the sparse set of experiments done in the paper. Overall I think the current work lacks novelty, significance and solid experiments to be accepted to ICLR.\\n\\nBased on your review that our performance gain is limited and the experiments are sparse, to address your concerns, we systematically compared our model against other baseline models in five different tasks (recognition in Cifar10 & 100 datasets, fine-grained image classification dataset CUB-200, occlusion dataset, and ImageNet with different level of noise, adversarial attack) and provided five additional tables (Table 1,2,3,7,8) and a figure(Figure 4) for systematic comparison. We systematically compared feedforward VGG, VGG-ATT and VGG-LR, and we demonstrated in all these tasks, our model provided the best performance, thus demonstrating the computational advantages of some of the neural constraints and design.\"}", "{\"title\": \"Point-by-point address to reviewer's concerns\", \"comment\": \"Thank you for the valuable feedback and comments. Below we address your comments point by point.\\n\\n1.A similar idea has been explored by (Li, et al. 2018). Compared with that work, the novelty of this work is weaken and seems limited. The difference from Li is not very clear. The authors need to give more discussion.\", \"novelty\": \"Our proposed circuit has two novel neurally inspired novel recurrent circuit design: tight loops between adjacent visual areas, and contextual modulation.\\nLi et al. proposed a recurrent module bringing semantic information from the FC layer to help a lower layer perform object detection and segmentation. However, in the visual cortex, most of the loops are between adjacent areas, between V1 and V2, between V2 and V4. Such **tight loop** has not been investigated earlier, hence the problem is **novel**. Besides we also investigated different design of contextual modulation. (see general comments above).\\n\\n2.Furthermore, experimental comparison with Li et al. 2018 is also necessary. \\n\\nWe implemented Li\\u2019s VGG-LR (Li, et al. 2018) for comparison in different tasks. Both models were unrolled 2 times during the training. In ImageNet classification task, our work shows a slightly improvement from 71.55% to 71.632% on top1 accuracy(Table 7). Our model showed much stronger robustness against noise corruption in recognition tasks (Figure 4). In addition, our model also outperformed VGG-LR (as well as VGG-ATT) in CIFAR 10 & 100 datasets recognition, fine-grained image classification(CUB-200) dataset as well as recognition under occlusion (Alan Yuille\\u2019s dataset (Table 1, 2, 3). \\n\\n3. The performance gain is limited. 
The authors mainly evaluate their method for noisy image classification. Such application is very narrow and deviates a bit from realistic scenarios. \\n\\nWe now tested our model against the other models (VGG, VGG-ATT, VGG-LR) in 5 different tasks and in different datasets. We have added five additional tables and a figure (one for each task) to document the model\\u2019s performance. We found our design outperformed in every task, with 25% improvement relative to the other baseline model for noisy image recognition, and 8% improvement relative to others in fine-grained recognition. Whether the gain is small or large is in the eye of the beholders, but our work demonstrated some computational benefits of recurrent connections, particularly for some relevant tasks.\"}", "{\"title\": \"General Responses to the reviewers and the program committee\", \"comment\": \"We thank reviewers for their critique and valuable suggestions. The critiques of our work center on four issues: (1) *novelty* -- a number of deep networks inspired by recurrent feedback connections have already been developed such as U-net by unrolling feedback into feedforward networks with considerable performance increase, and there are also recent works of adding loops and local circuits (Li et al. 2018, Jetley et al. 2018, Nayebi et al. 2018) to improve performance; (2) *systematic comparison* is lacking across multiple tasks and against benchmark models, particularly VGG with loops; (3) *significance* -- improvement in performance seems marginal and unimpressive; (4) *insights* -- only show performance increase without explaining what work and why it works.\\n\\nWe agreed and disagreed but blamed ourselves for not communicating our contributions clearly. Here is our general rebuttal, followed by a point-by-point response to individual reviewers\\u2019 comments.\\n\\n*Novelty*: \\nWhile recent deep learning works have added loops to the feedforward networks (VGG-ATT (attention, Jetley), VGG-LR (learning to rethink \\u2013 Li), and VGG-cortex (Nayebi)), their loops jump across many layers (many stages), typically bringing semantic information from the FC layer to help lower layers to do object detection and segmentation. In the visual cortex, most of the loops are between adjacent areas, between V1 and V2, between V2 and V4, between V4 and IT. The *computational advantages of these tight loops have never been demonstrated*. Hence, our investigation of *tight loops is novel*. Note, we consider each visual area corresponds to a set of convolution layers ending with a pooling layer, thus loops between adjacent visual areas are modeled by loops between the last convolution layer before adjacent pooling layer. Another novel contribution is our systematic exploration of various *mechanisms of gated contextual modulation* in each loop to see what top-down and bottom-up (horizontal) contextual information are most useful, and whether gating is critical. Our module design involving top-down signals and *contextual modulation* signals (from the current layer and higher layer) with multiplicative interaction is novel. Even though concatenation, convolution and multiplication are standard operations, how to put them together is important. \\n\\n*Systematic evaluation*:\\nIt is true that the evaluation in our original submission is a bit haphazard. Now, we reorganized our presentation and added additional comparison tests against VGG models with loops according to the reviewers\\u2019 advice. 
We implemented VGG-ATT and VGG-LR because codes are not available, but checked that our implementation can reproduce their original results. Overall, we tested these two benchmark VGG-loop models and many versions of our VGG loop models in five tasks (ImageNet recognition (Table 7), CIFAR10 and CIFAR100 recognition (Table 1), fine-grained recognition (Table 2), Noisy ImageNet recognition (Table 8 & Figure 4), adversarial attack (Figure 4), recognition under occlusion (Table 3)). In all tasks, our tight loop with top-down gating contextual modulation mechanisms have produced some improvement over VGG, VGG-ATT and VGG-LR models, but the most dramatic improvement was in fine-grained recognition (by 8%) and noisy ImageNet recognition (by 25% at high noise level). \\n\\n*Significance*:\\nWe believe our work is significant in several aspects. The usefulness of feedback in deep learning, and particularly that of tight loops between cortical areas, has been elusive, and not demonstrated. Thus, *identifying and demonstrating in what tasks* feedback and contextual modulation can be useful is important. We have demonstrated that robustness against noise and fine-grained recognition benefit significantly from feedback and contextual modulation. Scientifically, we investigated the computational benefits of contextual modulation, which involves feedback and integration of horizontal information. Figuring out what works best (top-down signal and top-down and horizontal context gating each other) provides potential insight not just for future module design but also for understanding neural mechanisms. Maybe the significance lies more in providing *insights in neuroscience* than in some drastic improvement in technology. \\n\\n*Insights*:\\nScientifically, our work provides insights into at least WHAT works, though we are still investigating WHY it works. Here is a list of interesting insights.\\n1.\\tLoops are helpful. The tighter loop is better than the wider loop in some tasks. \\n2.\\tMore loops work better. \\n3.\\tTop-down gating signal and contextual information (from top-down and bottom-up horizontal) are both important.\\n4.\\tUsing contextual modulation with gating works; using contextual modulation to scale or add or subtract does not.\\n5.\\tHaving recurrent feedback with contextual modulation fundamentally changes the feedforward representation.\"}", "{\"title\": \"an ok paper but not good enough\", \"review\": \"This paper introduces feedback connections to enhance feature learning through incorporating context information.\\n\\nA similar idea has been explored by (Li, et al. 2018). Compared with that work, the novelty of this work is weakened and seems limited. The difference from Li is not very clear. The authors need to give more discussion. Furthermore, experimental comparison with Li et al. 2018 is also necessary. \\n\\nThe performance gain is limited. The authors mainly evaluate their method for noisy image classification. Such application is very narrow and deviates a bit from realistic scenarios.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The paper proposes to add \\\"recurrent\\\" connections inside a convolution network with gating mechanism. The idea is not novel and the performance improvement is marginal.\", \"review\": \"The paper proposes to add \\\"recurrent\\\" connections inside a convolution network with gating mechanism. 
The basic idea is to have higher layers modulate the information in the lower layers in a convolution network. The way it is done is through upsampling the features from higher layers, concatenating them with lower layers and imposing a gate to control the information flow. Experiments show that the model achieves better accuracy, especially in the case of noisy inputs or adversarial attacks.\\n\\n- I think there is a lot of related literature that shares a similar motivation with the current work. Just to list a few that I know of:\\nRonneberger, Olaf, Philipp Fischer, and Thomas Brox. \\\"U-net: Convolutional networks for biomedical image segmentation.\\\" International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015.\\nLin, Tsung-Yi, et al. \\\"Feature Pyramid Networks for Object Detection.\\\" CVPR. Vol. 1. No. 2. 2017.\\nNewell, Alejandro, Kaiyu Yang, and Jia Deng. \\\"Stacked hourglass networks for human pose estimation.\\\" European Conference on Computer Vision. Springer, Cham, 2016.\\nYu, Fisher, et al. \\\"Deep layer aggregation.\\\" arXiv preprint arXiv:1707.06484 (2017).\\nThe current work is very similar to such works, in the sense that it tries to combine the higher-level features with the lower-level features. Compared to such works, it lacks both novelty and insights about what works and why it works.\\n\\n- The performance gain is pretty marginal, especially given that the proposed network has an iterative nature and can incur a lot of FLOPs. It would be great to show the FLOPs when comparing models to previous works.\\n\\n- It is an interesting observation that the recurrent network has a better tolerance to noise and adversarial attacks, but I am not convinced given the sparse set of experiments done in the paper.\\n\\nOverall, I think the current work lacks the novelty, significance and solid experiments to be accepted to ICLR.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"An experimental mess\", \"review\": \"This paper presents a novel deep learning module for recurrent processes. The general idea and motivation are generally appealing, but the experimental validation is a mess. Architectures and hyper-parameters are casually changed from experiment to experiment (please refer to Do CIFAR-10 Classifiers Generalize to CIFAR-10? by Recht et al 2018 to understand why this is a serious problem.) Some key evaluations are missing (see below). Key controls are also lacking. This study is not the first to look into recurrent / feedback processes. Indeed some (but not all) prior work is cited in the introduction. Some of these should be used as baselines as opposed to just feedforward networks. But with all that said, even addressing these concerns would not be sufficient for this paper to pass the threshold, since overall the improvements are relatively modest (e.g., see Fig. 4 right panel where the improvements are a fraction of a % or left panel with a couple % fooling rates improvements) for a module that adds significant computational cost to an architecture's runtime. As a side note, I would advise toning down some of the claims such as \\\"our network could outperform baseline feedforward networks by a large margin\\u201d...\\n\\n****\", \"additional_comments\": \"The experiments are all over the place. What is the SOA on CIFAR-10 and CIFAR-100? 
If different from VGG please provide a strong rationale for testing the circuit on VGG and not SOA. In general, the experimental validation would be much stronger if consistent improvements were shown across architectures.\\n\\nAccuracy is reported for CIFAR-10 and CIFAR-100 for 1 and 2 feedback iterations and presumably with the architecture shown in Fig. 1. Then robustness to noise and adversarial attacks tested on ImageNet and with a modification of the architecture. According to the caption of Fig. 4, this is done with 5 timesteps this time! Accuracy on ImageNet needs to be reported ** especially ** if classification accuracy is not improved (as I expect). \\n\\nThen experiments on fine-grained with ResNet-34! What architecture is this? Is this yet another number of loops and feedback iterations? When reporting that \\\"Our model can get a top-1 error of 25.1, while that of the ResNet-34 model is 26.5.\\u201d Please provide published accuracy for the baseline algorithm.\\n\\nFor the experiment on occlusions, the authors report using \\u201ca multi-recurrent model which is similar to the model mentioned in the Imagenet task\\u201d. Sorry but this is not good enough.\\n\\nTable 4 has literally no explanation. What is FF? What are unroll times?\\n\\nAs a side note, VGG-GAP does not seem to be defined anywhere.\\n\\nWhen stating \\\"We investigated VGG16 (Simonyan & Zisserman, 2014), a standard CNN that closely approximate the ventral visual hierarchical stream, and its recurrent variants for comparison.\\u201d, the authors probably meant \\u201ccoarsely\\u201d not \\u201cclosely\\\".\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
rkx1m2C5YQ
Recurrent Kalman Networks: Factorized Inference in High-Dimensional Deep Feature Spaces
[ "Philipp Becker", "Harit Pandya", "Gregor H.W. Gebhardt", "Cheng Zhao", "Gerhard Neumann" ]
In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman et al., 1960) have been integrated with deep learning models. Yet, such approaches typically rely on approximate inference techniques such as variational inference which makes learning more complex and often less scalable due to approximation errors. We propose a new deep approach to Kalman filtering which can be learned directly in an end-to-end manner using backpropagation without additional approximations. Our approach uses a high-dimensional factorized latent state representation for which the Kalman updates simplify to scalar operations and thus avoids hard to backpropagate, computationally heavy and potentially unstable matrix inversions. Moreover, we use locally linear dynamic models to efficiently propagate the latent state to the next time step. While our locally linear modelling and factorization assumptions are in general not true for the original low-dimensional state space of the system, the network finds a high-dimensional latent space where these assumptions hold to perform efficient inference. This state representation is learned jointly with the transition and noise models. The resulting network architecture, which we call Recurrent Kalman Network (RKN), can be used for any time-series data, similar to a LSTM (Hochreiter and Schmidhuber, 1997) but uses an explicit representation of uncertainty. As shown by our experiments, the RKN obtains much more accurate uncertainty estimates than an LSTM or Gated Recurrent Units (GRUs) (Cho et al., 2014) while also showing a slightly improved prediction performance and outperforms various recent generative models on an image imputation task.
[ "state estimation", "recurrent neural networks", "Kalman Filter", "deep learning" ]
https://openreview.net/pdf?id=rkx1m2C5YQ
https://openreview.net/forum?id=rkx1m2C5YQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1l-kGSWgE", "rkxkuiQNAX", "S1ln-iQ4Rm", "Byec4cQ4AX", "Syg1qfyRn7", "BklzonzohX", "Syx2wN5c37" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544798680752, 1542892391448, 1542892292325, 1542892082422, 1541431942732, 1541250202259, 1541215332219 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1319/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1319/Authors" ], [ "ICLR.cc/2019/Conference/Paper1319/Authors" ], [ "ICLR.cc/2019/Conference/Paper1319/Authors" ], [ "ICLR.cc/2019/Conference/Paper1319/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1319/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1319/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"A lot of work has appeared recently on recurrent state space models. So although this paper is in general considered favorable by the reviewers it is unclear exactly how the paper places itself in that (crowded) space. So rejection with a strong encouragement to update and resubmission is encouraged.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Borderline - but missing clarity\"}", "{\"title\": \"Paper revised based on reviewers comments\", \"comment\": \"Thank you for your time and valuable feedback that helped us to improve the paper considerably. We implemented the following improvements to the paper based on the comments of the reviewers:\\n\\n1.) We added a qualitative comparison to recent probabilistic generative model approaches that also use KF related techniques, see Table 1. This analysis shows that our approach is more general while allowing for more simple learning methods than other probabilistic SOTA methods.\\n\\n2.) We explicitly stated the update equations used by the RKN (see section 2.4) and clarified Figure 1 to better visualize the general structure.\\n\\n3.) We explicitly state the likelihood objective in section 2.5\\n\\n4.) We added a quantitative comparison to Embed to Control (Watter et. al 2015), Structured Inference Networks (Krishnan et al. 2017) and the Kalman Variational Autoencoder (Fraccaro et al. 2017) on the image imputation tasks for the pendulum and the quadlink. See Table 3 and section 3.3. Those experiments show that our approach outperforms the other models despite using a much easier optimization scheme and less information (smoothing vs. filtering).\\n\\n5.) Reedited abstract, introduction and related work to clarify the contribution of the paper in relation to recent SOTA.\\n\\nFor more specific answers to the comments, please see below:\\n\\n\\u201cIn terms of model presentation\\u2026 \\u201c - We reworked the related work section and added a table comparing our approach to a variety of recent works on a qualitative level. This comparison shows while most generative modelling approaches have been used to predict future or missing images, they can not be used directly for state estimation. Our model is more general and can be used straightforwardly for image prediction as well as for state estimation. Moreover, our model does not require the use of approximate inference methods such as variational inference but can be learned directly in an end to end manner. We believe that this is one of the main reasons why the RKN outperforms all other generative models in the imputation experiment that is now added in Section 3. 
\\n\\n\\u201cIn terms of model evaluation\\u201d - We added a quantitative comparison in Section 3 where we compare our approach to different generative models from the literature. The results show that while the RKN is using a much simpler model and/or learning method than related approaches, we significantly outperform them on the image imputation task in terms of the log-likelihood loss function. Note that we had to switch from a Gaussian to a Bernoulli distribution for creating images to make this comparison easier\"}", "{\"title\": \"Paper revised based on reviewers comments\", \"comment\": \"Thank you for your time and valuable feedback that helped us to improve the paper considerably. We implemented the following improvements to the paper based on the comments of the reviewers:\\n\\n1.) We added a qualitative comparison to recent probabilistic generative model approaches that also use KF related techniques, see Table 1. This analysis shows that our approach is more general while allowing for more simple learning methods than other probabilistic SOTA methods.\\n\\n2.) We explicitly stated the update equations used by the RKN (see section 2.4) and clarified Figure 1 to better visualize the general structure.\\n\\n3.) We explicitly state the likelihood objective in section 2.5\\n\\n4.) We added a quantitative comparison to Embed to Control (Watter et. al 2015), Structured Inference Networks (Krishnan et al. 2017) and the Kalman Variational Autoencoder (Fraccaro et al. 2017) on the image imputation tasks for the pendulum and the quadlink. See Table 3 and section 3.3. Those experiments show that our approach outperforms the other models despite using a much easier optimization scheme and less information (smoothing vs. filtering).\\n\\n5.) Reedited abstract, introduction and related work to clarify the contribution of the paper in relation to recent SOTA.\\n\\nFor more specific answers to the comments, please see below:\\n\\n\\u201cThe observation noise sigma^obs is a function of the observation itself. This seems strange\\u201d - For high dimensional observations such as images, the amount of noise can often be inferred from the observation itself. For example, if certain relevant aspects of the scenario are occluded in an image this is useful information which can be extracted from the image. \\nHence, making the variance depend on the images is quite common. It is done in the approaches using variational autoencoder we compared to (Watter et al. 2015, Karl et al. 2016, Fraccaro et al. 2017) and was also a crucial aspect of the BackpropKF (Haarnoja et al. 2016)\\n\\nIn fact, making sigma^obs dependent on the observation is crucial for our approach since it allows the encoder to express the uncertainty of its current estimate and allows the model to ignore the current observation if it is not useful. This property is needed in all our experiments. \\n\\n\\u201cI believe that a more detailed...\\u201d - We reworked the related work section, added a table comparing our approach to a variety of recent work including probabilistic generative models on a qualitative level. 
The main result of this qualitative comparison is that, while our model is more general than most of the current SOTA approaches as it can be used straightforwardly for state estimation as well as image prediction, it also offers the simplest training method (end-to-end training by backpropagation) without the need of approximate inference methods such as variational inference, that is required by most probabilistic generative models. \\nWe also added a quantitative comparison in Section 3 on the image imputation task that allows a direct comparison to the probabilistic generative models. While our approach offers a much simpler and more direct learning approach, it outperforms more complex models considerably on this task. Note that we had to switch from a Gaussian to a Bernoulli distribution for generating images to make this comparison easier.\"}", "{\"title\": \"Paper revised based on reviewers comments\", \"comment\": \"Thank you for your time and valuable feedback that helped us to improve the paper considerably. We implemented the following improvements to the paper based on the comments of the reviewers:\\n\\n1.) We added a qualitative comparison to recent probabilistic generative model approaches that also use KF related techniques, see Table 1. This analysis shows that our approach is more general while allowing for more simple learning methods than other probabilistic SOTA methods.\\n\\n2.) We explicitly stated the update equations used by the RKN (see section 2.4) and clarified Figure 1 to better visualize the general structure.\\n\\n3.) We explicitly state the likelihood objective in section 2.5\\n\\n4.) We added a quantitative comparison to Embed to Control (Watter et. al 2015), Structured Inference Networks (Krishnan et al. 2017) and the Kalman Variational Autoencoder (Fraccaro et al. 2017) on the image imputation tasks for the pendulum and the quadlink. See Table 3 and section 3.3. Those experiments show that our approach outperforms the other models despite using a much easier optimization scheme and less information (smoothing vs. filtering).\\n\\n5.) Reedited abstract, introduction and related work to clarify the contribution of the paper in relation to recent SOTA.\\n\\nFor more specific answers to the comments, please see below:\\n\\n\\u201cThe article does not provide any probability density ..\\u201d - We stated the likelihood objective more explicitly in section 2.5.\\n\\n \\u201c...and there are no connections to probabilistic generative models.\\u201d - We reworked the related work section, added a table comparing our approach to a variety of recent work including probabilistic generative models on a qualitative level. The main result of this qualitative comparison is that, while our model is more general than most of the current SOTA approaches as it can be used straightforwardly for state estimation as well as image prediction, it also offers the simplest training method (end-to-end training by backpropagation) without the need of approximate inference methods such as variational inference, that is required by most probabilistic generative models. \\nWe also added a quantitative comparison in Section 3 on the image imputation task that allows a direct comparison to the probabilistic generative models. While our approach offers a much simpler and more direct learning approach, it outperforms more complex models considerably on this task. 
Note that we had to switch from a Gaussian to a Bernoulli distribution for generating images to make this comparison easier.\\n\\n\\u201cthe Preliminaries section uses formulas before defining them\\u201d - We fixed that.\\n\\n\\u201cAlso, explicitly writing...\\u201d - We reworked Figure 1 and moved some elaboration on the equations from the appendix to the main part (specifically, section 2.4). We hope that clarifies our presentation.\"}", "{\"title\": \"Interesting model needs more context\", \"review\": \"This paper presents a particular architecture for a probabilistic recurrent neural network that is based on ideas from Kalman filtering. Whereas Kalman filters are used to infer the state of a known generative model (a linear-Gaussian dynamical system), here, the authors jointly learn a recursive filter without explicitly formulating a generative model of the data.\", \"the_paper_deals_with_an_important_problem_and_the_approach_has_many_appealing_characteristics\": \"it learns a state representation and its associated transition dynamics, it learns nonlinear filter that can be used online and it learns encoders/decoders from high-dimensional observations to the state.\\n\\nThe article does not provide any probability density (even though learning happens by maximizing a likelihood) and there are no connections to probabilistic generative models. In my opinion this is a pity since this would shed more light into the characteristics of the proposed approach. \\n\\nI believe that the model could be presented more clearly. For example, the Preliminaries section uses formulas before defining them. Also, explicitly writing the high-level chain of computations from o_t and z_{t-1}^+ to o_t^+ and s_t^+ would be extremely useful. Even more than Fig. 1, in my opinion.\\n\\nAll in all, I have found this an interesting architecture for a RNN but would have appreciated more insight into its relationships with the large body of generative probabilistic state-space models and the methods to perform inference on them.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting idea, but insufficient comparison to existing work\", \"review\": \"PAPER SUMMARY\\n-------------\\nThis paper proposes a method for inferring the latent state and making predictions based on a sequence of observations. The idea is to map the observation to a latent space where the relation to the latent state is linear, and the dynamics of the latent state are locally linear. Therefore, in this latent space a Kalman filter can be applied to infer the current state and predict the next state, including uncertainty estimates. Finally, the predicted latent state is mapped to a prediction for the observation or some other variable of interest.\\n\\nThe experiments show that the proposed approach slightly outperform LSTM and a GRU based approaches.\\n\\n\\nPOSITIVE ASPECTS\\n----------------\\n- The idea of applying a Kalman filter in a latent space is interesting.\\n- The experimental results show that the proposed approach outperforms LSTM and a GRU based approaches.\\n- The paper is well written.\\n\\nNEGATIVE ASPECTS\\n----------------\\n- The observation noise sigma^obs is a function of the observation itself. This seems strange, since typically the observation does not contain itself the information about how much it has been corrupted by noise. 
This choice should be discussed in more detail, especially what kind of assumptions this implies about the underlying process.\\n- I believe that a more detailed comparison to existing approaches finding a latent space from a sequence of observations would be necessary, both on a technical as well as on an experimental level. For instance, a technical comparison to the approach from Watter et al. 2015 would be appropriate, since it is similar in the sense that the latent space is optimized to have locally linear dynamics. \\nFurthermore, an experimental comparison to Watter et al. 2015 and [1] would be relevant.\\n\\n\\n\\n[1] Wahlstr\\u00f6m et al. 2015 - From Pixels to Torques - Policy Learning with Deep Dynamical Models\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Nice work but misses some significant work from the literature\", \"review\": \"This paper proposes, Recurrent Kalman Network, a modified Kaman filter in which the latent dynamics is projected into a higher dimensional space; efficient inference in this high-dimensional latent space is possible due to the space being locally linear. The state representation, transition, and observation models are learned jointly by backpropagation.\\nThe paper is well written and the model is clearly explained; I also like the simplicity of the idea that uses the same machinery of Kalman filter. However, I believe the authors can improve the presentation of the model and empirical evaluation. \\n\\nIn terms of model presentation, the authors can compare the model with a large set of deep recurrent models that have recently been proposed for modeling time series with nonlinear latent dynamics (e.g. Variational Sequential Monte Carlo, Structured inference networks for nonlinear state space models, Black box variational inference for state space models, Composing graphical models with neural networks for structured representations and fast inference, etc.). For instance, a table of some of these models with their pros and cons can be helpful for guiding the reader.\\n\\nIn terms of model evaluation, the paper needs a better evaluation section specifically on the generative models (see examples above) that are much more suitable for modeling uncertainty compared to LSTM/GRU. More specifically, another approach for alleviating the limitations of Kalman filter would be to use non-linear transitions based on some non-linear functions approximation. This approach has been proposed in deep Kaman filter (Krishnan 2015) and it would be interesting to see how well your model performs compared to that for modeling uncertainty and computing predictive log-likelihood. \\n\\nIn conclusion, I think the paper presents a nice idea but it requires more work in order to pass the ICLR acceptance threshold.\\n\\n----------------------------------------------------------\\nThe authors have addressed my comments and as a result I changed my rating to 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
Byey7n05FQ
Plan Online, Learn Offline: Efficient Learning and Exploration via Model-Based Control
[ "Kendall Lowrey", "Aravind Rajeswaran", "Sham Kakade", "Emanuel Todorov", "Igor Mordatch" ]
We propose a "plan online and learn offline" framework for the setting where an agent, with an internal model, needs to continually act and learn in the world. Our work builds on the synergistic relationship between local model-based control, global value function learning, and exploration. We study how local trajectory optimization can cope with approximation errors in the value function, and can stabilize and accelerate value function learning. Conversely, we also study how approximate value functions can help reduce the planning horizon and allow for better policies beyond local solutions. Finally, we also demonstrate how trajectory optimization can be used to perform temporally coordinated exploration in conjunction with estimating uncertainty in value function approximation. This exploration is critical for fast and stable learning of the value function. Combining these components enable solutions to complex control tasks, like humanoid locomotion and dexterous in-hand manipulation, in the equivalent of a few minutes of experience in the real world.
[ "deep reinforcement learning", "exploration", "model-based" ]
https://openreview.net/pdf?id=Byey7n05FQ
https://openreview.net/forum?id=Byey7n05FQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HyloQUUmgV", "rJlrQ6CZxE", "HklU137bCQ", "r1grAPeCa7", "Syl5JztsaX", "rJxWryti67", "Byen-JFsa7", "SkeXEnOipQ", "rJewsWF937", "SJxZOqMtnQ", "BJgIdkoD2Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544934946865, 1544838429003, 1542695902058, 1542485964546, 1542324705559, 1542324024838, 1542323972491, 1542323242646, 1541210526580, 1541118569296, 1541021550297 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1318/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1318/Authors" ], [ "ICLR.cc/2019/Conference/Paper1318/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1318/Authors" ], [ "ICLR.cc/2019/Conference/Paper1318/Authors" ], [ "ICLR.cc/2019/Conference/Paper1318/Authors" ], [ "ICLR.cc/2019/Conference/Paper1318/Authors" ], [ "ICLR.cc/2019/Conference/Paper1318/Authors" ], [ "ICLR.cc/2019/Conference/Paper1318/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1318/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1318/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper makes novel explorations into how MPC and approximate-DP / value-function approaches, with value-fn ensembles to model value-fn uncertainty, can be effectively combined. The novelty lies in exploring their combination. The experiments are solid. The paper is clearly written.\\n\\nOpen issues include overall novelty, and delineating the setting in which this method is appropriate.\\n\\nThe reviewers and AC are in agreement on what is in the paper. The open question is whether\\nthe combination of the ideas is interesting. \\n\\nAfter further reviewing the paper and results. the AC believes that the overall combination of ideas and related evaluations that make a useful and promising contribution. As evidenced in some of the reviewer discussion, there is often a\\nconsiderable schism in the community regarding what is considered fair to introduce in terms of\\nprior knowledge, and blurred definitions regarding planning and control. The AC discounted some of the\\nconcerns of R2 that related more to discrete action settings and theoretical considerations; these \\noften fail to translate to difficult problems in continuous action settings. The\\nAC believes that R3 nicely articulates the issues of the paper that can be (and should be) addressed in the writing, i.e., to\\ndescribe and motivate the settings that the proposed framework targets, as articulated in the reviews and ensuing discussion.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Useful combination of MPC, approximate-DP, and value-function ensembles\"}", "{\"title\": \"Any additional questions from the reviewers?\", \"comment\": \"Dear Reviewers,\\n\\nThank you once again for taking time to review our paper. Please let us know if we can answer any additional questions about the work, or if the answers to any of your questions require further discussion. If the responses were satisfactory, we kindly request that the reviewers adjust the rating appropriately. 
Thank you once again and we look forward to additional discussions about the work!\"}", "{\"title\": \"additional reviewer input, after consideration of author responses ?\", \"comment\": \"Thank you to everyone for the detailed reviews and the authors for their detailed responses.\\n\\nNow would be a great time to hear from the reviewers as to whether their concerns\\nhave been addressed, and if they wish to make any score adjustments.\\n\\nThanks in advance for this additional input.\\n-- area chair\"}", "{\"title\": \"Summary of responses\", \"comment\": \"We thank all the reviewers and the area chair for taking time to read our paper and providing feedback. The summary of our responses to common questions raised by the reviewers is below. We look forward to continued discussions to address any additional questions.\\n\\n>>> Source of dynamics model\\n\\nIn this work, we assume that we have access to the ground truth dynamics model. We *do not* believe that this is an unreasonable assumption, especially for our motivating problems. Good models based on knowledge of physics or through learning (system identification) are available in most engineering and robotics settings. Indeed, most successful robotics results have been through use of models or simulators (Boston Dynamics, Honda, OpenAI). The work is also directly relevant for fields where dynamics models are available (e.g. character animation in graphics) and simulation to reality transfer, which is gaining a lot of interest in robotic learning.\\n\\nWe also emphasize that knowing the dynamics does not make the problem trivial, and does not imply that one can simply \\u201cpre-solve\\u201d the MDP and deploy the solution. Some aspects of the MDP may be revealed only at deployment time, such as the states where we may want to concentrate the approximation power, or the reward function may be revealed only at deployment time (robot knows physics but does not know which task to solve). We thus feel that algorithms for real-time action selection is an important component to enable robots to behave competently in dynamic environments.\\n\\n>>> Novelty\\n\\nPOLO combines three threads of work into one coherent and elegant algorithm that produces impressive results. All reviewers have pointed out and noted that the motivation and presentation of the algorithm is clear and neat, and the results are impressive. While it may be easy to postulate that bringing together these threads of work is important, the specifics of how to do this to produce robust algorithms with impressive results is highly non-trivial and far from obvious. We feel that the quality of results should be taken into account when assessing the novelty. Indeed, one could argue that landmark results like AlphaGo and AlphaZero do not make deep contributions to any sub-field of RL/ADP, but it remains one of the most impressive feats due to bringing together different algorithmic sub-components and showing impressive results. We also note that the component of planning/MPC to explore, which we demonstrate in the maze example, has not been explored in continuous control.\\n\\n\\nWe hope that the above clarifications also help to resolve other questions that were raised. In particular, our goal is *not* to bridge model-free and model-based RL methods; nor is to provide strong performance bounds for MPC. We make no claims about the former, and we merely use the latter as a motivation to develop a practical algorithm. 
While these are very important questions, they are not our focus and beyond the scope of the current submission. We kindly request that our paper be evaluated on the basis of results for the problem setting we study, as opposed to insights to other problem domains/settings.\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you for taking time to review our paper, and for your analysis and review. We address your two concerns as follows:\\n\\n=======================\\nRegarding source of model\\n=======================\\nIn this work, we assume that we have access to the dynamics model of the environment. We do not believe that this is a severe limitation because reasonable dynamics models are known for a majority of complex engineered systems including robotics. Indeed, most success stories in robotics are through use of models/simulators. Notable examples include Boston Dynamics\\u2019 Atlas, Honda\\u2019s Asimo, and the recent in-hand manipulation results from OpenAI. There is also a growing body of work and interest in simulation to reality transfer in RL for robotics, and we believe that POLO would serve as a strong baseline method for this research direction. POLO is also complementary to fields like learning dynamics models and nonlinear system identification.\\n\\nFurthermore, we also want to emphasize that knowing the dynamics does not make the problem trivial. Certain aspects of the MDP may be revealed only at run-time, thereby ruling out the option of essentially pre-solving the MDP before deployment time. For instance, we may not know the states to concentrate the approximation power of policy search or dynamic programming methods till deployment time. The reward function may also be revealed only at deployment time (robot knows physics, but does not know what task to do till the human tells it). Thus, having algorithms that can compute good actions at run-time is critical for a variety of settings, and we show in our results that POLO outperforms MPC.\\n\\n=======================\\nComparisons to model-free RL\\n=======================\\nModel-free RL does not assume explicit knowledge of the dynamics, which is certainly a weaker assumption that in the POLO case. However, model-free RL has predominantly been demonstrated only in simulated environments where a model is available by definition (e.g. AlphaGo, Atari, MuJoCo). We believe that POLO would be an important contribution for researchers studying simulation to reality transfer, since it is orders of magnitude more efficient than running model-free RL in simulators. We have updated the paper to reflect this comparison more accurately.\\n\\n=======================\\nSignificance and novelty\\n=======================\", \"polo_combines_three_important_and_deep_threads_of_work\": \"MPC, approximate dynamic programming, and exploration. The primary contribution of this work is to connect the three threads of work, into a simple and elegant algorithm, as opposed to making a contribution to any one of the streams. We believe that combining the threads of work into a practical algorithm that produces impressive results to be important and valuable. We emphasize that all reviewers found the motivation and presentation of the algorithmic framework to be elegant. While the combination of MPC and value function may appear seemingly straightforward, it has not been found effective in continuous control in the past. For example, Zhong et al. 
study the setting of learning value function to help MPC and found the contribution of the value function to be minimal in their settings. We also emphasize that combining MPC and uncertainty quantification to do efficient and targeted exploration for continuous control has not been studied in the past.\\n\\n=======================\\nSummary\\n=======================\\nTo summarize, while each component of POLO is well studied, combining them into a practical algorithm that produces impressive results is far from obvious. Combining the different components allows POLO to synthesize and learn competent behaviors on the fly for high dimensional systems. While we assume to know the dynamics model, this is not an outlandish assumption given the prevalence of complex model based robotic control in the real world, and the growing body of work in learning dynamics models, intuitive physics, and simulation to reality transfer.\"}", "{\"title\": \"Response to review (2/2)\", \"comment\": \"=======================\\nRelated Works\\n=======================\\nThank you for pointing out the GATS paper, we have included a citation to it in our updated submission. As discussed earlier, the broad idea of combining planning and value function learning is not new. However, intuitions and lessons learned from discrete settings rarely transfer to continuous domains. For instance, global value or Q learning methods have not produced great results in continuous control with high-dimensional action spaces, while DQN performs very well in Atari which has a small number of discrete actions. Similarly, very different planning approaches are used in discrete action settings (e.g. UC-Trees) and continuous robotics problems (e.g. iLQG, PI^2, RRT). We emphasize that in the continuous control settings, we can synthesize controllers orders of magnitude more efficient than currently used approaches like PPO in the OpenAI dexterous hand work.\\n\\nThanks for pointing out the other papers studying Bayesian linear regression, we have included citations to those as well. We would like to emphasize that the computational view of Bayesian regression is not the contribution of this work. Rather, we use it as a means to perform uncertainty estimation and drive exploration in the POLO framework.\\n\\n=======================\\nAnswers to other questions\\n=======================\\n- Regarding equation 6, we actually *do not* claim that our approach corresponds to UCB. Rather, we only say that log-sum-exp is a risk seeking objective and corresponds to optimism in the face of uncertainty, and this broad heuristic has been used successfully in other works.\\n- Regarding Lemma 2, this is not a primary contribution of our work, and is fairly elementary. We use it primarily as a motivation for the practical algorithm we develop. We agree that the L_inf norm bounds are loose and tighter bounds would be great, but that is orthogonal to the main points of this paper.\\n\\n=======================\\nSummary\\n=======================\\nIn summary, we have presented an elegant framework and algorithm that offers tangible benefits in the space of continuous control. This enables solutions to complex control problems orders of magnitude more efficiently than currently used techniques. 
The work should be evaluated based on the clean presentation and strong empirical results as opposed to weak connections to problems and bounds we do not focus on.\"}", "{\"title\": \"Response to review (1/2)\", \"comment\": \"Thank you for taking time to review our paper and for the feedback. We address your concerns below, and hope that our clarifications would help appreciate the work better. We look forward to continued fruitful discussions.\\n\\n=======================\\nSignificance & Novelty\\n=======================\", \"polo_combines_three_important_and_deep_threads_of_work\": \"MPC, approximate dynamic programming, and exploration. The primary contribution of this work is to connect the three threads of work as opposed to making a contribution to any one. We believe that combining them into a simple and elegant algorithm that produces impressive results is important and valuable. We emphasize that all reviewers found the motivation and presentation of the algorithmic framework to be elegant. While the combination of MPC and value function may seem straightforward, it has not found wide applicability in continuous control settings in the past. For example, Zhong et al. study the setting of learning value function to help MPC and found the contribution of the value function to be minimal in their settings. We also emphasize that combining MPC and uncertainty quantification to do efficient and targeted exploration has not been studied in the past in continuous control settings.\\n\\nOur empirical study attempts to isolate individual benefits enabled by each component in the POLO framework. Firstly, we have clearly demonstrated that learned value functions can support short horizon MPC. This has not been explored extensively in controls applications, and most MPC works do not consider learning a value function using the interaction data. Secondly, we demonstrate the utility of uncertainty quantification and MPC for exploration, through the maze example. Further, we demonstrate that MPC accelerates value function learning. While individual components may have been suggested before (Bellman himself suggests using prior experience to reduce planning computation), we present all the benefits in one elegant framework that actually achieves very strong empirical results in practice as noted by other reviewers.\\n\\n=======================\\nKnown dynamics model\\n=======================\\nFirst and foremost, we emphasize that in the known dynamics setting our algorithm significantly outperforms model-free RL methods like policy gradient. While model-free RL obviously does not require access to a model, the overwhelming majority of results in RL are in simulated environments (e.g. AlphaGo, Atari etc.) where a model is available by design. Furthermore, the majority of successful results in robotics are also through model-based methods (eg Boston Dynamics' Atlas, Honda's Asimo, OpenAI's dexterous hands). Thus, one can interpret POLO as a very strong model-based baseline that model-free RL algorithms can strive to compete with, or as a powerful vehicle with direct applicability for simulation to reality transfer, which is a topic of immense interest in robot learning.\\n\\nFurthermore, we wish to point out that knowing the dynamics does not make the problem trivial. Certain aspects of the MDP may be revealed only at run-time, thereby ruling out the possibility of pre-solving the MDP. 
For instance, we may not know the states to concentrate the approximation power of policy search or dynamic programming methods till deployment time. The reward function may also be revealed only at deployment time (robot knows physics, but does not know what task to do till the human tells it). Thus, having algorithms that can compute good actions at run-time is critical for a variety of settings, and we show in our results that POLO outperforms MPC.\\n\\nFinally, we wish to point out that we make explicit the assumption of knowing the dynamics model, and do not even attempt to bridge model-free and model-based RL methods (as used in the connotation of recent papers). We feel that it is important to not judge the work on the basis of a problem we are not attempting to solve.\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you for taking time to review our paper and for the constructive feedback. We greatly appreciate the comment that you enjoyed the exposition, assertions, and results in the paper. We look forward to continued fruitful discussions!\\n\\n==================\\nProblem Setting\\n==================\\nThe agent knows the MDP dynamics, but the MDP can be very complex with some information about the MDP revealed only at deployment time. Hence, it is not feasible in general to \\u201cpre-solve\\u201d the MDP and simply deploy the solution. For instance, we may know the state distribution only at deployment time and hence not know where to concentrate the approximation power in policy gradient or dynamic programming methods. Also, the reward function may be revealed only at deployment time (the robot knows physics but doesn\\u2019t know which task to do until human command). This is the general premise of real-time MPC which has enjoyed tremendous success in controlling complex systems in engineering and robotics. At the same time, we note that if there is a possibility to pre-solve the MDP before deployment, POLO can be used for this purpose as well and our experiments show that POLO is more efficient than fitted value iteration.\\n\\n=======================\\nSignificance and novelty\\n=======================\\nFirst and foremost, we emphasize that POLO produces very impressive results for hard continuous control tasks as noted by all the reviewers. POLO requires 1 CPU hour as opposed to 500 CPU hours reported by OpenAI (our numbers with PG are similar as well, and we will include these with the final paper). While model-free RL obviously does not require access to a model, the overwhelming majority of results in RL (e.g. AlphaGo, Atari, MuJoCo) are in simulated environments where a model is available by design. Model based methods have also been very successful in robotics (e.g. Boston Dynamics\\u2019 Atlas, Honda\\u2019s Asimo, OpenAI\\u2019s dexterous hands). Thus, we believe that knowing the dynamics model is not a severe limitation. One can interpret POLO as a very strong model-based baseline that model-free RL algorithms can strive to compete with, or as a powerful vehicle with direct applicability for simulation to reality transfer, which is a topic of immense interest in robot learning.\", \"polo_combines_three_important_and_deep_threads_of_work\": \"MPC, approximate dynamic programming, and exploration. The primary contribution of this work is to connect the three threads as opposed to making a contribution to any one. 
We believe that combining these threads of work into a simple and elegant algorithm that produces impressive results to be important and valuable. We emphasize that all reviewers found the motivation and presentation of the algorithmic framework to be elegant. Furthermore, combining MPC and uncertainty quantification to do efficient and targeted exploration has not been explored in the past in continuous control.\\n\\n=======================\\nReg. alternate approach\\n=======================\\nYou are indeed correct that the core question is about action selection with bounded resources at run-time. In this setting, using any RL/DP algorithm on 7 cores, it is natural to focus the search process around the current state of interest due to limited resources. Thus, the suggested approach reduces to MPC -- 7 cores perform local rollouts which are then combined by the final core in some way -- either non-parametric blending with exponentiated costs (MPPI), a fitted form of iLQG, or some alternative. We show in our results that POLO outperforms trajectory centric RL which is synonymous with MPC.\\n\\n=======================\\nAdditional comments\\n=======================\", \"we_will_include_additional_discussion_about_the_following_suggested_components_in_the_final_version\": \"(a) trajectory optimization vs MPC; (b) H-step Bellman backups; (c) error bars for the plots.\\nWe agree that trajectory optimization has broader connotations than MPC. In this work, we used it in the context of real-time trajectory optimization which is synonymous with MPC. We will clarify the distinctions in the paper.\\nWe also emphasize that Lemma 2 is not a primary contribution of the paper -- it primarily serves as a motivation for the algorithm we develop. We agree that prior work should have Lemma 2, since it is fairly elementary, and will include additional citations if we find the appropriate sources.\"}", "{\"title\": \"Lucid paper with nice ideas, but problem setting not completely clear\", \"review\": [\"This paper was a joy to read. The description and motivation of the POLO framework was clear, smart, and sensible. The fundamental idea is to explore the interplay between value-function estimation and model-predictive control and demonstrate how they benefit one another. None of these ideas is fundamentally new, but the descriptions and their combination is very nice.\", \"As I finished the paper, though, I was left with a lingering lack of understanding of the exact problem setting that is being addressed. The name is cute but didn't help clarify. As I understand it:\", \"we have a correct dynamics model (I'm assuming that's what \\\"nominal dynamics model\\\" means) and a good trajectory optimization algorithm\", \"the agent has limited online cognitive capacity\", \"there is no opportunity for offline computation\", \"If offline computation time were available, then we could run this algorithm (or your favorite other RL algorithm) in the agent's head before taking any actions in the actual world. That does not seem to be the setting here, although it does seem to me that you might be able to show that POLO is a good algorithm for finding a value function, offline, with no actual interaction with the world.\", \"So, fundamentally, this paper is about action under computational time constraints. One strategy would be for the robot to use 7 of its cores to run your favorite approximate DP / RL algorithm in parallel with 1 core that's used for action selection. 
Why is that worse than your algorithm 1?\", \"Setting this question aside, I had some other comments:\", \"It is better *not* to use \\\"trajectory optimization\\\" and \\\"model-predictive control\\\" interchangeably. I can use traj opt in other circumstances (e.g. with open loop trajectory following) and could use other planners for MPC.\", \"Some version of lemma 2 probably (almost certainly) already exists somewhere in the literature; I'm sorry, though, that I can't point you to a concrete reference.\", \"The argument about MPC letting us approximate H Bellman backups is plausible, but seems somewhat subtle; it would be good to elaborate it in some more detail.\", \"The set of assertions and experiments is very nice.\", \"Why are no variances shown in figure 3? Why does performance seem to degrade after a certain horizon.\", \"This paper doesn't seem really to be about learning representations. I don't know if that's important to the ICLR decision-making.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"limited insight and novelty\", \"review\": \"In this paper, the authors propose POLO, a reinforcement learning algorithm which has access to the model of the environment and performs RL to mitigate the planning cost. For the planning, POLO uses the known model of the environment up to a fixed horizon H and then use an approximated value function in the leaf nodes. This way, instead of planning for an infinite horizon, the planning is factored to a shorter horizon, resulting in lower computation cost.\\n\\nThe novelty and motivation behind this approach is limited. Similar or even more general approach for discrete action space is introduced in \\\"Sample-Efficient Deep RL with Generative Adversarial Tree Search\\\" where they also learn the model of the environment and additionally consider the error due to the model estimation. There is also a clear motivation in the mentioned paper while I could not find a convincing one for the current paper. \\nPutting the novel limitation aside, both of these paper, the current paper, and the paper I mentioned, suffer from very lose estimation bounds. Both of these works bound somewhat similar (not the same) things via L_inf error of value function which in practice does not necessarily result in useful or insightful upper bounds (distribution dependent bound is desired). Moreover, with the assumption of knowing the environment model, the implication of the current work is significantly limited.\\n\\nThe authors do a good job of writing the paper and the paper is clear which is appreciatable.\\n\\nIn equation 6 the authors use log-sum-exp and claim it corresponds to UCB, but they do not provide any evidence to support their claim. \\n\\nIn addition, the Bayesian linear regression in the tabular setting is firstly proposed in Generalization and Exploration via Randomized Value Functions and beyond tabular setting (the setting in the current paper) was proposed in Efficient Exploration through Bayesian Deep Q-Networks. \\n\\nThe claims in this paper are not strong enough and the empirical study does not strongly support or provide sufficient insight. 
For example experiments in section 3.2 does not provide much insight beyond common knowledge.\\n\\nWhile bridging the gap between model based and model free approaches in RL are significantly important research directions in RL, I do not find the current draft significant enough to shed sufficient light into this topic.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Nice results but lean technical contribution\", \"review\": \"This paper proposes to combine fitted value iteration with model predictive control (MPC) to speed up the learning process. The value iteration is the \\\"Learn offline\\\" subsystem while MPC is the \\\"Plan online\\\" subsystem. In addition, this paper also proposes an exploration technique that increases exploration if the multiple value function estimators disagree. The evaluation is complete and shows nice results.\\n\\nHowever, I did not rank this paper high for two reasons. First, it is not clear to me how the model is acquired in MPC. Does the method learn the model? Does the method linearize the dynamics and assume a linear model? I am not sure. I suspect that the method just uses the simulator as the model. If it is the case, the method is not so useful because for complexity systems, such as humanoids, we do not know the model. And the comparisons with model-free learning algorithms are not fair because the paper assumes that the model is given. If this is not the case, I suggest that a more detailed description of MPC should be presented in Section 2.3.\\n\\nSecond, the technical contributions are lean. The three main components, 1) fitted value iteration, 2) MPC and 3) exploration based on multiple value function estimates, are not novel. The combination of them seems straight forward. For example, the H-step Bellman update (Section 2.3) is a blend between Monte-Carlo method and Q learning. It seems to be similar to the TD(\\\\lambda) method. Thus, it is not surprising that it can accelerate convergence of value function.\\n\\nFor the above reasons, I would not recommend accepting this paper at this time.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
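The "limited insight and novelty" review in the record above summarizes POLO's planning scheme: roll the known model of the environment forward for a fixed horizon H and score the leaf state with an approximate value function, so that infinite-horizon planning is replaced by a truncated lookahead. The short Python sketch below illustrates that H-step lookahead pattern in spirit only: it is not the authors' implementation, and the toy dynamics, reward, value function, horizon, and random-shooting action sampling are all assumptions introduced solely so the example runs.

import numpy as np

rng = np.random.default_rng(0)

def step(state, action):
    # Toy scalar dynamics, standing in for the known model of the environment.
    return 0.9 * state + action

def reward(state, action):
    # Toy cost: stay near the origin using small actions.
    return -(state ** 2 + 0.1 * action ** 2)

def value_fn(state):
    # Stand-in for the approximate value function applied at the leaf of the rollout.
    return -2.0 * state ** 2

def h_step_lookahead(state, horizon=5, n_samples=256):
    # Score sampled H-step action sequences by summed reward plus terminal value,
    # then execute only the first action -- the MPC pattern discussed in the reviews.
    actions = rng.uniform(-1.0, 1.0, size=(n_samples, horizon))
    returns = np.zeros(n_samples)
    for i in range(n_samples):
        s = state
        for t in range(horizon):
            returns[i] += reward(s, actions[i, t])
            s = step(s, actions[i, t])
        returns[i] += value_fn(s)
    return actions[np.argmax(returns), 0]

s = 2.0
for _ in range(10):
    s = step(s, h_step_lookahead(s))
print("state after 10 planned steps:", s)

In this truncated form the terminal value carries information from beyond the horizon, which is the sense in which the reviews describe MPC as approximating an H-step Bellman backup.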
Syx0Mh05YQ
Learning Grid Cells as Vector Representation of Self-Position Coupled with Matrix Representation of Self-Motion
[ "Ruiqi Gao", "Jianwen Xie", "Song-Chun Zhu", "Ying Nian Wu" ]
This paper proposes a representational model for grid cells. In this model, the 2D self-position of the agent is represented by a high-dimensional vector, and the 2D self-motion or displacement of the agent is represented by a matrix that transforms the vector. Each component of the vector is a unit or a cell. The model consists of the following three sub-models. (1) Vector-matrix multiplication. The movement from the current position to the next position is modeled by matrix-vector multiplication, i.e., the vector of the next position is obtained by multiplying the matrix of the motion to the vector of the current position. (2) Magnified local isometry. The angle between two nearby vectors equals the Euclidean distance between the two corresponding positions multiplied by a magnifying factor. (3) Global adjacency kernel. The inner product between two vectors measures the adjacency between the two corresponding positions, which is defined by a kernel function of the Euclidean distance between the two positions. Our representational model has explicit algebra and geometry. It can learn hexagon patterns of grid cells, and it is capable of error correction, path integral and path planning.
[ "vector", "grid cells", "positions", "vector representation", "matrix representation", "representational model", "model", "agent", "matrix", "current position" ]
https://openreview.net/pdf?id=Syx0Mh05YQ
https://openreview.net/forum?id=Syx0Mh05YQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SyeloZtfgE", "rklaoMVFCm", "B1xoKkC_0Q", "r1xO0OpuAX", "HJe9Ndau0m", "SyxSaDadCQ", "B1xEovpuRm", "Hke2XPpdC7", "B1l3yDTuCm", "Hkgw09yBaQ", "rJxh001bTX", "rJeFrS4J6X", "S1ls-yVtnm", "SkeifEYIhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544880535639, 1543221925129, 1543196546956, 1543194832405, 1543194673885, 1543194556625, 1543194524486, 1543194404136, 1543194340180, 1541892815353, 1541631700347, 1541518656979, 1541123843508, 1540949011262 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1317/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1317/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1317/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1317/Authors" ], [ "ICLR.cc/2019/Conference/Paper1317/Authors" ], [ "ICLR.cc/2019/Conference/Paper1317/Authors" ], [ "ICLR.cc/2019/Conference/Paper1317/Authors" ], [ "ICLR.cc/2019/Conference/Paper1317/Authors" ], [ "ICLR.cc/2019/Conference/Paper1317/Authors" ], [ "ICLR.cc/2019/Conference/Paper1317/Authors" ], [ "ICLR.cc/2019/Conference/Paper1317/Authors" ], [ "ICLR.cc/2019/Conference/Paper1317/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1317/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1317/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The authors have presented a simple yet elegant model to learn grid-like responses to encode spatial position, relying only on relative Euclidean distances to train the model, and achieving a good path integration accuracy. The model is simpler than recent related work and uses a structure of 'disentangled blocks' to achieve multi-scale grids rather than requiring dropout or injected noise. The paper is clearly written and it is intriguing to get down to the fundamentals of the grid code. On the negative side, the section on planning does not hold up as well and makes unverifiable claims, and one reviewer suggests that this section be replaced altogether by additional analysis of the grid model. Another reviewer points out that the authors have missed an opportunity to give a theoretical perspective on their model. Although there are aspects of the work which could be improved, the AC and all reviewers are in favor of acceptance of this paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"meta-review\"}", "{\"title\": \"Thank you\", \"comment\": \"The response is thorough and my concerns are addressed, I have updated the score accordingly.\"}", "{\"title\": \"Thank for such a thorough response\", \"comment\": \"You have addressed my questions very well, and I appreciate that you have updated the document so much. I have raised my evaluation score.\"}", "{\"title\": \"Reply to Reviewer 2 (part 2)\", \"comment\": \"Q4:\\u201cWhat can you say about the quality of the path returned by (10)? Is it guaranteed to converge to a path that ends at y? Is it the globally optimal path? I don\\u2019t agree with your statement that your approach enables simple planning by steepest descent. First of all, are the plans that your method outputs high-quality? Second, if you had solved (10) directly in x-y coordinates, you could have done this easily since it is an optimization problem in just 2 variables. 
That could be approximately solved by grid search. \\u201d\", \"a4\": \"The convergence of a path can be quantified by success rate. Specifically, we can define a path planning to be successful if the distance between the agent\\u2019s end position and the target is less than 0.025, and the distance between each point on the path and the obstacle is larger than 0.025. When a = 0.5 and b = 6, the successful rate is larger than 99%.\\n\\nWe agree with your criticism of our statement. We have removed the statement that compares our method with reinforcement learning and path planning. We have re-positioned our work on path planning, only claiming that our system is capable of implementing a path planning algorithm that is similar to the potential field method in robotics, thus sharing the advantages and disadvantages of the latter. Please see the paragraph at the beginning of Section 5.4. \\n\\nWe agree with your comment about solving path planning in 2D coordinates. Our claim is only that our system based on (v(x), M(dx), A(x, y)) is capable of implementing path planning algorithms based on (x, dx, |x-y|) (here both x and y are 2D). This is actually non-trivial. A learned grid cells system that is capable of path integral is not necessarily capable of path planning. The fact that A(x, y) informs |x-y| in our system is important for path planning. \\n\\nAs to why the mammalian brain adopts the grid cells instead of directly representing the 2D coordinates (e.g., by two neurons), our explanation is that the high-dimensional v enables error correction.\", \"q5\": \"\\u201cI would remove section 5.4. The latent vector v is a high-dimensional encoding of low-dimensional data, so of-course it is robust to corruptions. The corruptions you consider don\\u2019t come from a meaningful noise process, however? I can imagine, for example, that the agent observes corrupted versions of (x,y), but why would v get corrupted?\\u201d\", \"a5\": \"Following your advice, we have moved the error correction part to the appendix. See Section D of the appendix.\\n\\nThe units in v are neurons, and they tend to be noisy in the biological system. The dropout may also be related to the asynchronous nature of neuron activities. Dropout may also be caused by the gradual loss of neurons due to aging or Alzheimer. \\n\\nError correction may provide a justification for the high-dimensional vector encoding of the two-dimensional coordinates.\"}", "{\"title\": \"Reply to Reviewer 2 (part 1)\", \"comment\": \"Thank you for your helpful comments and suggestions.\", \"q1\": \"\\u201cYour paper would be improved by making a similar argument, where you would need to draw much more explicitly on the neuroscience literature.\\u201d\", \"a1\": \"Your advice is followed. We have added a discussion in the related work (Section 2), indicating that the disentangled blocks assumption in our model is related to the \\u201cmodules\\u201d of grid cells. We have also added a quantitative analysis using measures from the neuroscience literature to analyze the spatial activity of the learned units. Please see Section B.3 of the appendix.\", \"q2\": \"\\u201cFurthermore, you should better justify why your simple model is better than prior work? What does the simplicity provide? Interpretability? Ease if optimization? Sample complexity for training? This is important because otherwise it is unclear why you need to perform representation learning. 
The tasks you present (path integral and planning) could be easily performed in basic x-y coordinates. You wouldn\\u2019t need to introduce a latent v. Furthermore, this would improve your argument for the importance of the block-diagonal M, since it would be more clear why interpretability matters.\\u201d\", \"a2\": \"Thanks for the thoughtful comments, which we agree.\\n\\nThe simplicity here is about explaining the patterns observed in grid cells, and simplicity is desired or even required of an explanation of an observed phenomenon. \\n\\nIn particular, in Section 5.1 of the revised version, we show that the emergence of the global hexagon patterns can be explained by a generic local kernel and a generic local motion model, both of which are very simple. \\n\\nIn our work, we show that this simple system is capable of path integral and path planning. \\n\\nWe agree with you that these tasks can be performed in the 2D coordinates. It is a deep question as to why the mammalian brain uses a latent v. The justification we can provide is that the system with a high-dimensional v is capable of error correction, considering the neural system is intrinsically noisy. But there may be deeper or stronger justifications. One speculation is that the neural system may prefer matrix-vector multiplication to addition and subtraction.\", \"q3\": \"\\u201cFinally, you definitely need to discuss the literature on randomized approximations to RBF kernels (random Fourier features). Given the way you pose the representation learning objective, I expect that these would be optimal. With this, it is clear why grid-like patterns would emerge.\\u201d\", \"a3\": \"Thanks for the reference and the insight. We have cited the related papers and compare them to our work at the end of Section 5.2.\\n\\nIn Section 5.1 of the revised version, we show that a local radial basis kernel and a local motion model are enough to explain the emergence of the global hexagon patterns of the grid cells. In Appendix A, we also provide a theoretical understanding. \\n\\nInspired by your comment, we have added an ablation study in Section B.4.1 of the appendix, we show that the motion model v(x+dx) = M(dx) v(x) is necessary for the emergence of the grid patterns. We cannot learn the grid patterns from the localization model A(x, y) = <v(x), v(y)> alone. \\n\\nCompared to random Fourier features, we learn the grid patterns without assuming Fourier basis, our RBF kernel is a local generic one based on the second order Taylor expansion, and we need a motion model for the emergence of grid patterns.\"}", "{\"title\": \"Reply to Reviewer 3 (part 2)\", \"comment\": \"Q7: \\u201cThe experiments about path planning are unconvincing. First of all, the algorithm requires to input absolute positions of every obstacle into equation (9) - (10), which assumes that there is perfect information about the map. Secondly, the search algorithm is greedy and it is not obvious how it would handle a complex maze with cul-de-sac. Saying that \\\"there is no need for reinforcement learning or sophisticated optimal control\\\" is very misleading: the problem here is simplified to the extreme, and fully observed, and any comparison with deep RL algorithms that can handle partial observations is just out of place.\\u201d\", \"a7\": \"We agree with your criticism. We have removed the statement about reinforcement learning and optimal control. 
We have re-positioned our work on path planning, by only claiming that our system is capable of implementing a path planning algorithm that is similar to the potential field method in robotics, thus sharing the advantages and disadvantages of the latter. Please see the paragraph at the beginning of Section 5.4.\", \"now_the_purpose_of_this_section_is_only_to_show_that\": \"our (v(x), M(dx), A(x, y)) system is capable of implementing path planning algorithms based on (v, dx, |x-y|), even though our system does not represent the 2D coordinates x = (x1, x2) explicitly. This is actually non-trivial. A learned system that is capable of path integral is not necessarily capable of path planning. The fact that A(x, y) informs |x-y| in our system is important for path planning.\\n\\nWe suspect that we need both path planning algorithm and a learned policy. The latter may be useful in a familiar environment, while the former may be necessary in an unfamiliar environment. During the path planning process, the grid cells are expected to be active even though the agent is not moving.\"}", "{\"title\": \"Reply to Reviewer 3 (part 1)\", \"comment\": \"Thank you for the helpful comments and suggestions.\", \"q1\": \"\\u201cThe assumption that A(x, y) can be modeled by a Gaussian or exponential (Laplacian?) kernel is limiting, in particular for positions x and y that are far apart.\\u201d\", \"a1\": \"We agree with your concern with the global adjacency. We have studied a generic local adjacency based on the second order Taylor expansion. Please see Sections 5.1, 5.2, and Section B of the appendix.\\n\\nThis generic local adjacency 1 \\u2013 alpha |x-y|^2 appears to be the key for the emergence of the global hexagon grid pattern where alpha determines the metric. \\n\\nMeanwhile, the global adjacency is necessary for the following two reasons. (1) Regulate the metrics of multiple blocks of the hexagon grid units. (2) Inform |x-y| for the purpose of path planning.\", \"q2\": \"\\u201cThere is no discussion about what egocentric vs. allocentric referentials, and dx is assumed to be aligned with (x, y) axes (which are also the axes defining the bounding box of the area). Unlike the other work on learning path integration using an RNN, the linear matrix model can only handle allocentric displacements dx_1, dx_2 (and optional dx_3 in 3D).\\u201d\", \"a2\": \"Inspired by your comment, we have added a section on egocentric model. Please see Section C of the appendix.\\n\\nThe model couples the grid system for head direction and the original grid system for self-position. The coupling is as follows: the vector of the head direction system determines the matrix of the self-position system via an attention or selection mechanism. We find this model quite interesting although we still need more work to refine it. \\n\\nThe head direction system can also be repurposed as a clock and timestamp system.\", \"q3\": \"\\u201cNo consideration is given to non-square areas: would the network also exhibit grid-like behavior if the area was circular?\\u201d\", \"a3\": \"To answer your question, we learn the system in circular and triangular areas and the results are shown in Figure 7 of Section B.2.2 of the appendix. 
Hexagon patterns emerge in both cases.\", \"q4\": \"\\u201cWhat happens if the quadratic parametetrisation of block diagonals is dropped?\\u201d\", \"a4\": \"To answer your question, we have added an ablation study in Section B.4.2 of the appendix, where we remove the block diagonal assumption and the quadratic parametrization, so that we learn a separate motion matrix for each displacement on the discretized 2D grid. With local adjacency, we can still learn hexagon grid patterns when the block size is relatively small. For global adjacency, we cannot learn hexagon grid patterns.\", \"q5\": \"\\u201cThe paper did not use metrics accepted in the neuroscience community for computing a gridness score of the grid cells (although the grid cell nature is evident). There should however be metrics for quantifying how many units represent the different scales, offsets and orientations.\\u201d\", \"a5\": \"Following your advice, we have added a quantitative analysis in Section B.3 of the appendix, using the measures from the neuroscience literature, including gridness score, grid scale and orientation. 76 out of 96 units are classified as grid units according the gridness score.\\n\\nAn interesting result is that the scale measure is proportional to the metric (1/sqrt(alpha_k)) explicitly defined and automatically learned by our method. Please see Figure 8.d.\", \"q6\": \"\\u201cThe authors did not consider (but mentioned) embedding locations from vision, and did not consider ambiguous position embeddings.\\u201d\", \"a6\": \"To address your comment, we have added the following paragraph in Section 3.2 to discuss embedding location for vision.\\n\\n \\u201cOur system can be embedded into the SLAM (simultaneous localization and mapping) system (\\\\cite{whyte2006simultaneous}), which is based on a state space model that consists of a dynamic sub-model for self-position due to self-motion, and an observation sub-model for the observed visual image given the self-position. We can represent the dynamic sub-model by our system, or reformulate the whole model using our scheme. We leave it to future work. \\u201d\\n\\nWe are currently pursuing this direction of research. \\n\\nFor ambiguous position embeddings, in Section D of the appendix, we consider errors in the units and show that our system is capable of error correction. We also added Section D.2 about noisy input of self-motion.\"}", "{\"title\": \"Reply to Reviewer 1\", \"comment\": \"We are very grateful for your positive review and insightful comments.\", \"q1\": \"\\u201cBut I feel the paper also stops just a few steps short of developing a fuller theoretical understanding of what is going on.\\u201d\", \"a1\": \"Following your advice, we have added theoretical analysis. Please see Section 5.1.3 and Section A of the appendix.\\n\\nIn the theoretical analysis, we provide an analytical solution that combines three Fourier plane waves. The analysis is based on a tight frame in 2D. \\n\\nWe believe this analytical solution helps us understand the emergence of hexagon patterns. Meanwhile, our model assumes much less than the analytical solution.\", \"q2\": \"\\u201cFor example the learned solution is quite Fourier like, and we know that Fourier transforms are good for representing position shift in terms of phase shift. That would correspond to block size of two (i.e., complex numbers) in terms of this model. 
So what's wrong with this solution (in terms of performance) and what is gained by having block size of six, beyond simply looking more grid like? It would be nice to go beyond phenomenology and look at what the grid-like solution is useful for.\\u201d\", \"a2\": \"Thanks for the insight.\\n\\nRestricting block size = 2 indeed enables us to learn Fourier plane waves. Please see Figure 7.a. \\n\\nFigure 3.c shows that the path integral error with block size 2 is bigger than other block sizes. \\n\\nIn terms of localization sub-model, a single pair of Fourier plane waves v(x) = exp(i<a,x>) in a block gives us an adjacency function <v(x), v(y)> = cos(<a, x-y>), which does not inform |x-y| very well due to the aperture problem, i.e., if x-y is perpendicular to a, the adjacency is always 1. \\n\\nIn our new result, if we assume a generic local kernel <v(x), v(y)> = 1 \\u2013 alpha |x-y|^2, then we can always learn hexagon grid patterns as long as the block size is greater than or equal to 6, where alpha controls the metric of the block.\"}", "{\"title\": \"List of New Results\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your very helpful reviews. We have tried our best to address all the points you have raised. Please find our detailed replies under your reviews respectively. \\n\\nWe have uploaded the third revision. The following is a summary of the new results we have added relative to the original submitted version. \\n\\n\\n1. Hexagon grid patterns and metrics. \\n\\nPlease see Sections 5.1, 5.2, and Section B of the appendix. \\n\\nBy introducing a generic local kernel based on the second order Taylor expansion, we are able to learn hexagon grid patterns with explicitly defined and automatically learned metrics. \\n\\n\\n2. Theoretical analysis. \\n\\nPlease see Section 5.1.3 and Section A of the appendix. \\n\\n\\n3. Egocentric model that couples two grid systems. \\n\\nPlease see Section C of the appendix. \\n\\nThe model couples the grid system for head direction and the original grid system for self-position. The vector of the head direction system selects or pays attention to the matrix of the self-position system. \\n\\nThe head direction system can also be repurposed as a clock and timestamp system. \\n\\n\\n4. Evaluations in terms of gridness measures and non-square shapes of the region. \\n\\nPlease see Sections B.3 and B.2.2 of the appendix. \\n\\nAn interesting result is that the scale measure is proportional to the automatically learned metric. \\n\\n\\n5. Ablation studies on model assumptions. \\n\\nPlease see Section B.4 of the appendix.\"}", "{\"title\": \"Surprising new result (continued): multiple hexagon blocks with automatically learned metrics\", \"comment\": \"Dear Reviewers,\\n\\nWe have uploaded the second revision that includes a new Subsection 5.2 on learning multiple blocks of grid cells where the metrics or grid sizes alpha_k are automatically learned. \\n\\nFigure 2.a shows the learned blocks and their learned metrics alpha_k. You can see that the learned blocks again show hexagon patterns, and different blocks have different metrics or grid sizes. The metrics are explicitly defined as the curvatures of the local kernels and are learned together with the vector and matrix representations. \\n\\nIn Figure 2.a, the number of cells in each block is 6. In Appendix D, we show the learned blocks with different numbers of cells. 
As long as the number is greater than or equal to 6, the hexagon patterns emerge (for smaller number, the learned cells tend to exhibit square lattice patterns). \\n\\nFigure 2.b shows the heat maps <v_k, v_k(x)> for inferring the location of v = (v_k, k = 1, ..., K). While individual heat maps have multiple firing locations, they add up to the Gaussian kernel with a unique location. The global Gaussian kernel is used to regulate the metrics of different constituent blocks, who vote for the inferred position by their heat maps. \\n\\nTo summarize, our model, while being very simple, explains the following aspects of grid cells at the computational (not necessarily neuroscience) level: (1) hexagon grid patterns. (2) metrics or grid sizes. (3) path integral. (4) path planning. (5) error correction. \\n\\nWe removed the original Subsection 5.2 on learning multiple blocks in the original version. We also shortened the discussion to stay within the page limit. \\n\\nWe will continue to revise our paper according to your advice, and we will reply to your comments soon. \\n\\nThank you for your consideration.\"}", "{\"title\": \"Surprising new result: hexagon and metric, with theoretical analysis\", \"comment\": \"Dear Reviewers,\\n\\nThank you for your precious time and insightful comments. \\n\\nWe have uploaded the first revision to include a new result that we find surprising, interesting and important. Please see Subsection 5.1 of the first revision. \\n\\nTo summarize, we learn a single block of cells with a generic local kernel. Recall the adjacency A(x, y)=f(|x-y|). A second order Taylor expansion of f(r) at r = 0 gives us f(r) = 1-alpha r^2, for small r, where 2 alpha is the curvature of f(r) at 0. The first derivative is 0 because f(r) reaches maximum at 0. We use the localization loss term \\n\\n|<v(x), v(y)> - (1-alpha |x-y|^2)|^2, for |x-y|^2 <= 1.5/alpha. \\n\\nTogether with the motion loss, our learning method unfailingly produces hexagon patterns, and alpha determines the metric or grid size. The hexagon patterns emerge as long as the number of units is greater than or equals to 6 (for smaller number we may learn rectangle patterns). \\n\\nWe also provide a theoretical solution when the number of units equals to 6, based on a tight frame in 2D. \\n\\nWe want to emphasize that both the localization loss and the motion loss are LOCAL, and yet the global hexagon patterns always emerge. Our loss function does not assume any global periodic pattern. It is perhaps the simplest loss function one can find: (1) a second order Taylor expansion for localization loss. (2) a matrix-vector product for motion loss. This is really minimalistic. That is, what we put in is far less than what we get out. \\n\\nWe believe this is the most important result of our work, because after all, the grid cells are characterized by hexagon patterns of different sizes. This is why they are called grid cells in the first place. We now can explain this crucial piece of puzzle. \\n\\nWe will incorporate this local kernel loss term into our original global kernel loss so that we will learn the metric alpha for each block automatically. \\n\\nTo save space, we moved the 3D path planning to appendix. We also added 1D result in appendix. The 1D result can be interpreted as time2vec or time stamp for events.\", \"a_few_key_points\": \"We shall reply to your valuable comments soon and further revise our paper according to your comments and advice. 
But first please allow us to make a few key points here: \\n\\n(1)\\tIn our representation scheme, we NEVER represent the coordinates x = (x1, x2) explicitly. We only represent the position by heat map or one-hot map. Without explicit coordinates, it is not a trivial task to do path planning, and it is very different from path planning in robotics based on explicit coordinates. \\n(2)\\tAbout path planning. Consider a rat leaves his home to forage. He needs path integration to know where he is. But MORE importantly, when he needs to go back home, he needs path planning. Even when he is standing still, his grid cells are changing during path planning, i.e., he is imagining or fantasizing the steps. Our proposed steepest ascent algorithm is of this nature. The rat can also fantasize much bigger step sizes beyond his physical capability in path planning, and our method enables him to do that, see Figure 4(a) for straight path planning. \\n(3)\\tAbout error correction. When talking to people in CS and robotics, a common question is: how come the brain does not use two neurons to represent the two coordinates x = (x1, x2), and instead use many neurons to represent the position. Our error correction experiment may give a justification. The dropout experiment also points to the possibility that the grid cells can work asynchronously, which is typical of biological neural system. We shall explore this issue further. \\n\\nWe shall reply to your comments and upload the second revision soon. \\n\\nThank you for your consideration of our first revision and first reply.\"}", "{\"title\": \"A simple and elegant approach to grid cells that begs for theoretical insight\", \"review\": \"This paper proposes a simple and elegant approach to learning \\\"grid-cell like\\\" representations that uses a high-dimensional encoding of position, together with a matrix for propagating position that involves only local connections among the elements of the vector. The vectors are also constrained to have their inner products reflect positional similarity. The paper also shows how such a representation may be used for path planning.\\n\\nBy stripping away the baggage and assumptions of previous approaches, I feel this paper starts to get at the essence of what drives the formation of grid cells. It is still steps away from having direct ties to neurobiology, but is trying to get at the minimal components necessary for bringing about a grid cell like solution. But I feel the paper also stops just a few steps short of developing a fuller theoretical understanding of what is going on. For example the learned solution is quite Fourier like, and we know that Fourier transforms are good for representing position shift in terms of phase shift. That would correspond to block size of two (i.e., complex numbers) in terms of this model. So what's wrong with this solution (in terms of performance) and what is gained by having block size of six, beyond simply looking more grid like? 
It would be nice to go beyond phenomenology and look at what the grid-like solution is useful for.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Elegant but simplistic model for grid cells; unnecessary extension to path planning\", \"review\": \"Updated score from 6 to 7 after the authors addressed my comments below.\", \"previous_review\": \"This paper builds upon the recent work on computational models of grid cells that rely on trainable (parametric) models such as recurrent neural networks [Banino et al, 2018; Cueva & Wei, 2018]. It focuses entirely on path integration in 2D and 3D, from velocity inputs only, and it relies on two sub-networks: the motion model (an RNN) and the localization model (a feed-forward network). The avowed goal of the paper is to build a very simple and linear model for grid cells.\\n\\nBy linearly embedding the position x into a high-dimensional hidden vector v(x) (e.g., 96 elements), it can model motion using a linear model relying on matrix-vector multiplication: v(x + dx) = M(dx) v(x), where dx is the 2D or 3D displacement, v(.) is a vector and M(.) is a matrix. The embeddings v(.) are learnable and the paper assumes a square or cubic grid of N*N or N*N*N possible positions x (with N=40); these embeddings are also normalized to unit length and obey the kernel constraint that the dot-product between any two positions' vectors v(x) and v(y) is a Gaussian or an exponential function. The motion matrix is represented as block diagonal, where each block is a rotation of subvector v_k(x) into v_k(x + dx), where each block corresponds to a specific grid cell, and where the diagonal block is further expressed as a quadratic function of dx_1, dx_2, dx_3 elements of the displacement vector.\", \"the_strengths_of_the_paper_are_that\": \"1) The supervision of the localization subnetwork only depends on Euclidean proximity between two positions x and y and therefore uses relative positions, not absolute ones. Similarly, the path integration supervision of the motion model uses only relative displacements.\\n2) The resulting rate maps of the hidden units seem perfect; the model exhibits multi-scale grid behaviour.\\n3) The idea of using disentangled blocks, rather than injecting noise or using dropout and a softmax bottleneck as in [Banino et al, 2018], is interesting.\\n4) The model accumulates little path integration error over 1000 step-long episodes.\", \"the_weakness_of_the_paper_is_its_simplicity\": \"1) The assumption that A(x, y) can be modeled by a Gaussian or exponential (Laplacian?) kernel is limiting, in particular for positions x and y that are far apart.\\n2) There is no discussion about what egocentric vs. allocentric referentials, and dx is assumed to be aligned with (x, y) axes (which are also the axes defining the bounding box of the area).\\n3) Unlike the other work on learning path integration using an RNN, the linear matrix model can only handle allocentric displacements dx_1, dx_2 (and optional dx_3 in 3D).\\n4) No consideration is given to non-square areas: would the network also exhibit grid-like behavior if the area was circular?\\n5) What happens if the quadratic parameterisation of block diagonals is dropped?\\n6) The paper did not use metrics accepted in the neuroscience community for computing a gridness score of the grid cells (although the grid cell nature is evident). 
There should however be metrics for quantifying how many units represent the different scales, offsets and orientations.\\n7) The authors did not consider (but mentioned) embedding locations from vision, and did not consider ambiguous position embeddings.\\n\\nThe experiments about path planning are unconvincing. First of all, the algorithm requires to input absolute positions of every obstacle into equation (9) - (10), which assumes that there is perfect information about the map. Secondly, the search algorithm is greedy and it is not obvious how it would handle a complex maze with cul-de-sac. Saying that \\\"there is no need for reinforcement learning or sophisticated optimal control\\\" is very misleading: the problem here is simplified to the extreme, and fully observed, and any comparison with deep RL algorithms that can handle partial observations is just out of place.\\n\\nIn summary, the authors have introduced an interesting and elegant model for grid cells that suffers from simplifications. The part on path planning should be cut and replaced with more analysis of the grid cells and an explanation of how the model would handle egocentric velocity.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"The motivation for this work needs to be clarified\", \"review\": \"=Major Comments=\\nThe prior work on grid cells and deep learning makes it clear that the goal of the work is to demonstrate that a simple learning system equipped with representation learning will produce spatial representations that are grid-like. Finding grid-like representations is important because these representations occur in the mammalian brain. \\n\\nYour paper would be improved by making a similar argument, where you would need to draw much more explicitly on the neuroscience literature. Namely, the validation of your proposed representations for position and velocity are mostly validated by the fact that they yield grid-like representations, not that they are useful for downstream tasks.\\n\\nFurthermore, you should better justify why your simple model is better than prior work? What does the simplicity provide? Interpretability? Ease if optimization? Sample complexity for training?\\n\\nThis is important because otherwise it is unclear why you need to perform representation learning. The tasks you present (path integral and planning) could be easily performed in basic x-y coordinates. You wouldn\\u2019t need to introduce a latent v. Furthermore, this would mprove your argument for the importance of the block-diagonal M, since it would be more clear why interpretability matters.\\n\\n\\nFinally, you definitely need to discuss the literature on randomized approximations to RBF kernels (random Fourier features). Given the way you pose the representation learning objective, I expect that these would be optimal. With this, it is clear why grid-like patterns would emerge.\\n\\n=Additional Comments=\\nWhat can you say about the quality of the path returned by (10)? Is it guaranteed to converge to a path that ends at y? Is it the globally optimal path? \\n\\nI don\\u2019t agree with your statement that your approach enables simple planning by steepest descent. First of all, are the plans that your method outputs high-quality? Second, if you had solved (10) directly in x-y coordinates, you could have done this easily since it is an optimization problem in just 2 variables. 
That could be approximately solved by grid search.\\n\\nI would remove section 5.4. The latent vector v is a high-dimensional encoding of low-dimensional data, so of-course it is robust to corruptions. The corruptions you consider don\\u2019t come from a meaningful noise process, however? I can imagine, for example, that the agent observes corrupted versions of (x,y), but why would v get corrupted?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
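The abstract and reviews in the record above spell out the algebra of the grid-cell model: a position embedding v(x), a block-diagonal motion matrix with v(x + dx) = M(dx) v(x), and an inner product <v(x), v(y)> that serves as an adjacency kernel of |x - y|. The sketch below is not the learned model from the paper; it instantiates the Fourier-plane-wave solution mentioned in the discussion (blocks of size two acting as complex phases), with six assumed frequency directions and an assumed magnifying factor, purely to show numerically that these relations can hold.

import numpy as np

freqs = np.array([[np.cos(t), np.sin(t)] for t in np.linspace(0.0, np.pi, 6, endpoint=False)])
scale = 4.0  # assumed magnifying factor relating displacement to phase

def v(x):
    # One (cos, sin) block per frequency direction, normalized to unit length.
    phases = scale * freqs @ x
    return np.concatenate([np.array([np.cos(p), np.sin(p)]) for p in phases]) / np.sqrt(len(freqs))

def M(dx):
    # Block-diagonal motion matrix: each 2x2 block rotates its pair by the phase shift.
    out = np.zeros((2 * len(freqs), 2 * len(freqs)))
    for i, p in enumerate(scale * (freqs @ dx)):
        out[2 * i:2 * i + 2, 2 * i:2 * i + 2] = [[np.cos(p), -np.sin(p)], [np.sin(p), np.cos(p)]]
    return out

x, dx = np.array([0.3, 0.7]), np.array([0.05, -0.02])
print("path-integration error:", np.linalg.norm(v(x + dx) - M(dx) @ v(x)))  # close to zero
print("adjacency vs. distance:", [round(float(v(x) @ v(x + d * np.array([1.0, 0.0]))), 3)
                                  for d in (0.0, 0.05, 0.1, 0.2)])  # decays with distance

Within each two-dimensional block the update is an exact rotation, and the inner product depends only on the displacement x - y; for small displacements it behaves like the local kernel 1 - alpha |x - y|^2 discussed in the authors' replies.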
BJe0Gn0cY7
Preventing Posterior Collapse with delta-VAEs
[ "Ali Razavi", "Aaron van den Oord", "Ben Poole", "Oriol Vinyals" ]
Due to the phenomenon of “posterior collapse,” current latent variable generative models pose a challenging design choice that either weakens the capacity of the decoder or requires altering the training objective. We develop an alternative that utilizes the most powerful generative models as decoders, optimizes the variational lower bound, and ensures that the latent variables preserve and encode useful information. Our proposed δ-VAEs achieve this by constraining the variational family for the posterior to have a minimum distance to the prior. For sequential latent variable models, our approach resembles the classic representation learning approach of slow feature analysis. We demonstrate our method’s efficacy at modeling text on LM1B and modeling images: learning representations, improving sample quality, and achieving state of the art log-likelihood on CIFAR-10 and ImageNet 32 × 32.
[ "Posterior Collapse", "VAE", "Autoregressive Models" ]
https://openreview.net/pdf?id=BJe0Gn0cY7
https://openreview.net/forum?id=BJe0Gn0cY7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1g8Uwb9eE", "r1liliKglV", "SyegyQ0LCm", "S1eyafA807", "BJxg9M0LCQ", "BJgVLMCIAm", "SJxpwu6A2m", "Byl_786uhQ", "r1ixv5dnX", "HJlopG40cX", "BJl4aRZTq7" ], "note_type": [ "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1545373517722, 1544751859308, 1543066327969, 1543066294732, 1543066247891, 1543066188130, 1541490789086, 1541096992491, 1541084914516, 1539355331075, 1539280571790 ], "note_signatures": [ [ "~Jaemin_Cho1" ], [ "ICLR.cc/2019/Conference/Paper1316/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1316/Authors" ], [ "ICLR.cc/2019/Conference/Paper1316/Authors" ], [ "ICLR.cc/2019/Conference/Paper1316/Authors" ], [ "ICLR.cc/2019/Conference/Paper1316/Authors" ], [ "ICLR.cc/2019/Conference/Paper1316/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1316/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1316/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1316/Authors" ], [ "~Adji_Bousso_Dieng1" ] ], "structured_content_str": [ "{\"comment\": \"Nice work & Congrats for acceptance!\", \"i_would_like_to_point_our_work_which_also_mitigates_posterior_collapse\": \")\", \"https\": \"//arxiv.org/abs/1804.03424\", \"title\": \"Missing reference for posterior collapse mitigation\"}", "{\"metareview\": \"Strengths: The proposed method is relatively principled. The paper also demonstrates a new ability: training VAEs with autoregressive decoders that have meaningful latents. The paper is clear and easy to read.\", \"weaknesses\": \"I wasn't entirely convinced by the causal/anticausal formulation, and it's a bit unfortunate that the decoder couldn't have been copied without modification from another paper.\", \"points_of_contention\": \"It's not clear how general the proposed approach is, or how important the causal/anti-causal idea was, although the authors added an ablation study to check this last question.\", \"consensus\": \"All reviewers rated the paper above the bar, and the objections of the two 6's seem to have been satisfactorily addressed by the rebuttal and paper update.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"A new and not too hacky VAE training trick\"}", "{\"title\": \"overall response\", \"comment\": \"We thank all the reviewers for their valuable feedback. All three reviewers agree that the paper is clear and well-written. R1 and R2 highlighted the convincing results of learning useful representations with autoregressive decoders and noted our extensive experiments. R3 was concerned about experiments demonstrating the utility of our technique over other approaches (like beta-VAE and free bits), so we have added additional experiments that show delta-VAEs perform best at learning representations for downstream tasks across a large range of rates (updated Figure 4).\\n\\nWe believe our revised paper presents compelling evidence that delta-VAEs are a simple and effective strategy for training VAEs by constraining the parameters of the variational family to target a minimum rate. 
We have demonstrated improvements in log-likelihood over prior work and an ability to leverage the most recent advances in autoregressive decoders while learning latent representations that are useful for downstream tasks.\\n\\nWe have addressed each of their reviews in detail individually.\"}", "{\"title\": \"re anti-causal structure and altered training objective\", \"comment\": \"Thank you for the comments and questions!\\n\\n> importance of lower-bounding KL vs. encoder architecture\\nWe have performed additional experiments to address these questions. We found that the anti-causal encoder structure alone is not sufficient for preventing posterior collapse, while the delta-VAE alone (constraining the rate) is sufficient. Combining the anti-causal encoder with a beta-VAE objective prevents posterior collapse with small beta, but resulted in worse representations for downstream classification than delta-VAEs (see: new Figure 4).\\n\\n> ablations\\nAblations were performed on a smaller model on CIFAR-10. We have replaced the ablation table with a more extensive figure that shows the performance in terms of log-likelihood and linear classification accuracy for multiple techniques (beta-VAE, free bits, delta VAE in Fig. 4). We see that across all hyperparameter settings delta-VAEs results in better features for classification, and heldout ELBOs that are at least as good as other techniques.\\n\\n> claim of not altering the training objective\\ndelta-VAEs impose a *hard* constraint on the variational family, which is enforced through parameterization of the variational family. This differs from the typical soft or functional constraints that require modifying the objective and solving a constrained optimization problem using e.g. dual ascent or ALM. As we discuss in the text, by imposing hard constraints through parameterization, we do not have to alter the ELBO objectived used at training time.\"}", "{\"title\": \"re auxiliary prior and the generality of our approach\", \"comment\": \"Thank you for your positive comments and valuable feedback!\\n\\n> The quality of Figure 4 is too low.\\nWe have improved the quality of Figure 4 and added plots of rate vs. distortion and accuracy for all techniques (beta-VAE, free bits, delta-VAE). This updated figure highlights the robustness of delta-VAEs across different hyperparameters and rates, and shows that it outperforms other approaches at all rates. This supersedes the earlier results we had in Table 1 that contained only the best achieved performance (in terms of ELBO) for each method.\\n\\n> auxiliary prior\\nFor models that operate at higher rates, the auxiliary prior is critical to achieve SOTA performance and improve sample quality. Fig. 9 in the appendix shows that samples from the AR-1 prior are smoother and exhibit less fine-grained details than samples from the auxiliary prior. Quantitatively for our best CIFAR-10 model the difference in log-likelihood as reported per dimension does not seem large, but the auxiliary prior reduces the KL term by 72% (from 71 bits to 20 bits per image), which translates to the increased coding efficiency (i.e., reduction in distortion per transmitted bit) of 263%! \\n\\n> specific approach vs. framework?\\nWe consider the temporal and independent version of delta-VAEs as two instantiations of the general principle that the variational family should be chosen to not match the prior. Typically variational families are chosen to be maximally flexible (e.g. 
the work on normalizing flows), and here we present evidence that simpler and more constrained variational families are effective at regularizing generative models with rich decoders to learn more useful representations.\"}", "{\"title\": \"re evidence for the effectiveness of delta-VAEs\", \"comment\": \"Thank you for your thoughtful review.\\n\\n> minimal experimentation\\nIn the original text we performed experiments on CIFAR-10, ImageNet, and LM1B to highlight the versatility of our approach. We have performed additional ablations and experiments on CIFAR-10 that shows that our proposed delta-VAE approach outperforms beta-VAE and free bits approaches for learning useful representations across a wide range of rates (Fig. 4).\\n\\n> lack of theory\\nWhile we agree that different training methods may perform better in different settings, we present three reasons in the paper for why delta-VAEs may be preferable:\\n\\n\\n1. Throughout the text we highlight that delta-VAEs do not require altering the training objective of the ELBO. For beta-VAEs, deviations from the ELBO at beta=1 result in an encoder, prior, and decoder that do not obey Bayes rule (Hoffman & Johnson 2016), and thus lead to worse performance in terms of log-likelihood. \\n\\n2. For representation learning, the temporal-VAE approach of pairing an independent prior with a correlated prior resembles slow feature analysis which has been argued to learn more robust invariant features (Turner & Sahani, 2007).\\n\\n3. Ease of hyperparameter tuning. Given a target rate, we can analytically determine and parameterize the variational family such that the the rate is greater than or equal to the minimum target rate. This takes the form of a constraint on the mean and variances for independent delta-VAEs, and a constraint on the correlation for temporal delta-VAEs. In contrast, the relationship between beta and rate is complicated and mode- and data-dependent, thus tuning beta in beta-VAEs can be challenging. Free bits can be unstable and difficult to train, as the gradient goes from 0 to large when the constraint becomes active (see: VLAE). This motivated the authors of VLAE to use beta-VAE (which they name \\u201csoft free bits\\u201d).\"}", "{\"title\": \"Well written paper detailing a slightly different approach to preventing posterior collapse in VAEs.\", \"review\": \"The majority of approaches for preventing posterior collapse in VAEs equipped with powerful decoders to better model local structure involve either: alteration of the ELBO training objective, or a restriction on the decoder structure.\\n\\nThis paper presents an approach which broadly falls into the latter category; by limiting the family of the variational approximation to the posterior, the minimum KL divergence between the prior and posterior is lower bounded to a 'delta' value, preventing collapse.\\n\\nThe paper is well written, and the methodology clearly explained.\\n\\nThe experiments show that the proposed approach (delta VAE combined with the 'anti-causal' architecture) captures both local and global structure, and appears to do so while preserving SOTA discriminative performance on some tasks. 
Tests are performed on both generative image and language tasks.\", \"i_believe_that_the_paper_is_of_low_medium_significance\": \"whilst it does outline a different method of restricting the family of posteriors, it does not give a detailed reasoning (empirical or theoretical) as to why this should be a generally better solution as compared to other approaches.\", \"pros\": [\"Very clear and well written.\", \"Good execution and ablation/experimentation section.\"], \"cons\": [\"Lack of theory (and minimal experimentation) as to why this approach should be better than competing methods.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting idea for an important problem\", \"review\": \"General:\\nThe paper attacks a problem of the posterior collapse that is one of the main issues encountered in deep generative models like VAEs. The idea of the paper relies on introducing a constraint on the family of variational posteriors in such a way that the KL term could be controlled.\\n\\nThe authors propose to use a linear autoregressive process (AR(1)) as the prior. Alternatively, they trained a single-layer LSTM network with conditional-Gaussian outputs as the prior (the auxiliary prior). Additionally, the authors claim that the encoder should contain anti-causal dependencies in order to introduce additional bias that may diminish the posterior collapse.\\n\\nThe experiments present various results on image and text datasets. Interestingly, the proposed techniques allowed to perform on a par with purely autoregressive models, however, the latent variables were utilized (i.e., no posterior collapse). For instance, in Figure 3(a) we can notice that a decoder is capable of generating similar images for given latent variable. A similar situation is obtained for text data (e.g., Figure 12).\\n\\nIn general, I find the paper interesting and I believe it should be discussed during ICLR.\", \"pros\": [\"The paper is well-written and all ideas are clearly presented.\", \"The idea of \\u201chard-coded\\u201d constraints is interesting and constitutes an alternative approach to utilizing either quantized values in the VAE (VQ-VAE) or a constrained family of variational posteriors (e.g., Hyperspherical VAE).\", \"The obtained results are convincing. Additionally, I would like to highlight that at the first glance it might seem that there is no improvement over the autoregressive models. However, the proposed approach allows to encode an image or a document and then decode it. This is not a case for purely autoregressive models.\", \"The introduction of the Slow Features into the VAE framework constitutes an interesting direction for future research.\"], \"cons\": [\"The quality of Figure 4 is too low.\", \"I am not fully convinced that the auxiliary prior is significantly better than the AR(1) prior. Indeed, the samples seem to be a bit better for the aux. prior but it is rather hard to notice by inspecting quantitative metrics.\", \"In general, the proposed approach is a specific solution rather than a general framework. 
Nevertheless, I find it very interesting with a potential for future work.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Anti-causal encoder/causal decoder\", \"review\": \"The paper proposes a method to prevent posterior collapse, which refers to the phenomenon that VAEs with powerful autoregressive decoders tend to ignore the latent code, i.e., the decoder models the data distribution independently of the code. Specifically, the encoder, decoder, and prior distribution families are chosen such that the KL-term in the ELBO is bounded away from 0, meaning that the encoder output cannot perfectly match the prior. Assuming temporal data, the authors employ a 1-step autoregressive (across) prior with an encoder whose codes are independent conditionally on the input. Furthermore, they propose to use a causal decoder together with an anti-causal or non-causal encoder, which translates into a PixelSNAIL/PixelCNN style decoder and an anti-causal version thereof as encoder in the case of image data. The proposed approach is evaluated on CIFAR10, Imagenet 32x32, and the LM1B data set (text).\", \"pros\": \"The method obtains state-of-the-art performance in image generation. The paper features extensive ablation experiments and is well-written. Furthermore, it is demonstrated that the code learns an abstract representation by repeatedly sampling form the decoder conditionally on the code.\", \"cons\": \"One question that remains is the relative contribution of 1) lower-bounding the KL-term 2) using causal decoder/anti-causal encoder to the overall result. Is the encoder-decoder structure alone enough to prevent posterior collapse? In this context it would also be interesting to see how the encoder-decoder structure performs without \\\\delta-constraint, but with regularization as in \\\\beta-VAE.\\n\\nWhat data set are the ablation experiments performed on? As far as I could see this is not specified.\\n\\nAlso, I suggest toning down the claims that the proposed method works \\\"without altering the ELBO training objective\\\" in the introduction and conclusion. After all, the encoding and decoding distributions are chosen such that the KL term in the ELBO is lower-bounded by \\\\delta. In other words the authors impose a constraint to the ELBO.\", \"minor_comments\": [\"Space missing in the first paragraph of p 5: \\\\kappaas\", \"\\\"Auxiliary prior\\\"-paragraph on p 5: marginal posterior -> aggregate posterior?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"RE: Missing reference\", \"comment\": \"Thank you for the pointer. We will consider the paper carefully and will update our citations after the review period.\"}", "{\"comment\": \"Hi,\", \"just_wanted_to_point_out_our_related_paper_https\": \"//arxiv.org/abs/1807.04863 .\", \"title\": \"Missing reference\"}" ] }
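The delta-VAE record above rests on a single idea: restrict the variational family so that the KL term between posterior and prior is bounded away from zero by a chosen delta, rather than altering the training objective. The snippet below is only a hedged illustration of that principle for a one-dimensional Gaussian posterior and a standard-normal prior; clamping the posterior scale is an assumed constraint chosen for simplicity (the paper derives its own constraints on means, variances, and temporal correlations), used here just to make the lower bound concrete.

import math

def kl_gauss(mu, s):
    # KL( N(mu, s^2) || N(0, 1) ) for scalar mean mu and scale s.
    return 0.5 * (mu ** 2 + s ** 2 - 1.0 - math.log(s ** 2))

s_max = 0.5                   # assumed constraint: posterior scale clamped to at most 0.5
delta = kl_gauss(0.0, s_max)  # smallest KL any member of the constrained family can attain
print(f"guaranteed minimum rate: delta = {delta:.4f} nats")
for mu, s in ((0.0, 0.5), (0.3, 0.25), (1.0, 0.4)):
    print(f"mu={mu}, s={s}: KL = {kl_gauss(mu, s):.4f} >= {delta:.4f}")

Because every member of this restricted family pays at least delta nats of rate, a powerful decoder cannot drive the KL term to zero, which is exactly the posterior-collapse failure mode the abstract describes.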
HyxAfnA5tm
Deep Online Learning Via Meta-Learning: Continual Adaptation for Model-Based RL
[ "Anusha Nagabandi", "Chelsea Finn", "Sergey Levine" ]
Humans and animals can learn complex predictive models that allow them to accurately and reliably reason about real-world phenomena, and they can adapt such models extremely quickly in the face of unexpected changes. Deep neural network models allow us to represent very complex functions, but lack this capacity for rapid online adaptation. The goal in this paper is to develop a method for continual online learning from an incoming stream of data, using deep neural network models. We formulate an online learning procedure that uses stochastic gradient descent to update model parameters, and an expectation maximization algorithm with a Chinese restaurant process prior to develop and maintain a mixture of models to handle non-stationary task distributions. This allows for all models to be adapted as necessary, with new models instantiated for task changes and old models recalled when previously seen tasks are encountered again. Furthermore, we observe that meta-learning can be used to meta-train a model such that this direct online adaptation with SGD is effective, which is otherwise not the case for large function approximators. We apply our method to model-based reinforcement learning, where adapting the predictive model is critical for control; we demonstrate that our online learning via meta-learning algorithm outperforms alternative prior methods, and enables effective continuous adaptation in non-stationary task distributions such as varying terrains, motor failures, and unexpected disturbances.
[ "meta-learning", "model-based", "reinforcement learning", "online learning", "adaptation" ]
https://openreview.net/pdf?id=HyxAfnA5tm
https://openreview.net/forum?id=HyxAfnA5tm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJgjx5y-xE", "Syx2MNoERQ", "HylGj6OV0m", "HJg0BTu4C7", "Hkg_xYA3n7", "rygBgFLi3m", "BJgYeRFP2X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544776179332, 1542923284432, 1542913433655, 1542913349826, 1541363952277, 1541265644565, 1541017073225 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1315/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1315/Authors" ], [ "ICLR.cc/2019/Conference/Paper1315/Authors" ], [ "ICLR.cc/2019/Conference/Paper1315/Authors" ], [ "ICLR.cc/2019/Conference/Paper1315/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1315/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1315/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers appreciated this contribution, particularly its ability to tackle nonstationary domains which are common in real-world tasks.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Solid contribution, relevant to some interesting real world settings\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \"Thank you for your review. We added an appendix to the paper that addresses your question, and we have also added this information (as well as illustrative videos) to the project website. To illustrate results with less meta-training data, we have evaluated the test-time performance of models from various meta-training iterations, showing that performance does indeed improve with more meta-training data. To clarify, this statement of performance improving with meta-training data is different from the statement in the text regarding online updating the meta-learner not improving results. We meant that incorporating the EM weight updates during meta-training did not improve results, but we did not mean that additional meta-learning was harmful. We added text at the end of section 5 in the updated paper to reduce the potential for confusion.\\n\\nRegarding the amount of data used, the number of datapoints used during metatraining on each of the agents in our experiments is 382,000: This is 12 iterations of alternating model training plus on-policy rollouts, where each iteration collects data from 16 different environment settings, and each setting consists of 2000 datapoints. At a simulator timestep of 0.02sec/step, this sample complexity converts to around only 2 hours of real-world data.\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \"Thank you for your review. We have corrected the typo in the test in the middle of Algorithm 1: it should have been argmin instead of argmax. We have also clarified the caption of figure 3 to indicate that the two plots simply illustrate two different runs for the indicated agent, showing that our method chooses to assign only a single task variable even throughout runs including changing terrain slopes.\"}", "{\"title\": \"Thank you for your review!\", \"comment\": \"Thank you for your review. We have corrected the typo in both places of Algorithm 1: it should indeed have been the opposite inequality sign, and argmin instead of argmax.\\n\\nWe definitely agree with your comment that a mixture model that grows with time can sometimes be considered quite heavyweight. 
This is precisely where we plan to focus the efforts of our future work, by introducing a refreshing scheme where an offline retraining step can periodically condense the mixture model into fewer components (perhaps in a batch-mode training setting, so not all past data needs to be saved). We are also interested in goals such as making this mixture only as big as the agent \\u201cneeds\\u201d it to be, allowing for better and more compressed sharing and organization of seen data. The performance of this current method makes us hopeful and excited to work toward such future work in this area.\"}", "{\"title\": \"Nice work\", \"review\": \"The authors proposed a new method to learn streaming online updates for neural networks with meta-learning and applied it to multi-task reinforcement learning. Model-agnostic meta-learning is used to learn the initial weight and task distribution is learned with the Chinese restaurant process. It sounds like an interesting idea and practical for RL. Extensive experiments show the effectiveness of the proposed method.\\n\\nThe authors said that online updating the meta-learner did not improve the results, which is a bit surprised. Also how many data are meta-trained is not clearly described in the paper. Maybe the authors can compare the results with less data for meta-training.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"This was a nice proposal of a nonparametric mixture model of NNs initialized with meta-learning for supervised learning under nonstationary distributions.\", \"review\": \"The paper presents a nonparametric mixture model of neural networks for learning in an environment with a nonstationary distribution. The problem setup includes having access to only a few \\\"modes\\\" of the distribution. Training of the initial model occurs with MAML, and distributional changes during test/operation are handled by a combination of online adaptation and creations of new mixture components when necessary. The mixture is nonparametric and modeled with a CRP. The application considered in the paper is RL, and the experiments compare proposed model against baselines that do not utilize meta-learning (achieved in the proposed method with MAML), and baselines which utilize only a single model component.\\n\\nI thought the combination of meta-learning and a CRP was a neat way to tackle the problem of modeling and learning the \\\"modes\\\" of a nonstationary distribution. Applications in other domains would have been nice, but the presented results in RL sufficiently demonstrate the benefits of the proposed method.\\n\\n* Questions/Comments\\n\\nFigure 3 left vs right?\\n\\nIs the test in the middle of Algorithm 1 correct?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Useful method for online adaptation to sudden changes in the modeled environment\", \"review\": \"The paper introduces a method for online adaptation of a model that is expected to adapt to changes in the environment the model models. The method is based on a mixture model, where new models are spawned using a Chinese restaurant process, and where each newly spawned model starts with weights that have been trained using meta-learning to quickly adapt to new dynamics. 
The method is demonstrated on model-based RL for a few simple benchmarks.\\n\\nThe proposed method is well justified, clearly presented, and the experimental results are convincing. The paper is generally clear and well written. The method is clearly most useful for situations where the environment suddenly changes, which is relevant in some real-world problems. As a drawback, using a mixture model (that also grows with time) for such modelling can be considered quite heavy in some situations. Nevertheless, the idea of combining a spawning process with meta-learned priors is neat, and clearly works well.\", \"minor_comments\": [\"Algorithm 1: is the inequality correct, and is T* supposed to be an argmin instead of argmax?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
H1gRM2A5YX
Analysis of Memory Organization for Dynamic Neural Networks
[ "Ying Ma", "Jose Principe" ]
An increasing number of neural memory networks have been developed, leading to the need for a systematic approach to analyze and compare their underlying memory capabilities. Thus, in this paper, we propose a taxonomy for four popular dynamic models: vanilla recurrent neural network, long short-term memory, neural stack and neural RAM and their variants. Based on this taxonomy, we create a framework to analyze memory organization and then compare these network architectures. This analysis elucidates how different mapping functions capture the information in the past of the input, and helps to open the dynamic neural network black box from the perspective of memory usage. Four representative tasks that would fit optimally the characteristics of each memory network are carefully selected to show each network's expressive power. We also discuss how to use this taxonomy to help users select the most parsimonious type of memory network for a specific task. Two natural language processing applications are used to evaluate the methodology in a realistic setting.
[ "memory analysis", "recurrent neural network", "LSTM", "neural Turing machine", "neural stack", "differentiable neural computers" ]
https://openreview.net/pdf?id=H1gRM2A5YX
https://openreview.net/forum?id=H1gRM2A5YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJedwzOel4", "HyxmFMHdJN", "SJxaPnl1yE", "B1xznfekkE", "r1eAq4JkJV", "rklC1N1kyE", "rygFFm111N", "BkxX1hS9Cm", "Byg3_iBcAQ", "BJx18slFA7", "ByxuxqBECm", "B1gmZWimCX", "rkgpvJsmAQ", "HkgHWyimRQ", "BJenaa9QRX", "H1gP7OO5h7", "SkeykfvchX", "HkgtX-7Gh7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544745568152, 1544209019126, 1543601252543, 1543598762183, 1543595157881, 1543594982426, 1543594880933, 1543293915249, 1543293812251, 1543207750568, 1542900207632, 1542856955301, 1542856548583, 1542856445011, 1542856131793, 1541208094663, 1541202390901, 1540661537057 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1314/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1314/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1314/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1314/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1314/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1314/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1314/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1314/Authors" ], [ "ICLR.cc/2019/Conference/Paper1314/Authors" ], [ "ICLR.cc/2019/Conference/Paper1314/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1314/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1314/Authors" ], [ "ICLR.cc/2019/Conference/Paper1314/Authors" ], [ "ICLR.cc/2019/Conference/Paper1314/Authors" ], [ "ICLR.cc/2019/Conference/Paper1314/Authors" ], [ "ICLR.cc/2019/Conference/Paper1314/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1314/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1314/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper presents a taxonomic study of neural network architectures, focussing on those which seek to map onto different part of the hierarchy of models of computation (DFAs, PDAs, etc). The paper splits between defining the taxonomy and comparing its elements on synthetic and \\\"NLP\\\" tasks (in fact, babi, which is also synthetic). I'm a fairly biased assessor of this sort of paper, as I generally like this topical area and think there is a need for more work of this nature in our field. I welcome, and believe the CFP calls for, papers like this (\\\"learning representations of outputs or [structured] states\\\", \\\"theoretical issues in deep learning\\\")). However, despite my personal enthusiasm, the reviews tell a different story.\\n\\nThe scores for this paper are all over the place, and that's after some attempt at harmonisation! I am satisfied that the authors have had a fair shot at defending their paper and that the reviewers have engaged with the discussion process. I'm afraid the emerging consensus still seems to be in favour of rejection. Despite my own views, I'm not comfortable bumping it up into acceptance territory on the basis of this assessment. Reviewer 1 is the only enthusiastic proponent of the paper, but their statement of support for the paper has done little to sway the others. The arguments by reviewer 3 specifically are quite salient: it is important to seek informative and useful taxonomies of the sort presented in this work, but they must have practical utility. 
From reading the paper, I share some of this reviewer's concerns: while it is clear to me what use there is the production of studies of the sort presented in this paper, it is not immediately clear what the utility of *this* study is. Would I, practically speaking, be able to make an informed choice as to what model class to attempt for a problem that wouldn't be indistinguishable from common approaches (e.g. \\\"start simple, add complexity\\\"). I am afraid I agree with this reviewer that I would not.\\n\\nMy conclusion is that there is not a strong consensus for accepting the paper. While I wouldn't mind seeing this work presented at the conference, but due to the competitive nature of the paper selection process, I'm afraid the line must be drawn somewhere. I do look forward to re-reading this paper after the authors have had a chance to improve and expand upon it.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Borderline\"}", "{\"title\": \"Noted\", \"comment\": \"Thank you for your suggestion that I read the paper and reviews carefully. Every paper with borderline scores typically receives a closer look, so that is what was planned.\\n\\nI'm afraid that the paper's inclusion in \\\"a world-famous professor\\u2019s deep learning course\\\", while great news for the authors, is not something I can take into account when considering what recommendation to give.\"}", "{\"title\": \"Revised assessment\", \"comment\": \"You are welcome to adjust your score if you think it reflects your understanding of the paper's strength, but please be reminded there is absolutely no need to agree with other reviewers, or reconcile scores. If you think it's worth a high score, you are more than welcome to keep it like that: you just ideally need to provide further justification for your position to enable the AC and PC to make an informed decision.\"}", "{\"title\": \"Revised assessment: lowered score and confidence based on discusssions\", \"comment\": \"The discussions with the two other reviewers have shown me that my enthusiasm for the matter at hand may have clouded my judgement.\\n\\nAlthough I maintain that the paper may be helpful for practitioners, in particular because of its identifying and comparing different kinds of architectures, the other reviewers do see the lack of clear novelty, the writing and some over-simplifications as important issues. Therefore, although I still believe the paper to be a good paper, I was probably wrong in my first assessment and changed it based on the elements presented in the discussions.\"}", "{\"title\": \"Assessment\", \"comment\": \"Thank you for participating in the discussion with the authors, Reviewer 3. Could you please clarify, in your revised assessment, given the discussion you have had with the authors, where the paper still falls short so as to merit a score of 5?\\n\\nAdditionally, there is a wide spread of scores for this paper. Could you please consider the reviews provided by R1 and R2 and see if there is anything you agree or disagree with in their assessments (and if so, please comment or discuss). There is no requirement that scores be harmonized, but you must ideally at least show consideration for the opinions of the other reviewers.\\n\\nBest,\\nAC\"}", "{\"title\": \"Please consider reviewer1's comments\", \"comment\": \"Hello R2,\\n\\nYour review is at odds with R1's (in particular). 
This is absolutely fine, and I am happy to see you have participated in the discussion/rebuttal process with the authors below. If you could take a moment to read R1's comments and briefly discuss where you disagree and agree, that would be helpful as we need to at least attempt harmonization (but it should not be forced, so it's fine if you agree to disagree).\\n\\nBest,\\nAC\"}", "{\"title\": \"Discuss with other reviewers\", \"comment\": \"Dear Reviewer 1,\\n\\nThank you for your detailed review. Your score is at odds with the other reviewers, which is absolutely fine! I would appreciate if you could take a minute to consider author comments and the other reviews. If you are willing to champion the paper, which you seem to be given your score, please attempt to convince the other reviewers of your viewpoint. Alternatively, if you see merit in their concerns, please adjust your assessment accordingly.\\n\\nIt is okay to agree to disagree at the end of this process, but there must be some attempt at harmonization. \\n\\nThanks!\\n\\nAC\"}", "{\"title\": \"Re.Re.Re_part1\", \"comment\": \"Q1) Unfortunately I still think this is misleading (for me at least). We have seen over the past two decades that RNNs, and LSTMs in particular, have an amazing ability to store multiple different pieces of information, reason over these disparate chunks and access sub-components. Viewing the single vector state as 'one compound event that can only be accessed as a whole' gives the impression of something more rigid in my opinion.\\n\\nIt can be clearly seen from the memory update equations of LSTM,\\nh_t=o_t*tanh(m_t) (READ)\\nm_t=f_t*m_{t-1}+i_t*c_t (WRITE)\\nWhen LSTM reads from its external memory m_t, o_t*tanh(m_t) is obtained. Notice that o_t is a scalar. m_t stores ONE compound event (it can also be seen as the state of the network, different from vRNN, this state memory of LSTM has more flexibility due to its gate mechanism), it cannot be read partially. When there is a new event needs to be stored, it has to either be combined with the old one m_{t-1}( 0<f_t<1, 0<i_t<1) or replace the old event ( f_t=0, i_t=1). The problems for which LSTM give good results only use one composite event at a time.\\nIf the reviewer still insists on his/her opinion, please provide some reference paper. We will be so grateful if we know we make a mistake so we can correct it and improve the quality of the paper. \\n\\nQ2) Thanks for the response about sentiment analysis! My suspicion is that whenever an LSTM performs better than a MANN this paper can just summarize the reason as an explanation of why the underlying task is a low memory-capacity task. However some tasks like language modelling appear (not sentiment analysis, language modelling) appear as though they would benefit from a longer context and are not implicitly low-memory. Thus I don't feel like this really explains what holds back current MANN architectures (LSTMs and the Transformer always beat them) - I would suspect it is because the optimization for DNCs etc. is more difficult because they have a small number of scalar gates (e.g. write gate) which have a big impact on performance. It would be nice to strive for a taxonomy which can make a prediction about task suitability that is non-obvious (e.g. isn't just \\\"does this task require a lot of memory up-front\\\"). \\n\\n\\nFirst of all, please don\\u2019t confuse memory-capacity and how many events can be stored. 
Memory capacity represents the size of the memory, which is related to the number of neurons and parameters. In our paper, we talked about memory access flexibility, i.e., whether the memory can be separated into different sub-blocks to store multiple events. For example, even if one memory network can at most store one event, it may still have a high memory capacity (like LSTM: if the size of its external memory slot is large, its memory capacity can be very high). \\n\\nFor tasks which need one event to be stored, LSTM always performs better than DNC (both of them can store the event and LSTM is easier to train, but trainability is not the goal of this paper). For the high memory-capacity language modeling tasks the reviewer mentioned, we first have to determine whether they need multiple events or not. If not, it makes sense that LSTM beats neural RAM. However, if a task needs multiple events, the current neural RAM architectures may still not work well. In this case, the current architectures need to be improved. And with the proposed taxonomy, at least we know we should either revise LSTM to store multiple events (like stacked LSTM) or change the architecture of neural RAM to make it easier to train (rather than randomly picking several architectures, testing them on the task and comparing their relative error).\\n\\nLike what we said in our last reply, \\u201cmemory is a very abstract concept and the specific memory requirement of a specific task is implicit\\u201d. There are some quantifiers of memory such as memory depth, memory resolution, memory capacity, etc., but none of them can really help to differentiate these memory architectures. Hence, we decided to use \\u201chow many events can be stored and if the access order is restricted\\u201d to quantify the capabilities of different memory networks, which is simple but useful in practice.\"}", "{\"title\": \"Re.Re.Re._part2\", \"comment\": \"Q3) I agree I am not saying that we should use MANNs to simulate neural stacks. I am saying that the reduction is either (a) not possible for some architectures, e.g. a Memory Network, (b) possible but only with significant architectural changes, e.g. a santoro et al. (2016) style MANN, (c) trivial, e.g. a DNC with the temporal linkage matrix, which can implement a stack. Thus I am not sure if it's a valuable reduction. I think the more interesting distinction between memory systems is whether or not the number of trainable parameters is tied to the memory size (as this dictates how much fidelity we have over memory), whether or not the system requires backpropagation-through-time (e.g. MemNet = No, DNC = Yes) for optimization, as this dictates how easy it is to train and whether it will generalize well to longer sequences, how much information in memory can be modified at any given time-step, ...\\n\\nThe reduction from neural RAM to neural stack cannot be applied to all architectures (such as the memory network). We include it in the paper just for completeness. However, whether this reduction is valid for all types of neural RAM does not affect our basic argument (whether the stored events can be accessed in arbitrary order or not). We will add some comments to make this clear in the final paper. \\n\\nThe differences between neural RAM and stack can be analyzed in many aspects. Our paper focuses on whether the stored events can be accessed in arbitrary order or not. 
There are some other inter-class/intra-class differences exists, like what reviewer mentioned, whether the number of parameters equals to the number of memory slots. If a paper\\u2019s target is only to compare neural stack and neural RAM then YES, it should be discussed in detail (although we think it\\u2019s not hard to infer from our proposed taxonomy). And whether the model is BPTT-free is also an interesting problem, but it is not unique to memory networks. Since the goal of this paper is to propose a unifying framework to analyze the memory structures of these four popular networks, we think these distinctions are not closely related to the goal, although they are very interesting. We may leave them for future work.\"}", "{\"title\": \"Reply to response\", \"comment\": \"Thanks for detailed response. Having read the response and other reviews, I have raised my rating to 5. The work is definitely helpful to practitioners, and should be pursued further to the point of suggesting future venues of research.\"}", "{\"title\": \"Re. Re.\", \"comment\": \"Thanks for your response and for the updates that you have made to the paper.\\n\\nQ1 response\\n\\\"We agree with your statement and we are sorry for the misleading \\u201csingle event\\u201d. By \\u201csingle event\\u201d we mean that if there is only one useful event, it can be stored as it is, but if there are multiple useful events, they have to be compressed into one compounded event and can only be accessed as a whole. This has been clarified in the text.\\\" - \\n\\nUnfortunately I still think this is misleading (for me at least). We have seen over the past two decades that RNNs, and LSTMs in particular, have an amazing ability to store multiple different pieces of information, reason over these disparate chunks and access sub-components. Viewing the single vector state as 'one compound event that can only be accessed as a whole' gives the impression of something more rigid in my opinion. \\n\\nQ2 response\\nThanks for the response about sentiment analysis! My suspicion is that whenever an LSTM performs better than a MANN this paper can just summarize the reason as an explanation of why the underlying task is a low memory-capacity task. However some tasks like language modelling appear (not sentiment analysis, language modelling) appear as though they would benefit from a longer context and are not implicitly low-memory. Thus I don't feel like this really explains what holds back current MANN architectures (LSTMs and the Transformer always beat them) - I would suspect it is because the optimization for DNCs etc. is more difficult because they have a small number of scalar gates (e.g. write gate) which have a big impact on performance. It would be nice to strive for a taxonomy which can make a prediction about task suitability that is non-obvious (e.g. isn't just \\\"does this task require a lot of memory up-front\\\").\\n\\nQ3)\\nI agree I am not saying that we should use MANNs to simulate neural stacks. I am saying that the reduction is either (a) not possible for some architectures, e.g. a Memory Network, (b) possible but only with significant architectural changes, e.g. a santoro et al. (2016) style MANN, (c) trivial, e.g. a DNC with the temporal linkage matrix, which can implement a stack. 
Thus I am not sure if it's a valuable reduction.\\n\\nI think the more interesting distinction between memory systems is whether or not the number of trainable parameters is tied to the memory size (as this dictates how much fidelity we have over memory), whether or not the system requires backpropagation-through-time (e.g. MemNet = No, DNC = Yes) for optimization, as this dictates how easy it is to train and whether it will generalize well to longer sequences, how much information in memory can be modified at any given time-step, ... \\n\\nUnfortunately I still do not feel positive about accepting this paper but I have raised my score.\"}", "{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"The intent of our paper was to analyze the type of memory utilized by different architectures to solve sequence learning problems. This is not an easy issue because \\u2018memory\\u2019 is a very abstract concept and the specific memory requirements for a specific task are implicit, which means that quantitatively conceptualizing and analyzing memory is a very hard problem. Cognitive scientists have defined many different types of memory, which shows the richness of the topic, and there are only a few engineering quantifiers of memory such as memory depth and memory resolution, but they are not enough for the ever-growing applications of machine learning. Hence memory quantification is lacking in the current machine learning literature and it is our main contribution. The proposed taxonomy for the four most conventional memory architectures appears as a simple way to quantify the capabilities of extracting past information of each class.\\n\\nOur goal of providing methodologies for the practitioner relegated to a second objective of the paper. It is clear from your questions that our writing was not successful, and we have modified the writing in the final submission to make this point more explicit. As far as we know, our paper addresses for the first time how to exploit the knowledge gained from the different characteristics of the memory within the taxonomy to help users select the type of memory network for an application. However, we agree that the analysis is not complete yet because on the one hand users have to analyze task\\u2019s memory requirements by themselves which is not trivial and on the other hand, the algorithm accuracy is also affected by the size and specific network structure even within a given class of models. But we firmly believe that classifying memory architectures into these four classes and linking the architecture of the learning machine to its descriptive power, as we did in this paper, is a fundamental first step. At least, in this respect, we think this paper is important to the machine learning community. \\n\\n\\n\\nQ1) I actually felt, in the endeavor of creating a simple taxonomy the authors have created confusing simplifications, e.g. \\\"LSTM: state memory and memory of a single external event\\\" to me is mis-leading as we know an LSTM can compress many external events into its hidden units. \\n\\nWe agree with your statement and we are sorry for the misleading \\u201csingle event\\u201d. By \\u201csingle event\\u201d we mean that if there is only one useful event, it can be stored as it is, but if there are multiple useful events, they have to be compressed into one compounded event and can only be accessed as a whole. 
This has been clarified in the text.\\n\\nQ2) It would be more interesting to me, for example, if the paper could thus formalize why NTMs & DNCs (say) do not outperform LSTMs at language modeling, for example. \\n\\nPlease see our reply to Q4) of the second reviewer.\\n\\nQ3) I found the reductions somewhat shady, e.g. the RAM simulation of a stack is possible, however the model could only learn the proposed reduction if the number of write heads was equal to the number of memory slots --- or unless it had O(N) thinking steps per time step, where N is the number of memory slots, so it's not a very realistic reduction. You would never see a memory network, for example, simulating a stack due to the fixed write-one-slot-per-timestep interface.\\n\\nThe purpose of deriving the reductions was to get some insights by comparing the neural stack and neural RAM; we didn\\u2019t suggest using a neural RAM to simulate a neural stack to solve a problem, and that is the reason why we said \\u201cfor the tasks where the previous memory needs to be addressed sequentially, the stack neural network is our first choice.\\u201d \\n\\n\\nQ4) Nit: I'm not sure the authors should be saying they 'developed' four synthetic tasks, when many of these tasks have previously been proposed and published (counting, copy, reverse copy). \\n\\nWe said \\u2018developed\\u2019 because some of our experiments were not the same as before. Our experiments were slightly revised to highlight the advantages and limitations of different memory types. For example, compared to the previous counting task, we added some interference to the input sequences (see details in our \\u2018counting with interference\\u2019 task). By comparing vRNN\\u2019s performance on our \\u2018counting\\u2019 and \\u2018counting with interference\\u2019 tasks, the limitation of internal memory in vRNN was shown more clearly. But since our revision is less novel, we have changed \\u2018develop\\u2019 to \\u2018select\\u2019.\"}", "{\"title\": \"Reply to AnonReviewer3_part2\", \"comment\": \"Q3) To verify the models really learn the task, the authors should include tests on unseen sequence lengths.\\n\\nWe have these results and will include a table showing the accuracy for each model on longer sequence lengths in the revised version.\\n\\nQ4) There remains questions unexplained in NLP tasks such as why multi-slot memory did not show more advantages in Movie Review and why Neural Stack performed worse than LSTM in bAbI data. \\n\\nFrom our observation of the results, all the models solved the sentiment analysis problem mainly based on some discriminating words. Specifically, when feeding the input sequence to the model, the output value would be increased when meeting positive words such as \\u201cgood, love\\u201d and decreased when meeting negative words such as \\u201cdislike, boring\\u201d. If there were many positive words appearing in the text, the sentiment would be judged as positive. Hence, we only need an external memory whose value can be affected by the discriminating words. Although multi-slot memory can store more than one event, as long as it cannot understand the logic of the text, its performance cannot be improved compared to the model with single-slot memory (LSTM). Hence, the multi-slot memory did not show more of an advantage compared to the single-slot memory. 
\\n\\nFor the bAbI data, in order to solve the problem, learning machines need to store all the potential useful facts and read any of them when needed, so a multi-slot external memory whose contents can be randomly accessed is necessary. Hence, both LSTM and neural stack are not suitable for this task. If we apply LSTM or neural stack to the problem, they will try their best to find their approximate solutions. If we apply LSTM to this task, LSTM would compress all the useful information in its external memory, although the right answer is mixed with other information, the output can at least get some information from it. However, if we apply neural stack to this task, the push signal sometimes is very large (d_push~=1, d_pop~=0, d_no_op~=0), which means that the right answer will then be pushed down in the stack which cannot be accessed when needed. Hence, we think although the neural stack tries its best to find the right answer, its more complicated operation may make it is more likely to be stuck in local points. This has been mentioned in the revised manuscript. \\n\\nQ5) Minor potential errors: In Eq. (6), r_{t-1} should be r_t.\\n\\nActually, it\\u2019s not an error. Since in Eq.(3), c_t is a function of h_t, it should be r_{t-1} in Eq(6). But we will change r_{t-1} to r_t in Eq.(6) and h_t to h_{t-1} in Eq.(3) if this form is more formal.\\n\\nQ6) The LSTM presented in Section 3.2 is not the common one. Normally, there should be x_t term in Eq. (3) and h_t=g_{o,t}*\\\\tanh(r_t) in Eq. (6). The author should follow the common LSTM formulas (which may lead to different proofs) or include reference to their LSTM version. \\n\\nWhen the models discussed in our paper were first proposed, their dynamical equations look very different (e.g., in LSTM, c_t is a function of h_{t-1} and x_t; in neural stack, c_t is a function of h_{t-1}). Since the goal of our paper is to analyze the connection and difference between different models, it\\u2019s better to use uniform dynamical equations to describe the memory system. In this way, it\\u2019s much easier to see their essential differences. Hence, we used the revised version of LSTM in our paper since it doesn\\u2019t affect the basic working mechanism of the architecture.\"}", "{\"title\": \"Reply to AnonReviewer3_part1\", \"comment\": \"Q1) The proposed taxonomy is not new. It is a little bit obvious and mentioned before in [1] (Unfortunately, this was not cited in the manuscript). The theorems on inclusion relationship are also obvious and the main contribution of the paper is to formally show that in mathematical forms\\n\\nThank you for mentioning [1] (we have cited it in the revised version for completeness) but we disagree that \\u201cthe proposed taxonomy is not new\\u201d. The authors in [1] simply divided these models into sequential, random access and stack memory architectures, which bears some similarity with the taxonomy proposed in our paper, but it is more superficial and does not go to the mechanisms behind the memory types. Indeed, classifying models according to the type of memory seems obvious, but finding the essential relationship between classes and linking the descriptive power of learning machines to the properties of task data is not a trivial work. (For example, what\\u2019s the difference between internal and external memory and what kind of tasks can they address? 
Our taxonomy showed clearly that the gate mechanism in LSTM and the push/pop/no-op operators in the stack augmented memory had the same function in nature, which had never been mentioned before. Many discussions like these first appeared in our paper.) Many papers proposed fancy models to improve the existing work, however there is still no paper providing a good approach to analyze what the memory architectures can learn and how to select the most parsimonious memory model for a specific task. As far as we know, our paper addresses for the first time how to codify and exploit the knowledge gained from the different characteristics of the memory within the taxonomy to help users select the type of memory network for an application. Moreover, the effectiveness of this analysis framework was also verified in some simple experiments. However, we agree that the analysis is not complete because on the one hand people has to analyze memory requirements of a task by themselves which is not trivial and on the other hand, the accuracy is also affected by the size and specific network structure even within a class. But we firmly believe that classifying architectures into these four classes and linking the architecture of the learning machine to its descriptive power, as we did in this paper, is a fundamental first step, and we think this paper is important to the machine learning community. We admit that the proofs of the theorems are not very hard and we included them to make our argument rigorous.\\n\\nQ2) The experiments on synthetic tasks give some insights into the models\\u2019 operations, yet similar analyses can be found in [2, 3]. \\n\\nAlthough [2][3] and our paper used similar synthetic tasks in experiments, our goal was again very different. In [2][3], their goal was to show the effectiveness of their proposed architecture, so each of them only focused on analyzing the operation of one specific model. However, since our goal is to verify the proposed taxonomy, our experiment focused on showing the connections between different memory types and the growing capability of these four classes of models. For example, although [2] showed the details of the operation of neural stack and the neural stack performing better than LSTM on some tasks, we believe readers still won\\u2019t understand what was the connection between LSTM and neural stack and why LSTM could be seen as a special case of neural stack. However, in our \\u201ccounting with interference\\u201d task, the results showed that (Appendix D.2) content of the top element of the stack (M0 in Fig.12) had the same changing trend as the external memory of LSTM (M0 in Fig.11) and other content in the stack below the top one was redundant. Hence it helped verify our argument \\u201cLSTM can be seen as neural stack with only the top element\\u201d. Because of page limitation, we didn\\u2019t show how the three gates in LSTM related to the push/pop/no-op operators in neural stack, but our argument would be more convincing if these operator comparison results were added. \\nWe struggled to demonstrate the capabilities of each memory architecture, and our decision was to construct four representative tasks that would fit optimally the characteristics of each memory organization. Therefore, these four representative tasks were carefully selected to allow practitioners to compare their own problem with these four tasks and give them some hints to select the right model. 
This has been better explained in the revised paper, but the point is that we are not just simply repeating some existing experiments.\"}", "{\"title\": \"Reply to AnonReviewer1\", \"comment\": \"Q1) The taxonomy presented in the paper relies on an analysis of what the architectures can do, not what they can learn. I believe the authors should acknowledge that the presence of Long Range Dependence in sequences is still hard to capture by dynamic neural networks (in particular RNNs) and that alternate analysis have been proposed to understand the impact of the presence of such Long Range Dependence in the data on sequential learning. I believe that mentioning this issue along with older (http://ai.dinfo.unifi.it/paolo/ps/tnn-94-gradient.pdf) and more recent (e.g. http://proceedings.mlr.press/v84/belletti18a/belletti18a.pdf and https://arxiv.org/pdf/1803.00144.pdf) papers on the topic is necessary for the paper to present a holistic view of the matter at hand.\\n\\nWe agree that the LRD problem is hard for RNNs, and this is the major reason external memories are needed, In the revised version, we have motivated the need for the taxonomy by the LRD problem and included these three papers for completeness. However, we would like to say that the main goal of our paper is indeed to explain what each architecture can learn from data, against your first observation. Based on this we further analyzed what they can do. From your comment we may have emphasized too much the aspect of how they can be used to help the practitioner. We have characterized what information each architecture can extract from the data stream in the revised manuscript.\\n\\nQ2) The arguments given in 5.2 are not most convincing and could benefit from a more thorough exposition, in particular for the sentiment analysis task. It is not clear enough in my view that it is true that \\\"since the goal is to classify the emotional tone as either 1 or 0, the specific contents of the text are not very important here\\\". One could argue that a single word in a sentence can change its meaning and sentiment. \\n\\nFrom our analysis of the results, all the models solve the sentiment analysis problem mainly based on the occurrence and reoccurrence of some discriminating words. Specifically, when feeding the input sequence to the model, the output value would be increased when meeting positive words such as \\u201cgood, love\\u201d and decreased when meeting negative words such as \\u201cdislike, boring\\u201d. If there were many positive words appearing in text, the sentiment would be judged as positive. Since the model only cares about whether the word is positive or negative and its number of occurrences in the text (kind of a \\u201cdensity\\u201d), we deduct that a specific word is not crucial (for example, as long as it is a positive word, whether it is \\u201clove\\u201d,\\u2019like\\u2019 or \\u2018happy\\u2019 is not that important), and translated this as \\u201cthe specific contents of the text are not very important\\u201d. We have elaborated more on this point in the final version because as the reviewer pointed out the explanation is too brief and not specific. \\n\\nBut we have to be aware that this \\u201cdiscriminating words based\\u201d method does not really solve the problem as human. As the reviewer mentioned \\u201ca single word in a sentence can change its meaning and sentiment\\u201d, therefore, in order to really solve this problem, the machine should learn how to decode the text meaning. 
But none of the current models can really achieve this. Although these models can capture some temporal dependencies (for example, if there is a \\u201cdon\\u2019t\\u201d before \\u201clike\\u201d, the sentence is more likely to be negative), the final decision still mostly depends on how many \\u201cdiscriminating words\\u201d appear in the text. That\\u2019s also the reason why they cannot get 100% accuracy.\\n\\nQ3) The written could be more polished. \\nWe have further polished the language.\"}", "{\"title\": \"Very interesting consolidation paper on the analysis of dynamic neural networks\", \"review\": \"I really liked this paper and believe it could be useful to many practitioners of NLP, conversational ML and sequential learning who may find themselves somewhat lost in the ever-expanding field of dynamic neural networks.\\n\\nAlthough the format of the paper is seemingly unusual (it may feel like reading a survey at first), the authors propose a concise and pedagogical presentation of Jordan Networks, LSTM, Neural Stacks and Neural RAMs while drawing connections between these different model families.\\n\\nThe cornerstone of the analysis of the paper resides in the taxonomy presented in Figure 5 which, I believe, should be presented on the front page of the paper. The taxonomy is justified by a thorough theoretical analysis which may be found in appendix.\\n\\nThe authors put the taxonomy to use on synthetic and real data sets. Although the data set taxonomy is less novel it is indeed insightful to go back to a classification of grammatical complexity and structure so as to enable a clearer thinking about sequential learning tasks. \\n\\nAn analysis of sentiment analysis and question answering task is conducted which relates the properties of sequences in those datasets to the neural network taxonomy the authors devised. In each experiment, the choice of NN recommended by the taxonomy gives the best performance among the other elements presented in the taxonomy.\", \"strength\": \"o) The paper is thorough and the appendix presents all experiments in detail. \\no) The taxonomy is clearly a novel valuable contribution. \\no) The survey aspect of the paper is also a strength as it consolidates the reader's understanding of the families of dynamic NNs under consideration.\", \"weaknesses\": \"o) The taxonomy presented in the paper relies on an analysis of what the architectures can do, not what they can learn. I believe the authors should acknowledge that the presence of Long Range Dependence in sequences is still hard to capture by dynamic neural networks (in particular RNNs) and that alternate analysis have been proposed to understand the impact of the presence of such Long Range Dependence in the data on sequential learning. I believe that mentioning this issue along with older (http://ai.dinfo.unifi.it/paolo/ps/tnn-94-gradient.pdf) and more recent (e.g. http://proceedings.mlr.press/v84/belletti18a/belletti18a.pdf and https://arxiv.org/pdf/1803.00144.pdf) papers on the topic is necessary for the paper to present a holistic view of the matter at hand.\\no) The arguments given in 5.2 are not most convincing and could benefit from a more thorough exposition, in particular for the sentiment analysis task. It is not clear enough in my view that it is true that \\\"since the goal is to classify the emotional tone as either 1 or 0, the specific contents of the text are not very important here\\\". 
One could argue that a single word in a sentence can change its meaning and sentiment.\\no) The written could be more polished.\\n\\nAs a practitioner using RNNs daily I find this paper exciting as an attempt to conceptualize both data set properties and dynamic neural network families. I believe that the authors should address the shortcomings I think hinder the paper's arguments and exposition of pre-existing work on the analysis of dynamic neural networks.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Useful taxomony of memory-based neural network\", \"review\": \"Summary\\n=========\\nThe paper analyses the taxonomy over memory-based neural networks, in the decreasing order of capacity: Neural RAM to Neural Stack, Neural Stack to LSTM and LSTM to vanilla RNN. The experiments with synthetic and NLP datasets demonstrate the benefits of using models that fit with task types. \\n\\nComment\\n========\\nOverall, the paper is well written and presents interesting analysis of different memory architectures. However, the contribution is rather limited. The proposed taxonomy is not new. It is a little bit obvious and mentioned before in [1] (Unfortunately, this was not cited in the manuscript). The theorems on inclusion relationship are also obvious and the main contribution of the paper is to formally show that in mathematical forms. The experiments on synthetic tasks give some insights into the models\\u2019 operations, yet similar analyses can be found in [2, 3]. To verify the models really learn the task, the authors should include tests on unseen sequence lengths. There remains questions unexplained in NLP tasks such as why multi-slot memory did not show more advantages in Movie Review and why Neural Stack performed worse than LSTM in bAbI data.\", \"minor_potential_errors\": \"In Eq. (6), r_{t-1} should be r_t \\n\\nThe LSTM presented in Section 3.2 is not the common one. Normally, there should be x_t term in Eq. (3) and h_t=g_{o,t}*\\\\tanh(r_t) in Eq. (6). The author should follow the common LSTM formulas (which may lead to different proofs) or include reference to their LSTM version. \\n\\n[1] Yogatama et al. Memory Architectures in Recurrent Neural Network Language Models. ICLR\\u201918 \\n\\n[2] Joulin et al. Inferring algorithmic patterns with stack-augmented recurrent nets. NIPS\\u201915 \\n\\n[3] Graves et al. Neural Turing Machines. arXiv preprint arXiv:1410.5401 (2014).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Taxonomy is not illuminating\", \"review\": \"The authors propose a review-style overview of memory systems within neural networks, from simple RNNs to stack-based memory architectures and NTM / MemNet-style architectures. They propose some reductions to imply how one model can be used (or modify) to simulate another. They then make predictions about which type of models should be best on different types of tasks.\\n\\nUnfortunately I did not find the paper particularly well written and the taxonomy was not illuminating for me. 
I actually felt, in the endeavor of creating a simple taxonomy the authors have created confusing simplifications, e.g.\\n\\n\\\"LSTM: state memory and memory of a single external event\\\"\\n\\nto me is mis-leading as we know an LSTM can compress many external events into its hidden units. Furthermore the taxonomy did not provide me with any new insights or display a prediction that was actually clairvoyant. I.e. it was clear from the outset that a memory network (say) will be much better at bAbI than a stack-augmented neural network. It would be more interesting to me, for example, if the paper could thus formalize why NTMs & DNCs (say) do not outperform LSTMs at language modeling, for example. I found the reductions somewhat shady, e.g. the RAM simulation of a stack is possible, however the model could only learn the proposed reduction if the number of write heads was equal to the number of memory slots --- or unless it had O(N) thinking steps per time step, where N is the number of memory slots, so it's not a very realistic reduction. You would never see a memory network, for example, simulating a stack due to the fixed write-one-slot-per-timestep interface.\", \"nit\": \"I'm not sure the authors should be saying they 'developed' four synthetic tasks, when many of these tasks have previously been proposed and published (counting, copy, reverse copy).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
SJlpM3RqKQ
Expanding the Reach of Federated Learning by Reducing Client Resource Requirements
[ "Sebastian Caldas", "Jakub Konečný", "Brendan McMahan", "Ameet Talwalkar" ]
Communication on heterogeneous edge networks is a fundamental bottleneck in Federated Learning (FL), restricting both model capacity and user participation. To address this issue, we introduce two novel strategies to reduce communication costs: (1) the use of lossy compression on the global model sent server-to-client; and (2) Federated Dropout, which allows users to efficiently train locally on smaller subsets of the global model and also provides a reduction in both client-to-server communication and local computation. We empirically show that these strategies, combined with existing compression approaches for client-to-server communication, collectively provide up to a 9.6x reduction in server-to-client communication, a 1.5x reduction in local computation, and a 24x reduction in upload communication, all without degrading the quality of the final model. We thus comprehensively reduce FL's impact on client device resources, allowing higher capacity models to be trained, and a more diverse set of users to be reached.
[ "reduction", "federated learning", "communication", "reach", "global model", "users", "local computation", "client resource requirements" ]
https://openreview.net/pdf?id=SJlpM3RqKQ
https://openreview.net/forum?id=SJlpM3RqKQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkeodeLNlV", "H1gklIwgAX", "HJxbjnnJAQ", "HkgOK2nk0X", "SJxeXnnJAX", "HJl3ehnkAX", "ryg0Jo2kRQ", "Bke_mOmC2m", "ByeY8Xq63Q", "SJek5lVo3m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544999027285, 1542645222532, 1542601881138, 1542601855809, 1542601751568, 1542601715679, 1542601445753, 1541449759862, 1541411665484, 1541255303181 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1311/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1311/Authors" ], [ "ICLR.cc/2019/Conference/Paper1311/Authors" ], [ "ICLR.cc/2019/Conference/Paper1311/Authors" ], [ "ICLR.cc/2019/Conference/Paper1311/Authors" ], [ "ICLR.cc/2019/Conference/Paper1311/Authors" ], [ "ICLR.cc/2019/Conference/Paper1311/Authors" ], [ "ICLR.cc/2019/Conference/Paper1311/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1311/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1311/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"This paper focuses on communication efficient Federated Learning (FL) and proposes an approach for training large models on heterogeneous edge devices. The paper is well-written and the approach is promising, but all reviewers pointed out that both novelty of the approach and empirical evaluation, including comparison with state-of-art, are somewhat limited. We hope that suggestions provided by the reviewers will be helpful for extending and improving this work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"A well-written paper addressing an important problem, but somewhat limited novelty and empirical evaluation\"}", "{\"title\": \"We thank all reviewers for their suggestions. We think a common misunderstanding among the reviews is that they don\\u2019t fully recognize some aspects of Federated Learning.\", \"comment\": \"We thank all reviewers for their suggestions and helping us see how the paper can be improved.\\n\\nWe think the common misunderstanding among the reviews is that they don\\u2019t fully recognize some aspects and challenges of Federated Learning (FL). We provide individual responses of why some of the reviewers\\u2019 suggestions are infeasible in FL, and explain other concerns.\\n\\nIn addition, we have discovered a minor flaw in how we explained Federated Dropout in the context of convolutional layers (unnoticed by the reviewers). Additional change: We have made an improvement with respect to Federated Dropout applied to convolutional layers. Previously, we used it similarly as in the standard dropout, which did not let us realize space savings. In the updated version, we drop whole filters, which leads to both computational and communication savings. We repeated the experiments, and the conclusions still hold.\\n\\nWe thank all the reviewers for their comments highlighting the paper is overall well written!\"}", "{\"title\": \"Thank you for your constructive feedback. Please find answers to the specific concerns below (Part 1)\", \"comment\": \"Thank you for your feedback, helping us see which parts of our contributions are not getting across clearly enough. 
Please find answers to the specific concerns below:\\n\\n- \\u201cRandomly dropping coefficients as suggested in this paper seems odd to me...\\u201d\", \"answer\": \"No, the experiment in Fig 4 explores only the effect of Federated Dropout without other changes. The combination of all proposed ideas is in Figure 5.\"}", "{\"title\": \"Thank you for your constructive feedback. Please find answers to the specific concerns below (Part 2)\", \"comment\": [\"\\u201c...why with some amounts of dropout, the accuracy may improve but at a slower pace?\\u201d\"], \"answer\": \"In practice, using compression and Federated Dropout will make the rounds complete faster. Thus, without access to an actual production deployment, it is generally impossible to say what will best in terms of runtime. Therefore, we think the number of rounds is the best \\u201cfair\\u201d comparison. At the same time, note that slightly longer runtime would be a welcome price to pay for higher final accuracy. We see this point is not clear in the paper and we will add a remark on this.\"}", "{\"title\": \"We thank the reviewer for their thorough review. However, we think the review does not fully recognize the challenges of FL (Part 1)\", \"comment\": \"We thank the reviewer for their thorough review and for highlighting that the paper is well written. However, we think the review does not fully recognize the challenges of FL, and consequently misunderstands the nature (and therefore novelty) of our techniques. Please see a detailed explanation below.\\n\\nThe first point we want to address is the (reported lack of) novelty of the following two contributions:\\n1) The lossy compression of the model sent from server to clients (the review points to other related works).\\n2) Federated Dropout, which the review mentions can be seen as a \\u201c\\u2018coordinate descent\\u2019 type of a technique\\u201d.\", \"let_us_address_the_two_in_turn\": \"1) We are not aware of previous work (and please correct us if we have missed something) that compresses the *state of a model* being trained when such compression has to be done repeatedly throughout the iterative training procedure and in a data-independent fashion. Techniques such as DeepCompression modify the whole training procedure, are data dependent, and produce one final compact model (i.e. compression is performed once). As such, not only do they become infeasible in the setting of FL (no data is available on the server), but they are not directly comparable with our method. Note that we do call this out in the last paragraph of Section 2 in the original submission, and highlight it could be *compatible* with the overall objective of FL. A proper exploration of such an idea, however, would likely deserve a complete paper.\\nFurthermore, the idea of using Kashin\\u2019s representation can be of independent interest. We are not aware of any example of this idea being practically used in Machine Learning and, in the Appendix, we show its relationship to some recent theoretical results.\\n\\n2) Claiming that Federated Dropout can be seen as coordinate descent, or that it can be reduced to subsampling gradients, is incorrect. In each client, we are not computing partial derivatives of the global model, but the full gradients of a smaller, and different, model. Furthermore, several SGD steps are taken for each local model. 
The facts that (a) by design of the procedure, we can then map these updates to the larger global model, and that (b) performing training this way leads to savings both in communication and local computation, are our key insights. We are not aware of this conceptual idea being addressed in previous literature. Finally, we do (optionally) use subsampling to further compress the final learned updates (together with basis transform and quantization), but this is complementary to (and not equivalent to) Federated Dropout.\\n\\nIn summary, we believe that not only is the combination of our techniques interesting (as the reviewer points out), but that each individual technique does indeed bring novel ideas that address challenges where there is no state of the art at all.\"}", "{\"title\": \"We thank the reviewer for their thorough review. However, we think the review does not fully recognize the challenges of FL (Part 2)\", \"comment\": \"The second point we want to address is our lack of comparisons against previous existing work:\\n\\n1) Comparison with QSGD or Terngrad: We did not compare with these for two reasons. \\na) These methods were proposed for compression of gradient updates. In particular, the Terngrad paper argues for using the empirical distributions of the coefficients of such gradients. Even though those arguments would not directly apply to our setting, we could probably still use it for the Client-to-Server compression. However, we do not see a good reason why the proposal would be useful for compressing the state of the model being trained (i.e. Server-to-Client), which is the central concern of our paper.\\nb) We performed a series of preliminary experiments where we compressed a variety of random vectors using QSGD and other techniques. The results of these small experiments suggested that in the tradeoff between accuracy and representation size, (I) uniform quantization was dominated by QSGD, and (II) QSGD was in turn dominated by the combination of Kashin\\u2019s representation and uniform quantization.\\n\\nWe are happy to improve our Related Work section but, unfortunately, the rebuttal period will not be enough to fully recreate experiments using QSGD and Terngrad. What we could do in the time given is add the results of the simple experiments we mention above. We thus ask the reviewer, in light of our previous reasoning and the findings of our preliminary results, whether they consider the full comparison necessary, or whether adding the simpler evaluation would be sufficient.\\n\\n2) Comparison with HALP: As far as we can see, the ideas introduced in that paper are largely compatible with our proposed methods (particularly regarding how we compute gradients locally) but would not replace them. We were previously unaware of this paper though, and we will add an appropriate reference to it.\\n\\n3) Comparison with https://arxiv.org/abs/1610.05492: We clearly call out that we build on that work, and extend in two significant aspects. First, we introduce the use of Kashin\\u2019s representation (novel in ML in general) to further improve efficiency of uniform quantization. Second, we show how we can use the techniques in reducing Server-to-Client communication as well.\"}", "{\"title\": \"We thank the reviewer for their comments and proceed to address the three points they raised.\", \"comment\": \"We thank the reviewer for their comments and for highlighting the relevance of our work for the broader distributed learning community. 
We proceed to address the three points you raised:\\n\\n1) The particular observation you mention is in line with previous empirical observations of the effect of (standard) dropout. We don\\u2019t analyse this effect, however, as we are not aware of any rigorous argument of why standard dropout works in the first place. We understand dropout as a heuristic that has proven to be incredibly useful and is backed by some interesting intuitions, but not as a principled approach for which we can prove convergence. \\n\\n2) The ZipML framework proposes using lower precision at various parts of the training pipeline. Many of these ideas are orthogonal, yet compatible with what we propose. The parts that can be seen as alternatives to our methods (i.e. compressing gradients) are best summarized in algorithms such as QSGD or Terngrad (also called out by another reviewer). We copy our response here: \\n\\nWe did not compare with these for two reasons. \\na) These methods were proposed for compression of gradient updates. In particular, the Terngrad paper argues for using the empirical distributions of the coefficients of such gradients. Even though those arguments would not directly apply to our setting, we could probably still use it for the Client-to-Server compression. However, we do not see a good reason why the proposal would be useful for compressing the state of the model being trained (i.e. Server-to-Client), which is the central concern of our paper.\\nb) We performed a series of preliminary experiments where we compressed a variety of random vectors using QSGD and other techniques. The results of these small experiments suggested that in the tradeoff between accuracy and representation size, (I) uniform quantization was dominated by QSGD, and (II) QSGD was in turn dominated by the combination of Kashin\\u2019s representation and uniform quantization.\\n\\n3) The proof of this is elementary, and we do not want to appear to claim it is a novel insight. We are happy to provide explicit reference to an existing, more general argument, e.g., one in Suresh et al. or in Konecny and Richtarik, both of which we cite.\\n\\nIf you have other concrete comments on what would strengthen the paper, we will be more than happy to incorporate them.\"}", "{\"title\": \"The paper presents some new approaches for communication efficient Federated Learning that allows for training of large models on heterogeneous edge devices.\", \"review\": \"The paper presents some new approaches for communication efficient Federated Learning (FL) that allows for training of large models on heterogeneous edge devices. In FL, heterogeneous edge devices have access to potentially non-iid samples of data points and try to jointly learn a model by averaging their local models at a parameter server (the cloud). As the bandwidth of the up/downlink-link may be limited communication overheads may become the bottleneck during FL. Moreover, due to the heterogeneity of the hardware, large models may be hard to train on small devices. 
Due to that, there are several recent approaches that aim to minimize communication via methods of quantization, which also aim to allow for smaller models via methods of compression and model quantization.\\n\\nIn this paper, the authors suggest a combination of two methods to reduce communication and allow for large model training by 1) using a lossy compressed model when that is communicated from the cloud to the edge devices, and 2) subsampling the gradients, a form of dropout, at the edge device side that allows for an overall smaller model update. The novelty of either of those techniques is quite limited as individually they have been suggested before, but the combination of both of them is interesting. \\n\\nThe paper is overall well written, however there are two aspects that make the contribution lacking in novelty. First of all, the presented methods are a combination of existing techniques, that although interesting to combine together, are neither theoretically analyzed nor extensively tested. The model/update quantization technique has been used in the past extensively [eg 1-3]. Then, the \\u201cfederated dropout\\u201d can be seen as a \\u201ccoordinate descent\\u201d type of a technique, i.e., randomly zeroing out gradient elements per iteration. \\n\\nSince this is a more experimental paper, the setup tested is quite limited in its comparisons. For example, one would expect to see extensive comparisons with methods for quantizing gradients, eg QSGD, or Terngrad, and combinations of that with DeepCompression. Although the authors do make an effort to experiment with a different set of hyperparameters (dropout probability, quantization levels, etc), a comparison with state of the art methods is lacking.\\n\\nOverall, although the combination of the presented ideas has some merit, the lack of extensive experiments that would compare it with the state of the art is not convincing, and the overall effectiveness of this method is unclear at this point.\\n\\n[1] https://arxiv.org/pdf/1510.00149.pdf\\n[2] https://arxiv.org/pdf/1803.03383.pdf\\n[4] https://arxiv.org/pdf/1610.05492.pdf\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"The paper adress the ressource issue of federated learning by introducing a lossy compression on the global model and what they coin a Federated Dropout. While not completely familiar with compression schemes, I saw a couple of statements requiring formal support.\", \"review\": \"The paper tackles a major issue in distributed learning in general (and not only the federated scheme), which is communication bottleneck.\\n\\nI am not fully qualified to judge and would rather listen to the opinion of more qualified reviewers, I was annoyed by some aspects of the paper:\\n\\n1) many claims required formal support (proofs), as an example: \\\"more aggressive dropout rates ted to slow down the convergence rate of the model, even if they sometimes result in a higher accuracy\\\" is a statement that would benefit from analyzing the dropout out effect on convergence, something that wouldn't be hard to do given the extensive theoretical toolbox on distributed optimization.\\n\\n2) no comparison with other compression schemes (see e.g. 
Alistarh et al.'s ZipML (NIPS or ICML 2017) and followups)\\n\\n3) proving an unbiased-ness guarantee out of the Probabilistic quantization (section 3.1) would have been a minimal requirement in my opinion.\\n\\nI encourage the authors to further expand those points, but would happily lighten-up my skepticism if more qualified reviewers say that we do not need such guarantees as the one in point 1 and 3. (the few compression papers I know provide that)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"This paper focuses on lossy compression techniques and federated dropout strategies to control the update burden that\\u2019s needed to coordinate nodes in a federated learning setting.\", \"review\": \"The paper is well written and addresses an interesting problem. Overall, I do find the federated dropout idea quite interesting. As for the lossy compression part, I am a bit skeptical on its application for this problem. In general, I believe that the manuscript could greatly benefit from answering the questions that I am raising below. It would certainly help me better appreciate the contributions of this work.\\n\\nThe lossy aspect of the compression inevitably introduces performance downgrades. However, compression/communication systems are designed to make sure that the information dropped is not important for the task at hand (e.g., high frequencies that are not perceived by our eyes in the spatial domain are typically dropped when compressing images through zig zag scanning after transformation). Randomly dropping coefficients as suggested in this paper seems odd to me (the subsampling technique that is used). Can you justify this approach? The manuscript does hint that this approach provides lukewarm results. Could there be a better approach that focuses on parts of the model that deemed \\u201cless\\u201d important if a notion of coefficient importance can be derived? \\n\\nCan you emphasize more the benefits of compression and federated drop out, versus training a low capacity model with less parameters? The introduction refers to the low capacity approach as a naive model. Could this be compared experimentally? This would help better appreciate the benefits of the federated dropout strategies that are proposed here. In the experiments, could you explain why increases in q (quantization steps) seems to lead to limited or marginal accuracy improvements? \\n\\nFor the results shown in Figure 4, did you also use any form of subsampling and quantization? Also, do you have a justification for why with some amounts of dropout, the accuracy may improve but at a slower pace (pretty much the punch line of these experiments)? It is an interesting finding but it is counter intuitive and requires explanations in my view. \\n\\nOn the communication cost experiments, can you explain precisely how did you compute these reduction factors? Did you tolerate some form of accuracy degradation? Also, did you consider the fact that more \\\"rounds\\\" are needed to get to a target accuracy level? Is there a cost associated with these additional rounds and was that cost taken into consideration? Adding clarity on this would certainly help.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SJz6MnC5YQ
Deep Graph Translation
[ "Xiaojie Guo", "Lingfei Wu", "Liang Zhao" ]
The tremendous success of deep generative models on generating continuous data like image and audio has been achieved; however, few deep graph generative models have been proposed to generate discrete data such as graphs. The recently proposed approaches are typically unconditioned generative models which have no control over modes of the graphs being generated. Differently, in this paper, we are interested in a new problem named Deep Graph Translation: given an input graph, the goal is to infer a target graph by learning their underlying translation mapping. Graph translation could be highly desirable in many applications such as disaster management and rare event forecasting, where the rare and abnormal graph patterns (e.g., traffic congestions and terrorism events) will be inferred prior to their occurrence even without historical data on the abnormal patterns for this specific graph (e.g., a road network or human contact network). To this end, we propose a novel Graph-Translation-Generative Adversarial Networks (GT-GAN) which translates one mode of the input graphs to its target mode. GT-GAN consists of a graph translator where we propose new graph convolution and deconvolution layers to learn the global and local translation mapping. A new conditional graph discriminator has also been proposed to classify target graphs by conditioning on input graphs. Extensive experiments on multiple synthetic and real-world datasets demonstrate the effectiveness and scalability of the proposed GT-GAN.
[ "graphs", "input graphs", "tremendous success", "deep generative models", "continuous data", "image", "audio", "discrete data" ]
https://openreview.net/pdf?id=SJz6MnC5YQ
https://openreview.net/forum?id=SJz6MnC5YQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rJgRcajgl4", "r1ek3L5VkV", "Bkez3clqRX", "ByliURT_Rm", "HJeWPj6eA7", "SyeX3DBwpm", "ryly9wHvTQ", "Hkex58HP6Q", "SJgG2EBDpQ", "SyeBGNHD6m", "Hyeo_6oEaQ", "SJlugpiVaX", "H1lkUZCQp7", "SJlERxAm67", "SkemSsMfpQ", "ryl8Oq-53X", "Hyx-b0Vtnm", "S1l4P2HuhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_review" ], "note_created": [ 1544760725772, 1543968422538, 1543273130076, 1543196242620, 1542671192913, 1542047658518, 1542047623235, 1542047368327, 1542046890057, 1542046733190, 1541877107408, 1541876975981, 1541820743254, 1541820619712, 1541708602993, 1541180013645, 1541127672621, 1541065819639 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1309/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1309/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/Authors" ], [ "ICLR.cc/2019/Conference/Paper1309/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1309/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1309/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1309/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"Although one reviewer recommended accepting this paper, they were not willing to champion it during the discussion phase and did not seem to truly believe it is currently ready for publication. Thus I am recommending rejecting this submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"No reviewer was willing to champion this work\"}", "{\"title\": \"Wow, that's a lot of replies\", \"comment\": \"Thanks for the comments. I still think the work is interesting and the comments and improvements to the paper help. I'm unfortunately not convinced that it's yet good enough to go up to the next category.\"}", "{\"title\": \"Recent modifications of Paper1309\", \"comment\": \"Dear Reviewer,\\n\\nThank you very much for your new and previous comments. We have revised our paper again in order to address all of them in the paper. The modifications are listed as followings:\\n\\n1. For graph deconvolution, we have modified and reorganized the content. The Section 3.2.2 on \\u201cGraph Deconvolution\\u201d has been reorganized to two subsections \\u201cnode-to-edge deconvolution\\u201d and \\u201cedge-to-edge deconvolution\\u201d. We also extended them to make the description on deconvolution operations clearer and more comprehensive.\\n\\n2. For graph deconvolution, we have also added a new figure and refined the equations\\u2019 descriptions. Figure 3 is added to describe the mechanism of our proposed deconvolution operators as well as their correlation to the convolution operations. 
Equation 6, Equation 7, and their descriptions have also been revised to make them clearer and concrete. Specifically, Figure 3 describes how the node representation and edge representation are respectively decoded by our deconvolution layers, while Equation 6 and Equation 7 describe how to aggregate the decoded information into the final weighted adjacent matrix.\\n\\n3. We have referred to all the figures in the body of text. \\n\\n4. We have added statements to describe how to introduce random noises in the whole architecture, see in the 2nd paragraph of Section 3.1 in Page 4.\\n\\n5. We have added statements of describing the reason to use L1 loss and how L1 loss is applied, please see in the paragraph before Equation 2 in Page 4. Additionally, we also added the statements of how L1 norm and GAN loss function jointly, see in the paragraph after Equation 2.\\n\\n6. We have added the statements why the metrics are chosen to evaluate the scale-free dataset, please see in the 2nd Paragraph of Section 4.2.2.\\n\\nAdditionally, to improve the reproducibility of the proposed methodologies and experiments, we have already released our code in https://github.com/anonymous1025/Deep-Graph-Translation-. More architecture parameters are also provided in Appendix E.\\n\\nThank you very much again for the comments and please let us know if there are any other issues.\"}", "{\"title\": \"Re: good feedback\", \"comment\": \"Thank you!\"}", "{\"title\": \"good feedback\", \"comment\": \"Dear Authors thank you for your extensive feedback ~\\n\\nI am able to better understand your paper ~ and I believe it would be beneficial to have it at conference.\", \"i_am_thus_changing_my_rating\": \"Marginally below acceptance threshold ==> Marginally ABOVE acceptance threshold\\n\\nThank you!\\n\\n<AnonReviewer3>\"}", "{\"title\": \"Some statement explanations and modifications: Part III\", \"comment\": \"----------------------\", \"q\": \"Hard to parse \\u201cwe randomly add another kjEj edges on it to form the target graph\\u201d\", \"a\": \"We have modified it as: \\u201cwe randomly add kjEj edges on it to form the target graph\\u201d\\n\\nWe hope we were able to explain everything clearly to your satisfaction, please let us know if there are any more open points.\\n\\nThank you once again!\"}", "{\"title\": \"Some statement explanations and modifications: Part II\", \"comment\": \"----------------------\", \"q\": \"It\\u2019s not clear in equation 1 how you represent G_X. Only much later is it mentioned about adjacency matrix.\", \"a\": \"Yes, the graphs here are represented as the weighted adjacent matrix of a graph.\"}", "{\"title\": \"Some statement explanations and modifications: Part I\", \"comment\": \"Next, we would like to reply to more specific comments.\\n\\n----------------------\", \"q\": \"\\u201cour GT-GAN is able to provide a scalable (i.e., O(N2)) algorithm that can generate general graphs.\\u201d - what sizes have you tested this up to?\", \"a\": \"We have test size up to 300.\"}", "{\"title\": \"Explanations for concerns\\uff1aPart II\", \"comment\": \"-------------------------\", \"q\": \"The real-world dataset seems rather odd and not fully explored. Given that you have this data it is surprising that you didn\\u2019t complete the loop by showing that you could take data from before a hack attempt and show that you could predict that in the future you had a hack attempt. 
Perhaps this is due to the fact that you didn\\u2019t have the ground-truth data in here to show a graph going from good to bad? But if not, it would have been good to have shown, either through this data or some other, how your approach does match in with real-world results.\", \"a\": \"(1) The real-world dataset and its application are authoritative and motivated this research. The dataset is recent, authoritative, and provided by the prestigious \\u201cLos Alamos National Laboratory\\u201d (https://csr.lanl.gov/data/cyber1/ ). The research problem behind this dataset raised up their needs to predict the future hacking behavior of a user with no historical hacking behavior has been a highly practical but prohibitively challenging. Such application strong motivates this new domain of graph translation where we transfer the hacking behavior from those users with historical hacking behavior in different network structure.\\n(2) The dataset has been fully explored by a loop. We indeed have predicted the hack attempt in the future and validated it against ground truth with accuracy metrics such as accuracy, F1, Precision, and recall explained in the last paragraph of Section 4.2.3. Specifically, we use half of users\\u2019 good graphs (data before attack) and real hacker graphs (data after attack) to train the translator. We then generate graphs with this translator for the other users. To evaluate indirectly, we use the generated graphs and good graphs to train an attacker prediction model (classifier) for each user, if it can recognize his real hacker graphs, this prediction model works in real-world and our translator is good.\\n(3) We have also shown the case studies on \\u201ca graph going from good to bad\\u201d. Specifically, in Figures 7 and 8 in Appendix C, we have shown: 1) the \\u201cgood\\u201d graph, 2)the \\u201chacked\\u201d graph generated by our methods, and 3) the ground-truth \\u201chacked\\u201d graph. And the results show that our methods can well predict the hack attempts. During our experiments, we have observed numerous such case studies and put them as representatives.\"}", "{\"title\": \"Explanations for concerns\\uff1aPart I\", \"comment\": \"\", \"dear_reviewer\": \"Thanks very much for your comments and questions. We would like to first explain your concerns.\\n\\n-------------------------\", \"q\": \"Your approach seems to be \\u2018fixed\\u2019 in the set of nodes which are in both in the input and output graphs - needing to be the same. This would seem significantly limiting as graphs are rarely of the same node set.\", \"a\": \"Yes, we admit that our model has a limitation in dealing with the variable-size input graphs. This limitation largely exists in the existing deep graph learning methods, especially those based on graph convolution. This problem itself is a challenging open problem that requires significant future efforts in the community. However, the focus of our work in this paper is the translation mapping establishment, optimization, and evaluation. We are indeed considering one of our next extensions to deal with this problem. Thanks for the comments.\"}", "{\"title\": \"Clarifications of some points: Part I\", \"comment\": \"\", \"dear_reviewer\": \"Thank you very much for your comments and suggestions. 
We would like to answer your questions in detail as follows:\\n \\n-----------------------------------------------------------------------\", \"q\": \"It is also not clear what are the assumptions made on the connectivity of the input graph and the target graph. Do we know how does the connectedness of the input graph affect the translation quality in the case of strongly connected directed graphs? Or what happens if the target graph has a strong connectivity? Towards this, how does the computational complexity scale wrt to the connectedness?\", \"a\": \"(1) Similar to all the existing graph deep generative learning methods for generic graphs, we do not have additional assumptions on the graphs. The domain of graph deep generative learning methods typically do not require to distinguish or preprocess specific topological types of graphs before applying it, no matter it is strongly- or weakly- connected graph, complete graph, planar graph, scale-free graph, or graphs that have other specific patterns. This is actually one of the core advantages of deep learning based models where the graph patterns are not extracted or pre-identified manually by the human but automatically discovered by the end-to-end deep models.\\n(2) This paper has given the time complexity in the worst case: O(n^2) as shown in 3.4. The worst case happens when the graph is a complete graph. The time complexity of a strongly-connected graph will not be worse than that.\"}", "{\"title\": \"Clarifications of some points: Part II\", \"comment\": \"-----------------------------------------------------------------------\\nQ\\uff1aA lot of clarity is required on the choice of evaluation metric; for example, choice of distance measure? What is the L1 norm applied on?\", \"a\": \"As explained in Section 4.2.4 \\u201cResults on Scale-Free Graphs\\u201d, the \\u201cInf\\u201d in Tabel 1 represents the distance more than 1000.\\n\\nWe really hope that we have explained every confused point clearly and please let us know if there are any other points.\\nThank you once again for your reviews.\", \"answer_about_l1_norm\": \"(1) L1 norm is applied to the weight adjacent matrix of the graph. Our methodology is achieved by a trade-off between L1 loss and adversarial loss (GAN-D). Specifically, L1 makes generated graphs share the same rough outline of sparsity pattern like generated graphs, while under this outline, adversarial loss allows them to vary to some degree.\\n(2) L1 norm is commonly used in GAN in relevant domains, e.g., in image-translation domain, for example, reference [1] (with 600+ citations) and reference [6] (with 1300+ citations). They have done extensive experiments to show the advantage of such a strategy. (3) The experiment demonstrates its effectiveness. Specifically, the proposed GT-GAN that uses L1 norm outperformed all the other comparison methods shown in Table 2,3 and 4.\\n\\n-------[2] Schieber, T. A., Carpi, L., D\\u00edaz-Guilera, A., Pardalos, P. M., Masoller, C., & Ravetti, M. G. (2017). Quantification of network structural dissimilarities. Nature Communications, 8, 13928.\\n-------[3] Bauckhage, C., Kersting, K., & Hadiji, F. (2015, July). Parameterizing the Distance Distribution of Undirected Networks. In UAI (pp. 121-130).\\n-------[4] Chiang, S., Cassese, A., Guindani, M., Vannucci, M., Yeh, H. J., Haneef, Z., & Stern, J. M. (2016). Time-dependence of graph theory metrics in functional connectivity analysis. NeuroImage, 125, 601-615.\\n-------[5] You, J., Ying, R., Ren, X., Hamilton, W. 
L., & Leskovec, J. (2018). GraphRNN: A Deep Generative Model for Graphs. arXiv preprint arXiv:1802.08773.\\n-------[6] Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. arXiv preprint.\\n \\n-----------------------------------------------------------------------\\nQ\\uff1aI did not completely follow the arguments towards directed graph deconvolution operators. There is lack of clarity and the explanation seems lacking in parts in this particular section; especially since this is the key contribution of this work.\", \"q\": \"Typo:. The \\u201cInf\\u201d in Tabel 1\"}", "{\"title\": \"Clarifications of confused points, and the philosophy behind the architecture: Part II\", \"comment\": \"-----------------------------------------------------------------\", \"q\": \"Four, could you please explain the setting for the \\u201cgold standard\\u201d experiment. I'd have to assume, for instance, you train a GNN in a supervised way by using both source (non-suspicious) and target (suspicious) behavior, and label accordingly? That said I am not 100% sure of this problem setting.\", \"a\": \"Yes, \\u201cgold standard\\u201d method is directly trained based on real target graphs instead of generated ones. Specifically, as you know, all the comparison methods in our paper are generative models which generate graphs, and our experiment is to evaluate how real the generated graphs are. One way to evaluate this is by \\u201cindirect evaluation\\u201d, where we use the graphs generated by different comparison methods as training data to train a classifier based on KCNN (see reference (Nikolentzos, et al.,2017) in the paper), and then compare which model generates \\u201cmore-real graphs\\u201d by testing their corresponding trained classifier on test set which consists of real graphs. In \\u201cgold standard\\u201d method, it directly uses the real graphs to train the classifier (still based on KCNN), so it is expected to get the best performance. Therefore, \\u201cgold standard\\u201d method acts as the \\u201cbest-possible-performer\\u201d, and is used as a benchmark to evaluate all the different generative models on how \\u201creal\\u201d the graphs they can generate: the closer (and better) their performance is to the \\u201cgold standard\\u201d one, the \\u201cmore real\\u201d their generated graphs are.\\n\\nWe hope we were able to answer everything to your satisfaction, please let us know if there are any more open points.\\n\\nThank you once again!\"}", "{\"title\": \"Clarifications of confused points, and the philosophy behind the architecture: Part I\", \"comment\": \"Dear Reviewer:\\n\\nThanks very much for your comments and questions. We would like to explain them in detail and modify our paper accordingly.\\n\\n----------------------------------------------------------------------------\", \"q\": \"how exactly do you do a L1 loss on graphs? I'd have to assume the topology of the graph is unchanged between Gy and T(Gx) ~ and then maybe take L1 of weight matrix? But then is this general enough ~ given your stated goal of modeling different topologies? Either ways, more explanation / and perhaps equations to clarify this loss would be very helpful.\", \"a\": \"(1) L1 norm is applied to the weight matrix. 
Our methodology is still general enough which is achieved by a trade-off between L1 loss and adversarial loss (GAN-D), which jointly enforces Gy and T(Gx) to follow a similar topological pattern but may not necessarily the same. Specifically, L1 makes T(Gx) share the same rough outline of sparsity pattern like Gy, while under this outline, adversarial loss allows the T(Gx) to vary to some degree.\\n(2) Combining L1 loss and adversarial loss is well-recognized and validated. Works on image-translation have proposed and utilized L1 loss and adversarial loss jointly in GAN, for example, reference [1] (with 600+ citations) and reference [2] (with 1300+ citations). They have done extensive experiments to show the advantage of such a strategy. Furthermore, in our experiments, we found the performance when using L1 loss and adversarial loss jointly is better than using either of them.\\n------[1] Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., & Efros, A. A. (2016). Context encoders: Feature learning by inpainting. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 2536-2544).\\n------[2] Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. arXiv preprint.\", \"the_logic_behind_edge_to_edge_convolution\": \"(1) Generally speaking, the purpose of edge-to-edge convolution layers is to aggregate the neighborhood information of nodes. Specifically, the n-th edge-to-edge convolution layer aggregates the n-th hop connection information of nodes related to each edge.\\n(2) Different from image convolution, for each hidden channel, we have two filters, one is a column vector while the other is a row vector. To learn the nth hop information of edge <i,j>, row filter aggregates all the (n-1)-th hop information of outgoing edges of node i and column filter aggregates all the (n-1)-th hop information of incoming edges of node j. \\n(3) Edge-to-edge layers are important to extract some higher-level graph features, e.g., the n-hop reachability from a node to another; n-hop in-degree and out-degree, and many other higher-order patterns.\", \"different_blocks_in_the_graph_translator\": \"Translator consists of an encoder, a decoder, and a skip network. \\n(1) Encoder. The encoder does n-hop edge information aggregation from the input graphs using edge-to-edge layers and then uses the edge-to-node layer to learn the latent representation of nodes. \\n(2) Decoder. Reversely, the graph decoder first uses node-to-edge layers to decode the node representations to aggregated edge information and then further decode that into adjacency matrix, which is the final generated graphs. \\n(3) Skip-network. 
Over the encoder-decoder framework, we also added skip-network (the black line of Fig.1) which can directly map the edge aggregation information in every hop from the input graph to the output graph so that can preserve the local information in every resolution (i.e., every hop).\\n \\n----------------------------------------------------------------------------\"}", "{\"title\": \"Novel idea but requesting clarifications.\", \"review\": \"The paper presents a novel idea of generating discrete data such as graphs that is conditional on input data to control the graph structure that is being generated.\\n\\nGiven an input graph, the proposed method infers a target graph by learning their underlying translation mapping by using new graph convolution and deconvolution\\nlayers to learn the global and local translation mapping.\\n\\nThe idea of learning generic shared common and latent implicit patterns across different graph structure is brilliant.\\n\\nTheir method learns a distribution over graphs conditioned on the input graph whilst allowing the network to learn latent and implicit properties. \\n\\nThe authors claim that their method is applicable for large graphs. However, it seems the experiments do not seem to support this. \\n\\nIt is not clear how the noise is introduced in the graphs. I would have expected to see some analysis and results on the translation quality over systematic noise applied to the input graph. \\n\\nIt is also not clear what are the assumptions made on the connectivity of the input graph and the target graph.\\nDo we know how does the connectedness of the input graph affect the translation quality in the case of strongly connected directed graphs? Or what happens if the target graph has a strong connectivity? Towards this, how does the computational complexity scale wrt to the connectedness?\\n\\nA lot of clarity is required on the choice of evaluation metric; for example choice of distance measure ? What is the L1 norm applied on? \\n\\nI did not completely follow the arguments towards directed graph deconvolution operators. There is lack of clarity and the explanation seems lacking in parts in this particular section; especially since this is the key contribution of this work\", \"typo\": \". 
The \\u201cInf\\u201d in Tabel 1\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Good problem setting, interesting results, needs more clarifications.\", \"review\": \"This paper addresses the important / open problem of graph generation, and specifically in a conditional/transductive setting.\\n\\nGraph generations is a new topic, it is difficult, and has many important applications, for instance generating new molecules for drug development.\\n\\nAs stated by the authors, this is a relatively open field: there are not many papers in this area, with most approaches today resorting to domain specific encodinings, or \\\"flattening\\\" of graphs into sequences to then allow for the use recurrence (like in MT); this which per se is an rather coarse approximation to graph topology representations, thus fully motivating the need for new solutions that take graph-structure into account.\\n\\nThe setting / application of this method to graph synthesis of suspicious behaviours of network users, to detect intrusion, effectively a Zero-shot problem, is super interesting.\\n\\nThe main architectural contribution of this paper are graph-deconvolutions, practically a graph-equivalent of CNN's depth-to-space - achieved by means of transposed structural matrix multiplication of the hidden GNN (graph-NN) activation - simple, reasonable and effective.\\n\\nWhile better than most of the baseline methods, the N^2 memory/computational complexity is not bad, but still too high to scale to very large graphs.\\n\\nResults are provided on relatively new tasks so it's hard to compare fully to previous methods, but the authors do make an attempt to provide comparisons on synthetic graphs and intrusion detection data. The authors do published their code on GitHub with a link to the datasets as well.\\n\\nAs previously mentioned in public comments on this forum, some points in the paper are not very clear; specifically regarding the loss function, the definition of \\\"edge-to-edge\\\" convolutions and generally the architectural choice related to the conditional GAN discriminator. Clarifications of these points, and more in general the philosophy behind the architectural choices made, would make this paper a much clearer accept.\\n\\nThank you!\\n\\nps // next my previous public comments, in detail, repeated ...\\n\\n--\\n\\n- the general architecture, and specifically the logic behind the edge-to-edge convolution, and generally the different blocks in fig.1 \\\"graph translator\\\".\\n\\n- how exactly do you do a L1 loss on graphs? I'd have to assume the topology of the graph is unchanged between Gy and T(Gx) ~ and then maybe take L1 of weight matrix? But then is this general enough ~ given your stated goal of modeling different topologies? Either ways, more explanation / and perhaps equations to clarify this loss would be very helpful. \\n\\n- why do you need a conditional GAN discriminator, if you already model similarity by L1? Typically one would use a GAN-D() to model \\\"proximity\\\" to the source-distribution, and then a similarity loss (L1 in your case) to model \\\"proximity\\\" to the actual input sample, in the case of trasductional domains. Instead here you seem to suggest to use L1 and GAN to do basically the same thing, or with significant overlap anyways. This is confusing to me. 
Please explain the logic for this architectural choice.\\n\\n- could you please explain the setting for the \\u201cgold standard\\u201d experiment. I'd have to assume, for instance, you train a GNN in a supervised way by using both source (non-suspicious) and target (suspicious) behaviour, and label accordingly? That said I am not 100% sure of this problem setting.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A few of comments and request for clarifications\", \"comment\": \"Dear Authors, thank you for your submission.\\n\\nThe problem setting is very interesting, especially the problem of malicious graph activity synthesis \\\"forecast and synthesize the future potential malicious authentication graphs of the users without any historical malicious behaviors, by the graph translator from normal to malicious graph trained based on the users with historical malicious-behavior records.\\\".\\n\\nThat said, I have few points that need clarity:\\n\\nFirst, the general architecture, and specifically the logic behind the edge-to-edge convolution, and generally the different blocks in fig.1 \\\"graph translator\\\".\\n\\nSecond, how exactly do you do a L1 loss on graphs? I'd have to assume the topology of the graph is unchanged between Gy and T(Gx) ~ and then maybe take L1 of weight matrix? But then is this general enough ~ given your stated goal of modeling different topologies? Either ways, more explanation / and perhaps equations to clarify this loss would be very helpful. \\n\\nThird, and slightly related to the previous point, why do you need a conditional GAN discriminator, if you already model similarity by L1? Typically one would use a GAN-D() to model \\\"proximity\\\" to the source-distribution, and then a similarity loss (L1 in your case) to model \\\"proximity\\\" to the actual input sample, in the case of trasductional domains. Instead here you seem to suggest to use L1 and GAN to do basically the same thing, or with significant overlap anyways. This is confusing to me. Please explain the logic for this architectural choice.\\n\\nFour, could you please explain the setting for the \\u201cgold standard\\u201d experiment. I'd have to assume, for instance, you train a GNN in a supervised way by using both source (non-suspicious) and target (suspicious) behaviour, and label accordingly? That said I am not 100% sure of this problem setting.\\n\\nThank you!\"}", "{\"title\": \"Interesting work with some odd issues on implementation and results\", \"review\": \"The paper presents an approach for translating graphs in one domain to graphs in the same domain using a GAN approach. A graph Translator approach is defined and a number of synthetic data sets and one real-world data set are used to evaluate the approach. Most of the paper is written well, though there are some odd sentence structure issues in places. The paper could do with a thorough check for grammatical and spelling mistakes. For example you miss-spell NVIDIA.\", \"the_main_concerns_with_the_work\": \"1) Equation 2 is used to minimise the distance between graphs from X and graphs in Y. Yet, the main metric which is used to evaluate the paper is this distance. This would seem to give an unfair advantage to your approach. I would also be concerned about the fact that later you use this for stating if a graph represents good or hacker activity. 
If you have drawn translated graphs towards real graphs, how do you know that you haven\\u2019t pulled a good graph closer to a hacker graph? This is more concerning considering work which came out of NIPS which suggested that GAN\\u2019s tend to favour producing similar output rather than spreading it evenly over the domain.\\n\\n2) It isn\\u2019t entirely clear what your results are trying to show. Presumably P, R, AUC and F1 are generated from the results produced from your Discriminator? Were each of the other approaches optimised against your discriminator or not? Also, it is unclear as to what the Gold Standard method is - we\\u2019re only told that its a classifier, but what type and how constructed?\\n\\n3) Your approach seems to be \\u2018fixed\\u2019 in the set of nodes which are in both in the input and output graphs - needing to be the same. This would seem significantly limiting as graphs are rarely of the same node set.\\n\\n4) Although you comment on other graphs approaches being limited to very small graphs, you do not test your approach on graphs with over 150 nodes. These would also seem to be very small graphs in comparison to real-world graphs. Further evaluation on larger graphs would seem to be essential - how long would it take on graphs with 10^6 nodes?\\n\\n5) The real-world dataset seems rather odd and not fully explored. Given that you have this data it is surprising that you didn\\u2019t complete the loop by showing that you could take data from before a hack attempt and show that you could predict that in the future you had a hack attempt. Perhaps this is due to the fact that you didn\\u2019t have the ground-truth data in here to show a graph going from good to bad? But if not it would have been good to have shown, either through this data or some other, how your approach does match in with real-world results.\\n\\nGiven the points above, I would be very concerned on an approach which used the above to identify a future hacking attempt.\", \"some_more_specific_comments_on_the_paper\": [\"\\\"The tremendous success of deep generative models on generating continuous data like image and audio\\u201d - it is not clear what this continuous data is.\", \"Hard to parse : \\u201cwhich barely can be available for the accounts worth being monitored.\\u201d\", \"\\u201cThis requires us to learn the generic distribution of theft behaviors from historical attacks and synthesize the possible malicious authentication graphs for the other accounts conditioning on their current computer networks\\u201d - given that these historical attacks are (hopefully) rare, is there enough data here to construct a model?\", \"Please define GCNN\", \"\\u201cOur GT-GAN is highly extensible where underlying building blocks, GCNN and distance measure in discriminator, can be replaced by other techniques such as (Kipf & Welling, 2017; Arjovsky et al., 2017) or their extensions.\\u201d - this sounds more like a feature of what you have contributed rather than a contribution in its own right.\", \"In the context of synthetic data, what is ground-truth?\", \"Hard to parse \\u201cModern deep learning techniques operating on graphs is a new trending topic in recent years.\\u201d\", \"Hard to parse \\u201cHowever, these methods are highly tailored to only address the graph generation in a specific type of applications such as molecules generation\\u201d\", \"Hard to parse \\u201cExisting works are basically all proposed in the most recent year,\\u201d\", \"\\u201cTypically we focus on learning the 
translation from one topological patterns to the other one\\u201d -> \\u201cTypically we focus on learning the translation from one topological pattern to the other\\u201d\", \"It\\u2019s not clear in equation 1 how you represent G_X. Only much later is it mentioned about adjacency matrix.\", \"Hard to parse \\u201cDifferent and more difficult than graph generation designed only for learning the distribution of graph representations, for graph translation one needs to learn not only the latent graph presentation but also the generic translation mapping from input graph to the target graph simultaneously.\\u201c\", \"Hard to parse \\u201cgraph translation requires to learn\\u201d\", \"Hard to parse \\u201cin most of them the input signal is given over node with a static set of edge and their weights fixed for all samples\\u201d\", \"\\u201cwe propose an graph\\u201d -> \\u201cwe propose a graph\\u201d\", \"Hard to parse \\u201cThe two components of the formula refers to direction filters as talked above\\u201d\", \"Hard to parse \\u201cNext, graph translator requires to\\u201d\", \"\\u201cas shown in Equations equation 7 and Equations equation 6,\\u201d -> \\u201cas shown in Equation 6 and Equation 7\\u201d\", \"Hard to parse \\u201cThe challenge is that we need not only to learn the\\u201d\", \"Figure 2 would seem to need more explanation.\", \"The end of section 3.3 is a bit vague and lacks enough detail to reproduce.\", \"\\u201cour GT-GAN is able to provide a scalable (i.e., O(N2)) algorithm that can generate general graphs.\\u201d - what sizes have you tested this up to?\", \"Hard to parse \\u201cwe randomly add another kjEj edges on it to form the target graph\\u201d\", \"\\u201cThe goal is to forecast and synthesize the future potential malicious authentication graphs of the users without any historical malicious behaviors, by the graph translator from normal to malicious graph trained based on the users with historical malicious-behavior records.\\u201d - This isn\\u2019t entirely clear. Are you trying to create new malicious graphs or show that a current graph will eventually go malicious?\", \"\\u201cAll the comparison methods are directly trained by the malicious graphs without the conditions of input graphs as they can only do graph generation instead of translation.\\u201d - not clear. For the synthetic data sets how did you choose which ones were malicious?\", \"\\u201cGraphRNN is tested with graph size within 150. GraphGMG, GraphVAE is tested within size 10 and RandomVAE is tested on graphs within size 150.\\u201d -> \\u201cGraphRNN and RandomVAE are tested with graph up to size 150. GraphGMG, GraphVAE is tested with graphs up to size 10.\\u201d\", \"\\u201cHere, beyond label imbalance, we are interested in \\u201clabel missing\\u201d which is more challenging.\\u201d - \\u201cmissing labels\\u201d?\", \"\\u201cIn addition, we have also trained a \\u201cgold standard\\u201d classifier based on input graphs and real target\", \"graphs.\\u201d - need to say more about this.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SJG6G2RqtX
Value Propagation Networks
[ "Nantas Nardelli", "Gabriel Synnaeve", "Zeming Lin", "Pushmeet Kohli", "Philip H. S. Torr", "Nicolas Usunier" ]
We present Value Propagation (VProp), a set of parameter-efficient differentiable planning modules built on Value Iteration which can successfully be trained using reinforcement learning to solve unseen tasks, has the capability to generalize to larger map sizes, and can learn to navigate in dynamic environments. We show that the modules enable learning to plan when the environment also includes stochastic elements, providing a cost-efficient learning system to build low-level size-invariant planners for a variety of interactive navigation problems. We evaluate on static and dynamic configurations of MazeBase grid-worlds, with randomly generated environments of several different sizes, and on a StarCraft navigation scenario, with more complex dynamics, and pixels as input.
[ "Reinforcement Learning", "Value Iteration", "Navigation", "Convolutional Neural Networks", "Learning to plan" ]
https://openreview.net/pdf?id=SJG6G2RqtX
https://openreview.net/forum?id=SJG6G2RqtX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1lSzKybxN", "S1gEGt6KC7", "rklgnOpKA7", "BJxWBITYAX", "B1lNYtd03m", "rkexJb4i2X", "B1lJJiUc27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544775948703, 1543260428009, 1543260327719, 1543259704547, 1541470588037, 1541255384211, 1541200598936 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1308/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1308/Authors" ], [ "ICLR.cc/2019/Conference/Paper1308/Authors" ], [ "ICLR.cc/2019/Conference/Paper1308/Authors" ], [ "ICLR.cc/2019/Conference/Paper1308/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1308/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1308/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"Interesting idea, reviewers were positive and indicated presentation should be improved.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting idea, reviewers were positive and indicated presentation should be improved.\"}", "{\"title\": \"Rebuttal to AnonReviewer3\", \"comment\": \"We thank the reviewer for the suggestions and positive comments. We would like to answer some of their questions:\\n\\n1) This is an interesting question, thank you for asking. In general MVProp is applicable to any path-planning task where the dynamics are deterministic, the agent is generally interested in finding the path with lowest cost (or, in this case, highest reward), and the reward function associated to the task does correspond to simply in one based on path-planning (i.e. some cost is associated to moving and getting into invalid states, and some reward is given for reaching goals). This applies to a variety of path-planning problems, but if the environment has a more nuanced reward function that would provide quicker feedback from minimising negative cost, VProp might learn faster. Our experiments show that our models work outside of the deterministic assumption (see experiments on stochastic environments in Section 4.2).\\n\\n2) We are not certain it is clear we would gain further intuition from looking at DMLab, VizDoom, or other maze environments, since a birds-eye version of the tasks available in these environments would be relatively similar to either our static grid-world setup or the dynamic ones, but much slower and resource intensive. That said, while we might indeed explore in the future such tasks, we feel that VProp and MVProp might be best used as planning modules within a larger and more complex planning architecture that we mentioned in Section 1.1, such as Niu et al. (2017), and Gupta et al. (2017). Also note that these environments provide a standard setup (i.e. first person view, high degree of partial observability, some stochasticity) that is definitely beyond the scope of all our models and baselines.\\n\\n3) At each agent step, our model (and the baseline) needs to convolve for K steps, where K is equal to roughly the distance to the goal (step-wise) for each step. If the environment is deterministic and \\u201cstatic\\u201d, i.e. goal state and unreachable states do not change, the agent only needs to do this process once, however if the environment changes stochastically the model assumptions are broken, thus we need to at least allow to replan after some amount of steps (and we chose 1 to give the VIN baseline a fair chance). 
There are ways to improve upon this, such as building a hierarchical planner directly inside the VProp modules, but such methods require significant changes and are pretty much future work.\\n\\n4) Figure 1 shows the (average) final reward obtained in the static settings, where the optimal reward is $goal_reward - optimal_steps * step_cost$, where reward_goal = 1 and cost_step = 0.01. MVProp very quickly reaches optimality, which on average gives reward slightly below 1 in all test environments, while VProp is more unstable when generalising to larger instances. Please see Table 1 in the Appendix for more precise numbers. In terms of videos, we will show a demo of the models working at the conference.\\n\\n5) We don\\u2019t think of our work as necessary just an extension of VIN, but more of a principled way to learn planning modules that are fully convolutional and that can generalize across wildly different planning horizons and input sizes while interacting with the environment (thus via RL). The great majority of agents / planners based on deep neural networks tends to either ignore the problem altogether, or use fixed transformations on the input, leading to resolution and/or information loss. We look forward to seeing more work tackling this problem, and we hope VProp and MVProp will provide a good step in that direction.\"}", "{\"title\": \"Rebuttal to AnonReviewer2\", \"comment\": \"We thank the reviewer for the positive comments. Here\\u2019s some answers which will hopefully clarify some of the questions posed:\\n\\n>d_{rew} is not defined \\n\\nRegarding \\\\drew, in the case of VIN we do implicitly refer to it with \\u201coutput channels\\u201d, as it really is just some variable defining the number of channels used for the embedding function. We have adjusted Section 2.2 to make this more evident. Note also that in the case of VProp, it is 3 (r^in, r^out, p), and 2 (r, p) for MVProp.\\n\\n>the shared weights should be explained in more details\\n\\nWhen we say \\u201cshared weights\\u201d we mean it literally: the recurrence step is done by the same network layers, as opposed to convolving at each step using different parameters. That (trivially) reduces the amount of parameters needed when the network is fully unrolled, and it allows us to generalise to larger environments by unrolling more.\\n\\n>Inconsistent use of theta in \\\\psi [sic.], missing gamma, and definition of 1_{s' \\\\neq \\\\emptyset }\\n\\nWe are not using \\\\psi anywhere. If instead you are referring to \\\\Phi, for consistency we have removed the theta from previous equations prior to Section 4 in the new revision.\\n\\nIndeed, there should be a \\\\gamma behind V_{w^t} for both updates, and there's a missing \\\"not\\\" in the definition of 1_{s' \\\\neq \\\\emptyset }, thank you for noticing both. We have fixed them in the new revision.\\n\\n>why do you need the parameters w to represent the value function V, if you already have v^k_{i,j} available? is it just to say that your NN is updated with two distinct cost functions? \\n\\nExactly, the loss in off-policy actor-critic for the value head is indeed different from the one applied to the policy head, and the parameters updated are not the same, so we felt that it was clearer to split the two. 
Furthermore, to be completely clear, V^k{i, j} is the value function inside the planning module, while V is the overall value function of the final \\u201cpolicy\\u201d layer used within actor-critic.\\n\\n>I did not understand the assumptions made by VProp, do you consider that the transition function T is known? this seems to be the case when you explain that transitions are deterministic and that there is a mapping between the actions and the positions, but is never really said\\n\\nWe do not assume we know the transition function T, and in fact we do learn some parameters of it. However we do assume that the function is constrained in certain ways, i.e. at most one state may be accessible from another state, given an action.\\n\\n>Compared to VIN, VProp uses an extra maximum to compute v^k_{i, j}, why? In this case, the approximation of the value function can never decrease.\\n\\nThe extra \\u201cmax\\u201d is equivalent to adding an extra action of staying in the same state with zero immediate reward. We would not lose any generality if immediate rewards are always positive. In our case, it is a convenient way to represent absorbing states (i.e., goal states).\\n\\n>How is R_{a, i, j, i ', j'} broken into r^{in}_{i ', j'} - r^{out}_{i, j} in VProp? Is the reward function known to the agent at all points?\\n\\nThe reward broken down in two values is a choice of parametrization. This is an assumption, which drastically reduces the number of parameters to learn (because there are only two values per state instead of R(i,j,a,i\\u2019,j\\u2019) ). Our point here is to say that this parametrization is sufficient to represent cases of interest, but in general this is an assumption that does not always hold.\\n\\n>In MVProp, can r_{i, j} be negative?\\n\\nIt could (in which case it would decrease the values of nearby states that are propagated to the current state). In practice, stopping the propagation can be carried out by near-0 values of the propagation gate, and rewards are constrained to be positive by a sigmoid activation function on the reward channel.\\n\\n>In MVProp, how does the rewriting in p * v + r * (1-p) shows that only positive rewards are propagated? Does not it come only from the max?\\n\\nYes, the max with the current value is what makes negative values of the reward in a state not propagate to nearby states (because a high value of r_{i,j} propagates to nearby states by first increasing v_{i,j}, which itself is propagated to the neighboring v_{i\\u2019,j\\u2019} at further iterations. Negative values of r_{i,j} would not update v_{i,j} and thus wouldn\\u2019t be propagated to nearby states. We believed the rewriting as pv + r(1-p) helps understanding that negative values of r_{i,j} will not be used to update v_{i,j} in the first iteration. We will clarify this statement.\\n\\n>In the experiments, S is not fully described, \\\\phi(s) neither\\n\\nFor the grid-world experiments, the state is described in Section 2.2, and \\\\phi(s) is a fixed function that splits each feature into its own channel; for the StarCraft experiments, the grid-world featurization is similarly done (and based on TorchCraft\\u2019s API), while in pixel space \\\\phi(s) is added to the network as two extra convolutional -> max pooling layers, as described in Appendix D.\"}", "{\"title\": \"Rebuttal to AnonReviewer1\", \"comment\": \"We thank the reviewer for the very positive comments. 
Regarding their point about comparison against standard model-free algorithms, we didn\u2019t compare against them because these agents can\u2019t be applied well to the experimental setup. More precisely:\\n\\n- typical models used with these algorithms cannot deal with varying input sizes, unless you engineer a function that would downsample / upsample the observation to a particular shape. One could in principle learn such a function, but that would require an entirely different experimental and evaluation loop, e.g. one that would keep the \u201ccore\u201d agent model frozen while training only an embedding function (which would however likely result in unstable learning). Making a standard model size-invariant w.r.t. the observation size is a problem that we have decided to tackle by employing fully convolutional models, but this differs from most DRL work.\\n\\n- the models would need to be much bigger in terms of hyperparameters, and would most likely need a better curriculum to deal with the sparse positive signals, thus also making comparison trickier.\\n\\nTamar et al. (2016) (whose experimental setup is similar to ours) show these points pretty clearly. \\n\\nPlease also note that we could replace our actor-critic update rule (and agent setup) with PPO (and DDPG if we were to use a continuous action space), but the focus of our paper was the planning module, so all our experiments share the same agent setup.\"}", "{\"title\": \"Interesting extension of the original value iteration networks (VIN), promising work\", \"review\": \"The paper presents an extension of the original value iteration networks (VIN) by considering a state-dependent transition function, which alleviates the limitation of VIN to translation-invariant transition functions, and by further constraining the reward function parametrization to improve the sample efficiency of learning-to-plan algorithms. The first problem is addressed by interpreting transition probabilities as state-dependent discount factors, given by a sigmoid function that takes as input state features. The second problem is addressed by defining the reward function as the difference between an input reward and an output cost. Obstacle states are given a high cost. The proposed method is evaluated on random grids of different sizes, of the same type as the grids considered in the VIN paper. Comparisons with VIN show that the proposed MVProp approach outperforms VIN by several orders of magnitude and can learn optimal plans in less than a thousand episodes, compared to VIN that doesn't seem to learn much here even after 30 thousand episodes.\\nThe paper is well-written in general. Certain aspects of value iteration networks were explained too briefly and the reviewer had to re-read the original VIN paper to grasp certain details of the proposed approach. This work is an interesting improvement of VIN, but somewhat incremental in nature as the improvement is limited to slightly changing the reward and transition representations. However, the resulting performance seems very impressive, especially for larger grids. One question that needs to be clarified is: how is this work situated with respect to the body of work on RL? 
How does this method compare empirically to model-free algorithms such as DDPG and PPO?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Missing information in the exposition\", \"review\": \"Update:\\nI thank the authors for their clarifications. I have raised my rating, however I believe the exposition of the paper should be improved and some of their responses should be integrated to the main text.\\n\\nThe paper proposes two new modules to overcome some limitations of VIN, but the additional or alternative hypotheses used compared to VIN are not clearly stated and explained in my opinion.\", \"pros\": [\"experiments are numerous and advanced\", \"transition probabilities are not transition-invariant compared to VIN\", \"do not need pretraining trajectories\"], \"cons\": [\"limitation and hypotheses are not very explicit\", \"Questions/remarks :\", \"d_{rew} is not defined\", \"the shared weights should be explained in more details\", \"sometimes \\\\psi(s) is written as parametrized by \\\\theta, sometime not\", \"is it normal that the \\\\gamma never appears in your formula to update the \\\\theta and w? yet reading the background part I feel that you optimize the discounted sum of the rewards, is it the case?\", \"I think there is a mistake in the definition of 1_{s' \\\\neq \\\\emptyset }, it is 1 if s' is NOT terminal and 0 otherwise, am I wrong?\", \"why do you need the parameters w to represent the value function V, if you already have v^k_{i,j} available? is it just to say that your NN is updated with two distinct cost functions?\", \"I did not understand the assumptions made by VProp, do you consider that the transition function T is known? this seems to be the case when you explain that transitions are deterministic and that there is a mapping between the actions and the positions, but is never really said\", \"Compared to VIN, VProp uses an extra maximum to compute v^k_{i, j}, why? In this case, the approximation of the value function can never decrease.\", \"How is R_{a, i, j, i ', j'} broken into r^{in}_{i ', j'} - r^{out}_{i, j} in VProp? Is the reward function known to the agent at all points?\", \"In MVProp, can r_{i, j} be negative?\", \"In MVProp, how does the rewriting in p * v + r * (1-p) shows that only positive rewards are propagated? Does not it come only from the max?\", \"In the experiments, S is not fully described, \\\\phi(s) neither\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Official review\", \"review\": \"Update:\\n\\nI thank the authors for the response. Unfortunately, the response does not mention modifications made to the paper according to the comments. According to pdfdiff, modifications to the paper are very minor, and none of my comments are addressed in the paper. I think the paper shows good results, but it could very much benefit from improved presentation and evaluation. I do recommend acceptance, but if the authors put more work in improving the paper, it could have a larger impact.\\n\\n------\\n\\nThe paper proposes a learnable planning model based on value iteration. The proposed methods can be seen as modifications of Value Iteration Networks (VIN), with some improvements aimed at improving sample efficiency and generalization to large environment sizes. 
The method is validated on gridworld-type environments, as well as on a more complex StarCraft-based domain with raw pixel input.\\n\\nPros:\\n1) The topic of the paper is interesting: combining the advantages of learning and planning seems like a promising direction for achieving adaptive and generalizable systems.\\n2) The presentation is quite good, although some details are missing.\\n3) The proposed method can be effectively trained with reinforcement learning and generalizes well to much larger environments than trained on. It beats vanilla VIN by a large margin. The MVProp variant of the method is especially successful.\\n\\nCons:\\n1) I would like to see a more complete discussion of the MVProp method. Propagation of only positive rewards seems like somewhat of a hack. Is this a general solution or is it only applicable to gridworld navigation-type tasks? Why? If not, is the area of applicability of MVProp different from VProp? Also, is the area of applicability of VProp different from VIN? It\u2019s important to discuss this in detail.\\n2) I wonder how the method would behave in more realistic gridworld environments, for instance similar in layout to those used in RL navigation literature (DMLab, ViZDoom, MINOS, etc). The presented environments are quite artificial and seem to basically only require \u201cobstacle avoidance\u201d, not so much deliberate long-distance planning.\\n3) Some details are missing. For instance, I was not able to find the exact network architectures used in different tasks. \\nRelated to this, I was confused by the phrase \u201cAs these new environments are not static, the agent needs to re-plan at every step, forcing us to train on 8x8 maps to reduce the time spent rolling-out the recurrent modules.\u201d I might be misunderstanding something, but is there any recurrent network in VProp? Isn\u2019t it just predicting the parameters once and then rolling out value iteration forward without any learning? Is this so time-consuming?\\n4) Why does the performance of even the best method not reach 100% even in the simpler environments in Figure 2? Why is the performance plateauing far from 100% in the more difficult case? It would be interesting to see more analysis of how the method works, when it fails, and which parts still need improvement. On a related topic, it would be good to see more qualitative results both in MazeBase and StarCraft - in the form of images or videos.\\n5) Novelty is somewhat limited: the method is conceptually similar to VIN. \\n\\nTo conclude, I think the paper is interesting and the proposed method seems to perform well in the tested environments. I am quite positive about the paper, and I will gladly raise the rating if my questions are addressed satisfactorily.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
rJl6M2C5Y7
Online Hyperparameter Adaptation via Amortized Proximal Optimization
[ "Paul Vicol", "Jeffery Z. HaoChen", "Roger Grosse" ]
Effective performance of neural networks depends critically on effective tuning of optimization hyperparameters, especially learning rates (and schedules thereof). We present Amortized Proximal Optimization (APO), which takes the perspective that each optimization step should approximately minimize a proximal objective (similar to the ones used to motivate natural gradient and trust region policy optimization). Optimization hyperparameters are adapted to best minimize the proximal objective after one weight update. We show that an idealized version of APO (where an oracle minimizes the proximal objective exactly) achieves global convergence to a stationary point and locally second-order convergence to a global optimum for neural networks. APO incurs minimal computational overhead. We experiment with using APO to adapt a variety of optimization hyperparameters online during training, including (possibly layer-specific) learning rates, damping coefficients, and gradient variance exponents. For a variety of network architectures and optimization algorithms (including SGD, RMSprop, and K-FAC), we show that with minimal tuning, APO performs competitively with carefully tuned optimizers.
[ "hyperparameters", "optimization", "learning rate adaptation" ]
https://openreview.net/pdf?id=rJl6M2C5Y7
https://openreview.net/forum?id=rJl6M2C5Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1x0eoSQgV", "ryl2LHzGJE", "Syx2qR4gyN", "S1l-tamlkN", "S1ei3QGoCQ", "rye-WhkcRX", "Bklg17AYCQ", "rylbu-RtRm", "Bkg51P6KCX", "Bkld8r3FRQ", "rkxxb72KCX", "SylicetE67", "r1labrqphm", "SJl8R-eTnQ", "SklCKPP1h7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544932086279, 1543804244153, 1543683731803, 1543679352669, 1543345074981, 1543269369315, 1543262936008, 1543262568993, 1543259874025, 1543255376483, 1543254775733, 1541865618788, 1541412101097, 1541370317539, 1540482950494 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1307/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1307/Authors" ], [ "ICLR.cc/2019/Conference/Paper1307/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1307/Authors" ], [ "ICLR.cc/2019/Conference/Paper1307/Authors" ], [ "ICLR.cc/2019/Conference/Paper1307/Authors" ], [ "ICLR.cc/2019/Conference/Paper1307/Authors" ], [ "ICLR.cc/2019/Conference/Paper1307/Authors" ], [ "ICLR.cc/2019/Conference/Paper1307/Authors" ], [ "ICLR.cc/2019/Conference/Paper1307/Authors" ], [ "ICLR.cc/2019/Conference/Paper1307/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1307/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1307/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1307/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes an amortized proximal optimization method to adapt optimization hyperparameters. Empirical results on many problems are performed.\\n\\nReviewers overall find the ideas interesting, however there still are some questions whether strong baselines are used in the experimental comparisons. The reviewers also point that the theoretical results are not useful ones since the assumptions are not satisfied in practice. One of the reviewer increased their score, but the other has maintained that the paper requires more work.\\n\\nThe presentation of the result is also a bit problematic; the font sizes in the figure are too small to read.\\n\\nThe paper contains interesting ideas, but it does not make the bar for acceptance in ICLR. Therefore I recommend a reject. I encourage the authors to resubmit this work after improving the presentation and experiments.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting idea but does not make the bar.\"}", "{\"title\": \"WideResNet-28-10, noisy quadratic problem, and lambda\", \"comment\": \"Q: Weight decay\\n\\nWe used weight decay 1e-5 for all SGD/SGDm/RMSprop experiments on CIFAR. We apologize that this was not clearly described in the paper, and we will add this information to the final version.\\n\\nUsing a larger weight decay of 5e-4 improves the baselines for SGD and SGDm to 94.32% and 94.82%, respectively, on CIFAR-10. Our current results for SGD-APO (93.82%) and SGDm-APO (94.59%) are still comparable to their respective baselines. 
We also verified that SGD-APO and SGDm-APO perform well with weight decay 5e-4, achieving test accuracies 94.22% and 94.62%, respectively.\\n\\nIn addition, we show in the WideResNet experiment below that APO performs well when using weight decay 5e-4.\", \"q\": \"It would be nice to have an experiment similar to the one given in \\\"No More Pesky Learning Rates\\\" by Tom Schaul, Figure 2.\\n\\nWe have performed this experiment, and will add the plot to the final version. We followed the experimental setup from [2], which analyzed a quadratic cost function based on [3]. In the two-dimensional experiment of the noisy quadratic problem, we observed that SGD-APO approaches the minimum in the low curvature direction faster than the myopic best learning rate, which suggests that APO does not suffer from short-horizon bias.\\n\\n\\n[1] Sergey Zagoruyko and Nikos Komodakis. Wide Residual Networks. BMVC 2016.\\n[2] Yuhuai Wu, Mengye Ren, Renjie Liao, Roger Grosse. Understanding Short-Horizon Bias in Stochastic Meta-Optimization. ICLR 2018.\\n[3] Tom Schaul, Sixin Zhang, Yann LeCun. No More Pesky Learning Rates. ICML 2013.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your updates.\\n\\n1) Please clarify why you didn't use weight decay for your CIFAR experiments except for the case without momentum? Weight decay is used in most experimental setups for CIFAR-10 and CIFAR-100. In other words, it seems that you used a weaker baseline which makes \\\"showed that it converges faster and generalizes better than optimal fixed learning rates\\\" too strong. \\n\\n2) I don't think that ResNet34 is a state-of-the-art/\\\"strong baseline\\\" network. It is a tiny network. Wide ResNets showed better results (about 4% compared to about 6% for your ResNet) back in 2016 (2.5 years ago). I was asking for better networks to make it easier to compare with known results. \\n\\n3) Could you please be a bit more specific about \\\"a small fraction\\\" of the computational overhead and whether it is related to the size of the network. I guess that the size of the network might also affect some results, e.g., for layer-wise learning rates. \\n\\n4) I think that the paper would benefit from a more detailed interpretation of the hyperparameter lambda. To which extent it is wrong/correct to view it as just a higher level learning rate (step-size, scaling factor) especially when only one hyperparameter, learning rate of the low-level optimizer is considered. \\n\\n5) It would be nice to have an experiment similar to the one given in \\\"No More Pesky Learning Rates\\\" by Tom Schaul, Figure 2. This is definitely too late to ask it for this review but you may consider it for your future works. \\n\\nThank you.\"}", "{\"title\": \"Updated baselines, and comparison to learning rate schedules (summary)\", \"comment\": [\"Thank you for your helpful feedback. We have incorporated your suggestions into the updated paper.\", \"Specifically, we have:\", \"Updated the baseline model for CIFAR-10/100 from VGG11 to ResNet34.\", \"Used manual learning rate decay schedules for the CIFAR-10/100 baselines. We obtained 93-94% test accuracy on CIFAR-10 (SGD/SGDm/RMSprop/K-FAC) and 73-74% test accuracy on CIFAR-100 (SGD/SGDm). All are compared to their APO variants, which performed as well or better. The final results are shown in the table in the response to all reviewers at the top.\", \"Shown that APO is competitive with manual schedules both in terms of test accuracy and training loss with ResNet34. 
This demonstrates the practical applicability of APO for contemporary networks.\", \"Updated Figure 2 on CIFAR-10 with SGD/SGDm/RMSprop, Figure 4 on CIFAR-100 with SGD/SGDm, and Figure 6 on CIFAR-10 with SGD. We also added Figure 3 on CIFAR-10 with K-FAC. Each figure compares the baseline optimizers with their APO variants.\", \"Thank you for having helped us improve the paper.\"]}", "{\"title\": \"Updated paper with additional experiments and improved writing (continued)\", \"comment\": \"In the current version of the paper, we have updated Figure 2 on CIFAR-10 with SGD/SGDm/RMSprop, Figure 4 on CIFAR-100 with SGD/SGDm, Figure 5 on SVHN with RMSprop, and Figure 6 on CIFAR-10 with SGD. We also added Figure 3 on CIFAR-10 with K-FAC. Each figure compares the baseline optimizers with their APO variants.\\n\\nWe also added Sections E, G, and H to the Appendix, to show robustness to initial learning rates, Adam experiments, and a comparison to population-based training, respectively.\"}", "{\"title\": \"Thank you for your comment\", \"comment\": \"Thank you for your helpful comments.\", \"q\": \"Additional K-FAC results\\n\\nWe have added K-FAC results on CIFAR-10, in which we use APO to tune the learning rate and damping, and compare to K-FAC with a fixed learning rate as well as a manual decay schedule. We find that APO performs well when tuning the global learning rate, and that the training loss and test accuracy improve when we tune both the learning rate and damping coefficient.\"}", "{\"title\": \"Updated paper with additional experiments and improved writing (continued)\", \"comment\": \"Comparison to Population-Based Training\\n---------------------------------------------------------\\nWe added Section H to the appendix comparing PBT with APO to adapt the learning rate for RMSprop while training a ResNet34 model on CIFAR-10.\\n\\nThere are several important design decisions that must be made for PBT, including 1) the population size; 2) the exploration strategy; 3) the exploration frequency; and 4) the resampling probability. In particular, we found that it was critical to set the probability of resampling a learning rate value from an underlying hyperparameter distribution to 0; otherwise, the learning rate would jump from small to large values and cause training to become unstable.\\n\\nIn contrast, APO only requires a simple grid search over lambda. We found that APO substantially outperformed PBT, achieving a lower final training loss and equal test accuracy in much less wall-clock time. Because APO uses gradient-based optimization to tune the learning rate, it is more efficient than PBT, which is an evolutionary method that uses random perturbations to adapt the learning rate.\\n\\nAdam\\n--------\\nWe added experiments for tuning the global learning rate of Adam with APO in appendix Section G, where Adam-APO achieves better performance than Adam with a fixed global learning rate, and is competitive with Adam with a manual schedule.\\n\\nK-FAC\\n--------\\nWe added results for K-FAC on CIFAR-10. We compare K-FAC-APO to K-FAC with a fixed learning rate as well as a learning rate schedule. 
We find that APO performs well when tuning the global learning rate, and that the training loss and test accuracy improve when we tune both the learning rate and damping coefficient.\"}", "{\"title\": \"Updated paper with additional experiments and improved writing\", \"comment\": \"We thank all the reviewers for their insightful and helpful comments.\\n\\nWe made the following changes to the paper to address the reviewers\\u2019 concerns:\\n\\nUpdated Baselines and Comparison to Learning Rate Decay Schedules\\n-------------------------------------------------------------------------------------------------\\nWe updated our results for CIFAR-10 and CIFAR-100 using a larger network, ResNet34, instead of the VGG11 model used in the previous version. We also compared APO to manual learning rate decay schedules. For CIFAR-10/100, we trained the ResNet34 for 200 epochs, decaying the learning rate by a factor of 5 three times during training.\", \"the_final_test_accuracies_of_the_updated_model_with_and_without_apo_are\": \"| CIFAR-10 | CIFAR-100 |\\n--------------------------+--------------+---------------+\\nSGD (fixed lr) 92.97 72.69\\nSGDm (fixed lr) 92.77 72.53\\nSGD (decayed lr) 93.29 73.45\\nSGDm (decayed lr) 93.53 73.80\\nSGD-APO 93.82 74.65\\nSGDm-APO 94.59 73.89\\n\\n | CIFAR-10 |\\n-------------------------------+--------------+\\nRMSprop (fixed lr) 92.00 \\nRMSprop (decayed lr) 93.54\\nRMSprop-APO 93.58\\n\\n | CIFAR-10 |\\n-------------------------------+--------------+\\nK-FAC (fixed lr) 92.56\\nK-FAC (decayed lr) 94.25 \\nK-FAC-APO {lr} 93.91\\nK-FAC-APO {lr,damping} 94.51\\n\\nOur manual learning rate decay schedules achieve test accuracies ~93-94% on CIFAR-10, which we believe are strong baselines.\\nIn all cases, the learning rate schedules discovered by APO are competitive with the custom schedules.\\n\\n\\nLearning Rate Initialization\\n-------------------------------------\\nWe added a section to the appendix (Section E) in which we show that APO is robust to the initial learning rate of the base optimizer. We perform experiments with RMSprop on Rosenbrock, MNIST, and CIFAR-10, and show that the training loss, test accuracy, and learning rate trajectories are nearly identical when using initial learning rates that range across 5 orders of magnitude.\\n\\nComputational Efficiency\\n----------------------------------\\nEach meta-optimization step requires approximately the same amount of computation as a parameter update for the model.\\n\\nBy using a sufficiently large meta learning rate, we can amortize the meta-optimization by performing 1 meta-update for every K steps of the base optimization. We found that K=10 works well across our settings, while reducing the computational requirements of APO to just a small fraction more than the original training procedure.\\n\\nChoosing Lambda\\n-------------------------\\nThe only parameter that needs to be tuned in APO is lambda. The meta learning rate and meta update interval can be kept at our default values, which work well across many settings. Since each setting of lambda determines a learning rate schedule, tuning lambda is more valuable than tuning a fixed learning rate; it is equivalent to tuning a full learning rate schedule, for which the search space is much larger (i.e., to find a schedule manually, one must decide how often to decay the learning rate, and by what factor to decay each time).\"}", "{\"title\": \"Clarified writing and additional experiments\", \"comment\": \"Thank you for your helpful comments. 
We have improved the writing to incorporate your feedback. We have also performed more experiments to compare APO to manual learning rate schedules.\", \"q\": \"How to tune lambda? Tuning a good lambda v.s. tuning a good step-size, which one costs more?\\n\\nWe tune lambda by performing a grid search over the range {1e-1, 1e-2, 1e-3, 1e-4, 1e-5}. Because each lambda value gives rise to a learning rate schedule, tuning lambda yields significantly more value than tuning a fixed learning rate. Instead of trying to come up with a custom learning rate schedule, which would require deciding how frequently to decay the learning rate, and by what factor it should be decayed, all one needs to do is perform a grid search over a fixed set of lambdas to find an automated schedule that is competitive with hand-designed schedules (which are the result of years of accumulated experience in the field).\"}", "{\"title\": \"Updated baselines, and comparison to learning rate schedules\", \"comment\": \"Thank you for your helpful comments. We have addressed your concern about the baseline models and learning rate schedules in our updated paper.\", \"q\": \"You mention that \\\"APO converges quickly from different starting points on the Rosenbrock surface\\\" but 10000 iterations is not quick at all for the 2-dimensional Rosenbrock, it is extremely slow compared to 100-200 function evaluations needed for Nelder-Mead to solve it. I guess you mean w.r.t. the original RMSprop.\\n\\nYes, we intended to say that on Rosenbrock, RMSprop-APO converges quickly compared to baseline RMSprop; we have updated the paper to clarify this.\"}", "{\"title\": \"Addressed Adam, PBT, and initial learning rates\", \"comment\": \"Thank you for your insightful comments. We have incorporated your suggestions into the revised version of the paper.\", \"q\": \"In your experiments, you set the learning rate to be really low. What happens if you set it to be arbitrarily high? Can you algorithm recover good learning rates?\\n\\nAPO is robust to the initial learning rate of the base optimizer, using the default meta learning rate suggested in our updated paper. We have added a section to the appendix in which we include RMSprop-APO experiments on Rosenbrock, MNIST, and CIFAR-10 to show that the training loss, test accuracy, and learning rate trajectories are nearly identical when starting with initial learning rates {1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7}, spanning 5 orders of magnitude. Note that 1e-2 is quite a large initial learning rate for RMSprop.\"}", "{\"comment\": \"Hi,\", \"i_have_several_questions_about_the_experiments\": [\"How does APO compare to standard learning rate decay schedule (e.g., decay lr with a factor of 10 in the middle of training)?\", \"The reported numbers in terms of test performance on CIFAR-10 (~90%) and CIFAR-100 (~65%) are lower than my expectation. As I know, VGG16 can easily get >70% on CIFAR-100 with BN and data augmentation. Besides, I suggest the authors focusing on CIFAR-100, rather than CIFAR-10 and MNIST (too easy).\", \"Can you show more experiments for K-FAC since you mentioned K-FAC in abstract and introduction? The experiments of K-FAC on MNIST is far from convincing, you should at least show some experiments on CIFAR. Also, you mentioned in 5.3 that K-FAC-APO first decreases the damping, then gradually increases the damping later in the training. Does it really make sense? 
As argued by the original K-FAC paper, the damping would diminish later in the training since the quadratic approximation is accurate enough.\"], \"title\": \"Concerns about experiments (more experiments need to be done!)\"}", "{\"title\": \"Interesting and Novel contribution - Some concerns that need to be answered regarding experiments and theory\", \"review\": \"Summary:\\nThis paper introduces Amortized Proximal Optimization (APO) that optimizes a proximal objective at each optimization step. The optimization hyperparameters are optimized to best minimize the proximal objective. \\n\\nThe objective is represented using a regularization style parameter lambda and a distance metric D that, depending on its definition, reduces the optimization procedure to Gauss-Newton, General Gauss Newton or Natural Gradient Descent.\\n\\nThere are two key convergence results which are dependent on the meta-objective being optimized directly which, while not practical, gives some insight into the inner workings of the algorithm. The first result indicates strong convergence when using the Euclidean distance as the distance measure D. The second result shows strong convergence when D is set as the Bregman divergence. \\n\\nThe algorithm optimizes the base optimizer on a number of domains and shows state-of-the-art results over a grid search of the hyperparameters on the same optimizer.\", \"clarity_and_quality\": \"The paper is well written.\", \"originality\": \"It appears to be a novel application of meta-learning. I wonder why the authors didn\\u2019t compare or mention optimizers such as ADAM and ADAGRAD which adapt their parameters on-the-fly as well. Also how does this compare to adaptive hyperparameter training techniques such as population based training?\", \"significance\": \"Overall it appears to be a novel and interesting contribution. I am concerned though why the authors didn\\u2019t compare to adaptive optimizers such as ADAM and ADAGRAD and how the performance compares with population based training techniques. Also, your convergence results appear to rely on strong convexity of the loss. How is this a reasonable assumption? These are my major concerns.\", \"question\": \"In your experiments, you set the learning rate to be really low. What happens if you set it to be arbitrarily high? Can you algorithm recover good learning rates?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Update baselines\", \"review\": \"The paper proposes an approach to adapt hyperparameters online.\\nWhen learning rates are in focus, a convincing message would be to show that adaptation of learning rates is more efficient and simpler than their scheduling when tested on state-of-the-art architectures. \\nA. You demonstrate the results on CIFAR-10 for 10% error rates which corresponds to networks which are far from what is currently used in deep learning. Thus, it is hard to say whether the results are applicable in practice. \\nB. You don't schedule learning rates for your baseline methods except for a single experiment for some initial learning rate. \\nC. Your method involves a hyperparameter to be tuned which affects the shape of the schedule. This hyperparameter itself benefits from (requires?) some scheduling. \\n\\nIt would be interesting to see if the proposed method is competitive for training contemporary networks and w.r.t. simple schedule schemes. 
Online tuning of hyperparameters is an important functionality and I hope your paper will make it more straightforward to use it in practice. \\n\\n\\n* Minor notes:\\n\\nYou mention that \\\"APO converges quickly from different starting points on the Rosenbrock surface\\\" but 10000 iterations is not quick at all for the 2-dimensional Rosenbrock, it is extremely slow compared to 100-200 function evaluations needed for Nelder-Mead to solve it. I guess you mean w.r.t. the original RMSprop.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Simple and intuitive idea but needs more clarification\", \"review\": [\"I raised my rating. After the rebuttal.\", \"the authors address most of my concerns.\", \"it's better to show time v.s. testing accuracy as well. the per-epoch time for each method is different.\", \"anyway, the theory part acts still more like a decoration. as the author mentioned, the assumption is not realistic.\", \"-------------------------------------------------------------\", \"This paper presents a method to update hyper-parameters (e.g. learning rate) before updating of model parameters. The idea is simple but intuitive. I am conservative about my rating now, I will consider raising it after the rebuttal.\", \"1. The focus of this paper is the hyper-parameter, please focus and explain more on the usage with hyper-parameters.\", \"no need to write so much in section 2.1, the surrogate is simple and common in optimization for parameters. After all, newton method and natural gradients method are not used in experiments.\", \"in section 2.2, please explain more how gradients w.r.t hyper-parameters are computed.\", \"2. No need to write so much decorated bounds in section 3. The convergence analysis is on Z, not on parameters x and hyper-parameters theta. So, bounds here can not be used to explain empirical observations in Section 5.\", \"3. Could authors explain the time complexity of inner loop in Algorithm 1? Does it take more time than that of updating model parameters?\", \"4. Authors have done a good comparison in the context of deep nets. However,\", \"could the authors compare with changing step-size? In most of experiments, the baseline methods, i.e. RMSProp are used with fixed rates. Is it better to decay learning rates for toy data sets? It is known that SGD with fixed step-size can not find the optimal for convex (perhaps, also simple) problems.\", \"how to tune lambda? it is an important hyper-parameter, but it is set without a good principle, e.g., \\\"For SGD-APO, we used lambda = 0.001, while for SGDm-APO, we used lambda = 0.01\\\", \\\"while for RMSprop-APO, the best lambda was 0.0001\\\". What are reasons for these?\", \"In Section 5.2, it is said lambda is tuned by grid-search. Tuning a good lambda v.s. tuning a good step-size, which one costs more?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
Byxpfh0cFm
Efficient Augmentation via Data Subsampling
[ "Michael Kuchnik", "Virginia Smith" ]
Data augmentation is commonly used to encode invariances in learning methods. However, this process is often performed in an inefficient manner, as artificial examples are created by applying a number of transformations to all points in the training set. The resulting explosion of the dataset size can be an issue in terms of storage and training costs, as well as in selecting and tuning the optimal set of transformations to apply. In this work, we demonstrate that it is possible to significantly reduce the number of data points included in data augmentation while realizing the same accuracy and invariance benefits of augmenting the entire dataset. We propose a novel set of subsampling policies, based on model influence and loss, that can achieve a 90% reduction in augmentation set size while maintaining the accuracy gains of standard data augmentation.
[ "data augmentation", "invariance", "subsampling", "influence" ]
https://openreview.net/pdf?id=Byxpfh0cFm
https://openreview.net/forum?id=Byxpfh0cFm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1x3mHh-gE", "SkgnhejSC7", "SJgIeO84CQ", "r1xc5LLE0m", "HylD3z4ch7", "Hkldxs7qnX", "SyxMijTQhX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544828196241, 1542987955813, 1542903789814, 1542903442449, 1541190319383, 1541188336337, 1540770714108 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1306/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1306/Authors" ], [ "ICLR.cc/2019/Conference/Paper1306/Authors" ], [ "ICLR.cc/2019/Conference/Paper1306/Authors" ], [ "ICLR.cc/2019/Conference/Paper1306/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1306/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1306/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes several subsampling policies to achieve a clear reduction in the size of augmented data while maintaining the accuracy of using a standard data augmentation method. The paper in general is clearly written and easy to follow, and provides sufficiently convincing experimental results to support the claim. After reading the authors' response and revision, the reviewers have reached a general consensus that the paper is above the acceptance bar.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Useful contributions to practice\"}", "{\"title\": \"Revisions To Paper Uploaded\", \"comment\": \"Thank you for your detailed review and feedback. We\\u2019ve updated the paper to address your feedback on the datasets, subset selection method, and margin-based approach. We summarize these edits and address remaining comments below.\\n\\n[Dataset statistics] We\\u2019ve added the dataset class statistics in the Appendix under \\u201cExperiment Details\\u201d. NORB is balanced and the other two datasets are slightly imbalanced. We have also made the plots/tables slightly larger to improve readability as per your suggestion.\\n\\n[Subset selection] We agree that diversity-inducing subset selection techniques have the potential to be useful in this setting, as mentioned in our discussion section. We have included two sets of experiments with simple subset selection techniques: (1) a stratified sampling approach using k-means clustering, and (2) an implementation of augmentation with DPPs. The DPP approach used both bottleneck features alone and bottleneck features combined with influence. While these methods improve upon random sampling, they generally don\\u2019t outperform the performance of the proposed greedy influence/loss based approach. When combined in conjunction with influence/loss, however, they can obtain competitive performance compared to our proposed method (though with additional costs). Please find full results on these experiments in Appendix H and I. Thank you for this suggestion.\\n\\n[Margin-based approach] Prior work on VSVs considered this method only for a fixed set of support vectors. A contribution of our work is to draw on this prior art but note that metrics such as influence and loss are both more generally applicable and also allow the sampling to be considerably more flexible. For completeness, we have also included the margin-based approach that you suggest, which is a more direct generalization of VSV. However, this method does not improve upon the loss/influence approach. 
Results are provided in Appendix F.\\n\\n[Accuracy metrics] In our experiments, we transform the data simply to highlight the impact of augmentation, which allows us to illustrate the effect of subsampling strategies more clearly.\"}", "{\"title\": \"Revisions To Paper Uploaded\", \"comment\": \"Thank you for your thoughtful review.\\n\\n[Efficiency of approach] First, we note that there are benefits of our approach beyond efficiency. Determining the correct set of augmentations to apply is often a manual and time-consuming process, and applying augmentations to a small set of points can help to make this approach more user-friendly and interpretable (as more sophisticated, data-point specific augmentations can be applied, and general augmentations can be more readily diagnosed).\\n\\nIn terms of efficiency alone, we also note that selecting augmentations is often not a one-shot process: it may involve continually re-training a model and evaluating held-out accuracy to determine the best set of transformations. Therefore the efficiency improvements that result from reducing the dataset size may be compounded over multiple iterations.\", \"with_regards_to_just_a_single_application_of_data_augmentation\": \"For a known set of augmentations, the expected dataset reduction of our approach is (n_original + n_augmentations*sample_size) compared to (n_original + n_augmentations*n_original). As training time is linear to superlinear with the dataset size, this can provide a rough estimate of the time savings depending on the size of the sample.\\n\\nHowever, we understand that true efficiency savings can vary somewhat depending on the implementation of interest. For completeness, we have therefore performed an empirical study to estimate the practical efficiency of our approach in relation to the number of augmentations applied. These results, performed in Tensorflow, show a linear relationship between the number of training examples and the time per epoch. Full results are provided in Appendix G.\\n\\n[Two-stage approach] Although for CIFAR and NORB we freeze earlier layers, note that for MNIST, we fully retrain the model (as stated in the first paragraph in Sec 5). We have thus explored both settings. The two-stage approach can be viewed as an extension to classical feature extraction techniques (e.g., SIFT, HOG). An example common in natural language processing is word embeddings, which can be learned in one-shot approach (e.g., neural language models) or used in a two-stage approach (e.g., Word2Vec with a classifier). A similar example can be seen in vision with Face Embeddings (Schroff et. al., CVPR 2015). Also note that the model is trained once, and then can be used for continual improvements. For large datasets, it may be impractical to retrain a full deep network for every modification to the experiment.\\n\\n[Limited empirical studies, understanding of policies] We disagree that the empirical studies performed are limited in nature, or that we have made little effort to understand the policies. We have explored not only the proposed influence-based approach across several datasets, but have also explored around this space -- including several natural variants of the method (e.g., updating, re-weighting, loss). This set has now been expanded even further to consider diversity-inducing techniques. In our experiments, we have been careful to compare against natural baselines and related work (such as the VSV method and random sampling). 
In terms of developing an understanding for our approach, we provide an early analysis (Section 4.1) that explains why we expect the method to work, and then validate this intuition in our experiments (Section 5.1-5.2) and exploration of the resulting samples (Section 5.3 and Appendix E).\"}", "{\"title\": \"Revisions To Paper Uploaded\", \"comment\": \"Thank you for your encouraging review. We note that we have made a few additions to our original paper to strengthen the submission. In particular, we have: (i) more thoroughly compared against diversity-inducing subset selection baselines (as mentioned to AnonReviewer3), (ii) validated the efficiency improvements of our approach (in response to AnonReviewer2), and (iii) made cosmetic adjustments to the writing and plotting throughout to increase clarity. These additional edits further validate our initial approach and help to better illustrate the method.\"}", "{\"title\": \"Incomprehensive experiments with several missing baselines\", \"review\": \"Summary: The authors study the problem of identifying subsampling strategies for data augmentation, primarily for encoding invariances in learning methods. The problem seems relevant with applications to learning invariances as well as close connections with the covariate shift problem.\", \"contributions\": \"The key contributions include the proposal of strategies based on model influence and loss as well as empirical benchmarking of the proposed methods on vision datasets.\", \"clarity\": \"While the paper is written well and is easily accessible, the plots and the numbers in the tables were a bit small and thereby hard to read. I would suggest the authors to have bigger plots and tables in future revisions to ensure readability.\\n\\n>> The authors mention in Section 4.1 that \\\"support vector are points with non-zero loss\\\": In all generality, this statement seems to be incorrect. For example, even for linearly separable data, a linear SVM would have support vectors which are correctly classified. \\n\\n>> The experiment section seems to be missing a table on the statistics of the datasets used: This is important to understand the class distribution in the datasets used and if at all there was label imbalance in any of them. It looks like all the datasets used for experimentation had almost balanced class labels and in order to fully understand the scope of these sampling strategies, I would suggest the authors to also provide results on class imbalanced datasets where the distribution over labels is non-uniform. \\n\\n>> Incomprehensive comparison with benchmarks: \\na) The comparison of their methods with VSV benchmark seems incomplete. While the authors used the obtained support vectors as the augmentation set and argued that it is of fixed size, a natural way to extend these to any support size is to instead use margin based sampling where the margins are obtained from the trained SVM since these are inherently margin maximizing classifiers. Low margin points are likely to be more influential than high margin points.\\nb) In Section 5.3, a key takeaway is \\\"diversity and removing redundancy is key in learning invariances\\\". This leads to possibly other benchmarks to which the proposed policies could be compared, for example those based on Determinantal point processes (DPP) which are known for inducing diversity in subset selection. There is a large literature on sampling diverse subsets (based on submodular notions of diversity) which seems to be missing from comparisons. 
Another possible way to overcome this would be to use stratified sampling to promote equal representation amongst all classes. \\nc) In Section 2, it is mentioned that general methods for dataset reduction are orthogonal to the class of methods considered in this paper. However, on looking at the data augmentation problem as that of using fewest samples possible to learn a new invariance, it can be reduced to a dataset reduction problem. One way of using these reduction methods is to use the selected set of datapoints as the augmentation set and compare their performance. This would provide another set of benchmarks to which proposed methods should be compared.\\n\\n>> Accuracy Metrics: While the authors look at the overall accuracy of the learnt classifiers, in order to understand the efficacy of the proposed sampling methods at learning invariances, it would be helpful to see the performance numbers separately on the original dataset as well as the transformed dataset using the various transformations. \\n\\n>> Experiments in other domains: The proposed schemes seem to be general enough to be applicable to domains other than computer vision. Since the focus of the paper is the proposal of general sampling strategies, it would be good to compare them to baselines on other domains possibly text datasets or audio datasets.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Intuitive and useful\", \"review\": \"Data augmentation is a useful technique, but can lead to undesirably large data sets. The authors propose to use influence or loss-based methods to select a small subset of points to use in augmenting data sets for training models where the loss is additive over data points, and investigate the performance of their schemes when logistic loss is used over CNN features. Specifically, they propose selecting which data points to augment by either choosing points where the training loss is high, or where the statistical influence score is high (as defined in Koh and Liang 2017). The cost of their method is that of fitting an initial model on the training set, then fitting the final model on the augmented data set.\", \"they_compare_to_reasonable_baselines\": \"no augmentation, augmentation by transforming only a uniformly random chosen portion of the training data, and full training data augmentation; and show that augmenting even 10% of the data with their schemes can give loss competitive with full data augmentation, and lower than the loss achievable with no augmentation or augmentation of a uniformly random chosen portion of the data of similar size. Experiments were done on MNIST, CIFAR, and NORB.\\n\\nThe paper is clearly written, the idea is intuitively attractive, and the experiments give convincing evidence that the method is practically useful. 
I believe it will be of interest to a large portion of the ICLR community, given the usefulness of data augmentation.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Useful idea, though the contribution is a bit marginal\", \"review\": \"This paper considers how to augment training data by applying class-preserving transformations to selected datapoints.\", \"it_proposes_improving_random_datapoint_selection_by_selection_policies_based_on_two_metrics\": \"the training loss\\nassociate with each datapoint (\\\"Loss\\\"), and the influence score (from Koh and Liang that approximates leave-one-one test loss). The authors consider two policies based on these metrics: apply transformations to training points in decreasing \\norder of their score, or to training points sampled with probability proportional to score. They also consider two\", \"refinements\": \"downweighting observations that are selected for transformation, and updating scores everytime\\ntransformations associated with an observation are added. \\n\\nThe problem the authors tackle is important and their approach is natural and promising. On the downside, the theoretical \\ncontribution is moderate, and the empirical studies quite limited.\", \"the_stated_goals_of_the_paper_are_quite_modest\": \"\\\"In this work, we demonstrate that it is possible to significantly reduce the\\nnumber of data points included in data augmentation while realizing the same accuracy and invariance benefits of \\naugmenting the entire dataset\\\". It is not too surprising that carefully choosing observations according suitable policies \\nis an improvement over random subsampling, especially, when the test data has been \\\"poisoned\\\" to highlight this effect. \\nThe authors have demonstrated that two intuitive policies do indeed work, have quantified this on 3 datasets. \\n\\nHowever they do not address the important question of whether doing so can improve training time/efficiency. In other words, the authors have not attempted to investigate the computational cost of trying to assign importance scores to each observation. Thus this paper does not really demonstrate the overall usefulness of the proposed methodology.\\n\\nThe experimental setup is also limited to (I think) favor the proposed methodology. Features are precomputed on images using a CNN, and the different methods are compared on a logistic regression layer acting on the frozen features. The existence of such a pretrained model is necessary for the proposed methods, otherwise one cannot assign selection scores to different datapoints. However, this is not needed for random selection, where the transformed inputs can directly be input to the system. A not unreasonable baseline would be to train the entire CNN with the augmented 5%,10%, 25% datasets, rather than just the last layer. Of course this now involves training the entire CNN on the augmented dataset, rather than just the last layer, but how relevant is the two stage training approach that the authors propose?\\n\\nIn short, while I think the proposed methodology is promising, the authors missed a chance to include a more thorough analysis of the trade-offs of their method.\\n\\nI also think the paper makes only a minimal effort to understand the policies, the experiments could have helped shed some more light on this.\", \"minor_point\": \"The definition of \\\"influence\\\" is terse e.g. 
I do not see the definition of H anywhere (the Hessian of the empirical loss)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
SJMnG2C9YX
Complementary-label learning for arbitrary losses and models
[ "Takashi Ishida", "Gang Niu", "Aditya Krishna Menon", "Masashi Sugiyama" ]
In contrast to the standard classification paradigm where the true (or possibly noisy) class is given to each training pattern, complementary-label learning only uses training patterns each equipped with a complementary label. This only specifies one of the classes that the pattern does not belong to. The seminal paper on complementary-label learning proposed an unbiased estimator of the classification risk that can be computed only from complementarily labeled data. However, it required a restrictive condition on the loss functions, making it impossible to use popular losses such as the softmax cross-entropy loss. Recently, another formulation with the softmax cross-entropy loss was proposed with a consistency guarantee. However, this formulation does not explicitly involve a risk estimator. Thus model/hyper-parameter selection is not possible by cross-validation; we may need additional ordinarily labeled data for validation purposes, which is not available in the current setup. In this paper, we give a novel general framework of complementary-label learning, and derive an unbiased risk estimator for arbitrary losses and models. We further improve the risk estimator by non-negative correction and demonstrate its superiority through experiments.
[ "complementary labels", "weak supervision" ]
https://openreview.net/pdf?id=SJMnG2C9YX
https://openreview.net/forum?id=SJMnG2C9YX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SyleMjflxV", "HyeaZMtYRX", "SJe6JGKF0Q", "HJxJsWYtCX", "SkeJ7R_m6X", "B1gDnQhq3X", "rJedhpHq3X" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544723208301, 1543242244952, 1543242213070, 1543242135493, 1541799447112, 1541223343307, 1541197232476 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1305/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1305/Authors" ], [ "ICLR.cc/2019/Conference/Paper1305/Authors" ], [ "ICLR.cc/2019/Conference/Paper1305/Authors" ], [ "ICLR.cc/2019/Conference/Paper1305/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1305/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1305/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper studies learning from complementary labels \\u2013 the setting when example comes with the label information about one of the classes that the example does not belong to. The paper core contribution is an unbiased risk estimator for arbitrary losses and models under this learning scenario, which is an improvement over the previous work, as rightly acknowledged by R1 and R2.\", \"the_reviewers_and_ac_note_the_following_potential_weaknesses\": \"(1) R3 raised an important concern that the core technical contribution is a special case of previously published more general framework which is not cited in the paper. The authors agree with R3 on this matter; (2) the proposed unbiased estimator is not practical, e.g. it leads to overfitting when the cross-entropy loss is used, it is unbounded from below as pointed out by R1; (3) the two proposed modifications of the unbiased estimator are biased estimators, which defeats the motivation of the work and limits its main technical contributions; (4) R2 rightly pointed out that the assumption that complementary label is selected uniformly at random is unrealistic \\u2013 see R2\\u2019s suggestions on how to address this issue.\\nWhile all the reviewers acknowledged that the proposed biased estimators show advantageous performance on practice, the AC decides that in its current state the paper does not present significant contributions to the prior work, given (1)-(3), and needs major revision before submitting for another round of reviews.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Meta-Review\"}", "{\"title\": \"Thank you for your reviews!\", \"comment\": \"Thank you very much for the insightful reviews!\\n\\nQ) Theorem 1 and derived loss are special cases of more general framework published in prior work.\\nA) Thank you very much for pointing this out and explaining the relationship between our paper. We would like to clarify our contributions carefully and explain the relationship in our paper.\"}", "{\"title\": \"Thank you for your reviews!\", \"comment\": \"Thank you very much for the review and for the important questions!\\n\\nQ) Modified version appears to be biased, and does this reverse the claim of being able to cross-validate?\\nA) We apologize for the lack of clarity in the paper, but even if our learning objective uses a modified version that can potentially be a biased estimator, we can still use an unbiased version for our cross-validating objective. We would like to demostrate a very simple example (with a single validation split). 
We report the classification accuracy with Fashion-MNIST with just one trial: Free(proposed):59.06% / gradient-ascent(proposed):79.58% / forward[Xiyu'18]:46.72% / pairwise-comparison[Ishida'17]:76.03%. 9 hyper-parameter candidate combinations of weight decay (1e-4, 1e-5, 1e-6) and learning rate (1e-4, 1e-5, 1e-6) were used with 150 epochs. SGD with momentum 0.9 was used for optimization. We reported the test accuracy of the best model based on validation score (calculated from only complementarily-labeled data) from all epochs and all hyper-parameter combinations. For forward method [Xiyu'18], we simply used their proposed learning objective for validation criteria. The model was multi-layer perceptron with d-50-1 (In Figure 2, d-500-1 was used). Due to the time constraint, we were able to only finish a very simple setup with a single trial, with very few hyper-parameters and few candidates for each of them, so our main message here is not the result itself (for example \\u201cforward\\u201d is too weak and we can guess optimal hyper-parameters were not included), but to show that validation is possible with our unbiased estimator. We would like to report results for extensive experiments in the final version.\\n\\nQ) The hyper-parameters are fixed. Will this implicitly handicap / favor some over others?\\nA) Our motivation of the experiments was to demonstrate the failure of the proposed method based on Theorem 1, and how the two modifications solve the overfitting issues and show test results of all epochs during training. However, as you point out, this is not a good demonstration of comparing with the best hyper-parameter for each method, so we showed some simple experimental results that tune hyper-parameters with (complementary labeled) validation data in the answer to your previous question. We will add more experiments to demonstrate this.\\n\\nQ) Uniform assumption is na\\u00efve.\\nA) A potentially biased (but consistent) method has already been proposed for a non-uniform assumption [22], but one of our future work is to explore if proposing a non-uniform version of the unbiased estimator is possible or not.\\n\\nQ) The only justification of the uniform assumption is the mixed setting of crowdsourcing, but there is no mixed setting in experiments.\\nA) We would like to demonstrate with more experiments using the mixed setting with both ordinary and complementary labels.\"}", "{\"title\": \"Thank you for your reviews!\", \"comment\": \"Thank you very much for reading our paper in depth and for your reviews!\\n\\nQ) About the reasons for overfitting.\\nA) Thank you for the valuable feedback. A possible empirical test would be to make a non-negative version of [Ishida\\u201917] and compare it with the original estimator in [Ishida\\u201917].\\n\\nQ) The modifications of the unbiased estimator lead to biased estimators.\\nA) Yes, this is true, but we would like to point out that even if our learning objective is biased (with either non-negative version or gradient ascent version), we can still use the unbiased version for our cross-validating objective. Therefore, performing cross validation with only complementary data can still be achieved. Since we did not demonstrate the validation procedure in our experiments, we showed some preliminary results for experiments with validation in the reply to Reviewer2. \\n\\nQ) About the three mistakes on the notations of equations.\\nA) Thank you very much for pointing this out. 
We will fix these issues in our final version.\\n\\nQ) What is the loss from [Ishida\\u201917]?\\nA) We used pairwise comparison multi-class loss with sigmoid binary loss, which was used in the experimental section of [Ishida\\u201917].\"}", "{\"title\": \"Interesting setting, but problems with the original estimator and limited experimental evaluation weaken the claims\", \"review\": [\"Pros:\", \"The authors consider an interesting problem of learning from complementary labels\", \"They propose an approach that, assuming that the complementary label is selected uniformly at random, provides an unbiased estimate for any loss function, which is an improvement over the previous work.\", \"Experiments show promising results for modifications of the proposed estimate\"], \"cons\": [\"Having an unbiased estimate doesn't imply that its minimisation is a successful learning strategy. Indeed, the authors show that minimising their original estimate for the cross-entropy loss leads to overfitting. While the authors attribute this behaviour to the fact that the estimate can be negative, I believe the loss being negative is not problem per se (for example, substituting 0/1 loss with -100/-99 loss would not change the learning; similarly, this is not a problem for the losses considered in [Ishida'17]). I would rather attribute the problem to the fact that the proposed estimate is unbounded from below and there are no generalisation guarantees for it. Indeed, assuming there exists a training example that appears in the training set only once, with one complementary label, estimate (8) can be made arbitrary small by just training to predict probability 0 for the provided complementary label on that example ( and any non-zero probability for other classes).\", \"to cope with the above mentioned problem, the authors propose two heuristic-based modifications of the estimate, which are potentially biased. This weakens the initial motivation for finding an unbiased estimate and shifts the focus towards the experimental evaluation\", \"one of the mentioned motivations for unbiased estimates - being able to perform model selection on complementary labeled validation set - is not illustrated in the experiments\"], \"questions\": [\"I believe 1/(K-1) normalisation factor in (5) is not needed\", \"there seems to be a mistake in (9) (and its modifications later on) - I would expect either the subscript $j$ of the probability distribution in the last summand to be exchanged with $k$ in the loss, or a factor $\\\\pi_j/\\\\pi_k$ added\", \"also, I think there are some mistakes in subscripts in (11)\", \"what loss is the method from [Ishida'17] optimising in the experiments?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Clear paper on interesting setup, but claims are undermined by issues with first estimator + lack of motivation for assumptions\", \"review\": \"This paper proposes an improved approach to the \\\"complementary-label\\\" form of weak supervision, in which a label that is *not* the true label is marked. Specifically, this paper proposes an unbiased estimator that accepts arbitrary loss functions and models. Noting that this proposed estimator can suffer from overfitting due to unbounded negative loss, a lower-bounded estimator is proposed. 
Experiments are then performed on several image classification datasets.\", \"pros\": [\"This paper addresses a creative form of weak supervision, proposed by prior work, in which labels that are *not* the true label are labeled, in a clear fashion.\", \"The first proposed estimator is unbiased, as shown by a proof, and accepts arbitrary losses, an improvement over prior approaches\", \"The overall presentation is clear and clean\"], \"cons\": [\"One of the main claims of the paper is the proposal of an unbiased estimator. However, this estimator then does not seem to work well enough due to degenerate negative loss. So then a modified version is proposed- which does not appear to be unbiased? Either way, no assertion or proof of it being unbiased is given. So then presumably this also reverses the claim of being able to cross-validate? This seems like a major weakening of the paper's contributions\", \"Since the unbiased estimator does not appear to work well, two implementations of a corrected one are proposed, using heuristic approaches without explicit theoretical guarantees. This shifts the burden to the experimental studies. These are somewhat thorough, but not extremely so: for example, one set of hyperparameters were used for all of the methods? This seems like it could implicitly handicap / favor some over others?\", \"The proposed estimator is based on the assumption that the probability of classes in the complement set (the set of labels other than the one marked as incorrect) is uniformly distributed (e.g. see beginning of Proof of thm 1). However, this seems like a potentially naive assumption. Indeed, in the related work section, it is mentioned that work in 2018 already considered the case where this uniformity assumption does not hold.\", \"More broadly, but following from the above: The paper does not provide any real world examples, real or hypothetical, to give the reader an idea of whether the above uniformity assumption---or really any of these assumptions---are well-motivated or empirically justified. At the bottom of page 3 in the related work, a concrete application used in prior work is mentioned---where crowd workers are shown single labels and vote Y/N, leading to a mix of standard (if Y) and complement-labeled (if N) data---however this mixed setting is not considered explicitly in this paper. So, how is the reader supposed to get any idea of whether the assumed setup is motivated or justified? The experiments do not provide this, because the complementary labels are synthetically generated according to the model assumed in the paper. Additionally, it is briefly mentioned that collecting complementary labeled data is faster, but again no concrete examples are given to support this.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Well written paper about an interesting problem. The major problem is that the core part of the contribution is a special case of previously published more general framework not cited in the paper.\", \"review\": [\"pros:\", \"Clearly written and sound paper.\", \"Addresses interesting problem.\", \"Improves existing methods used for this learning scenario.\"], \"cons\": \"- The core contribution is a special case of previously published more general framework which is not cited in the paper.\\n\\nIt is clearly written paper with a good motivation. 
The major problem is that the core contribution, namely, the risk reformulation in Theorem 1 and the derived loss (6), are special cases of more general framework published in \\n Jesus Cid-Sueiro et al. Consistency of Losses for Learning from Weak Labels. ECML 2014.\\n\\nThe work of [Cid-Sueiro2014] proposes a general way how to construct losses for learning from weak labels. They require that the distribution of weak labels is a linear transformation of the true label distribution, i.e. the assumption (3) of the paper under review. According to [Cid-Sueiro2014], the loss on weak labels is constructed by $weak_loss = L*original_loss$, where $L$ is the left inversion of the \\\"mixing matrix\\\" $T$ in (3). [Cid-Sueiro2014] also shows that such weak loss is classification calibrated which implies statistical consistency of the method. \\n\\nLearning from complementary labels is a special case when the mixing matrix is $T=(E-I)/(K-1)$ (E is unitary matrix, I is matrix of ones, K is number of labels). In this case, the left inversion of $T$ is simply $L=- E*(K-1) + I$ and so the weak loss is $weak_loss=L*loss$ which corresponds to the loss (5) proposed in the paper under review (in fact, the loss (5) also adds a constant term (Y-2)/(Y-1) which however has no effect on the minimizer). \\n\\nThe novel part of the paper is the non-negative risk estimator proposed in sec 3.3 and the online optimization methods addressed in sec 3.4. These extensions, although relatively straightforward, are empirically shown to significantly improve the results.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
B1lnzn0ctQ
ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA
[ "Jialin Liu", "Xiaohan Chen", "Zhangyang Wang", "Wotao Yin" ]
Deep neural networks based on unfolding an iterative algorithm, for example, LISTA (learned iterative shrinkage thresholding algorithm), have been an empirical success for sparse signal recovery. The weights of these neural networks are currently determined by data-driven “black-box” training. In this work, we propose Analytic LISTA (ALISTA), where the weight matrix in LISTA is computed as the solution to a data-free optimization problem, leaving only the stepsize and threshold parameters to data-driven learning. This significantly simplifies the training. Specifically, the data-free optimization problem is based on coherence minimization. We show our ALISTA retains the optimal linear convergence proved in (Chen et al., 2018) and has a performance comparable to LISTA. Furthermore, we extend ALISTA to convolutional linear operators, again determined in a data-free manner. We also propose a feed-forward framework that combines the data-free optimization and ALISTA networks from end to end, one that can be jointly trained to gain robustness to small perturbations in the encoding model.
[ "sparse recovery", "neural networks" ]
https://openreview.net/pdf?id=B1lnzn0ctQ
https://openreview.net/forum?id=B1lnzn0ctQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1gphYh4xV", "HJen7j87JN", "HklQ7zppRQ", "ryxQcC8jp7", "rJey90hw6m", "r1xH4AhDpX", "rkxXkC3DTQ", "r1lP83nDpX", "SyxX7z5uhQ", "B1xca8C4nX", "HyltGkJFiQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545025973281, 1543887651611, 1543520795258, 1542315659105, 1542078087105, 1542077996560, 1542077914684, 1542077518816, 1541083675292, 1540839106209, 1540054800720 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1304/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1304/Authors" ], [ "ICLR.cc/2019/Conference/Paper1304/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1304/Authors" ], [ "ICLR.cc/2019/Conference/Paper1304/Authors" ], [ "ICLR.cc/2019/Conference/Paper1304/Authors" ], [ "ICLR.cc/2019/Conference/Paper1304/Authors" ], [ "ICLR.cc/2019/Conference/Paper1304/Authors" ], [ "ICLR.cc/2019/Conference/Paper1304/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1304/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1304/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This is a well executed paper that makes clear contributions to the understanding of unrolled iterative optimization and soft thresholding for sparse signal recovery with neural networks.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Solid contribution to unrolled iterative optimization and soft thresholding\"}", "{\"title\": \"Re: Rating has been upgraded\", \"comment\": \"[Opening is okay]\", \"points_1_and_2\": \"There is no word \\\"tree\\\" or \\\"graph\\\", no \\\"beta\\\" or \\\"$\\\\beta$\\\", in our paper. We are confused and think they may refer to another paper. Could you kindly clarify?\", \"3\": \"This is great suggestion. The matrix W is the solution of a convex quadratic program subject to linear constraints and, thereby, a linear system. Solving this system costs a negligible amount compared to training the remaining parameters. For example, when W is 250-by-500, computing W takes a few seconds but the remaining of ALISTA takes 1.5 hours. As you suggested, we will add this explanation and the complexity of computing W to the camera-ready version.\"}", "{\"title\": \"Rating has been upgraded\", \"comment\": \"Just some minor comments on the responses from the authors:\\n[Removed comments that were incorrectly included in this response]\\n* The complexity of the algorithm should also include that of the optimization that finds the matrix W, equation (16) in Stage 1.\"}", "{\"title\": \"Continued Response to Reviewer 3\", \"comment\": \"As you kindly suggested, we added two experiments to train the data-augmented version of ALISTA with 20 and 24 layers, to compare with the robust ALISTA model (the concatenation of a feed-forward encoder network that learns to solve the coherence minimization and a ALISTA network with step size and thresholds parameters).\\n\\nFor training ALISTA with data-augmentation, in each step, we first generate a batch of perturbed dictionaries \\\\tilde{D}s around an original dictionary D. Then these perturbed dictionaries are used to generate observations, by multiplying sparse vector samples from the same distribution. The data-augmented version of ALISTA is then trained with those dictionary-perturbed samples. 
It still follows the standard ALISTA to use a fixed weight matrix W that is analytically pre-solved from the original dictionary D.\\n\\nThe robust ALISTA model instead uses the encoder network to adaptively produce weight matrices to be used in ALISTA. Apart from the encoder network, the robust ALISTA needs to learn a set of step size and thresholds parameters just like the baseline ALISTA. We fix using a 16-layer ALISTA network and a 4-layer encoder in the robust ALISTA model.\\n\\nIn this experiment, we compare both models\\u2019 robustness to dictionary perturbations, by plotting recovery normalized MSEs (in dB) in testing, w.r.t. the standard deviation of perturbation noise, and also w.r.t. the layers used for data-augmented ALISTA. We set the maximal standard deviation of generated perturbations to 0.02 and followed the same settings described in Appendix E in the paper: \\n\\nSigma (standard deviation) | 0.0001 | 0.001 | 0.01 | 0.015 | 0.02 | 0.025\\nAugmented ALISTA T=16 | -26.58 | -25.87 | -15.49 | -11.71 | -8.84 | -6.74\\nAugmented ALISTA T=20 | -24.43 | -24.46 | -15.39 | -11.77 | -8.94 | -6.82\\nAugmented ALISTA T=24 | -24.12 | -24.00 | -15.45 | -11.68 | -8.81 | -6.70\\nRobust ALISTA T=16 | -62.47 | -62.41 | -62.02 | -61.50 | -60.67 | -45.00\\n\\n\\n- Observation: as we may see in the above results, more layers didn\\u2019t bring obvious empirical benefits to the recoverability of ALISTA. We could even observe that ALISTA of 24 layers had slightly worse NMSE that ALISTA of 16 and 20 layers.\\n\\n- Analysis: we agree with your insight that the limited parameter volume of augmented ALISTA might limited its capacity and robustness to recover from dictionary-perturbed measurements, compared to robust ALISTA which has another encoder network that adaptively and efficiently encodes the perturbed dictionary \\\\tilde{D} into new (dynamic) weight matrix \\\\tilde{W}. ALISTA only has two scalars to be learned in each layer (one scalar as step size and the other as threshold), therefore adding more layers do not enlarge the parameter volume significantly. \\n\\n- Remark: from the comparison, we could conclude that it takes more than adjusting step sizes and thresholds to gain robustness to dictionary perturbations in LISTA/ALISTA. Therefore, robust ALISTA makes the meaningful progress in creating an efficient encoder network, that can dynamically address the dictionary variations \\\\tilde{D} by always adjusting \\\\tilde{W}. Without incurring much higher complexity, robust ALISTA witness remarkable improvements over ALISTA, making it a worthy effort in advancing LISTA-type network research into the practical domain.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": [\"Thank you for your careful reading and comments!\", \"Q1: In our proofs, we take b as b=Ax*. If we add noise to the measurements, almost all the inequalities in the proof need to be modified. We will end up getting \\u201cconvergence\\u201d to a neighbor of x* with a size depending on the noise level. Such modifications also apply to the analysis for convolutional dictionaries. Numerically, figures 1(b), 1(c) and 1(d) depict the results of ALISTA under SNRs = 40dB, 30dB and 20dB, respectively.\", \"Q2: We basically agree with your comment on why data augmented TiLISTA and ALISTA are not performing as well as robust ALISTA. 
We are conducting the experiments that you have suggested and will update the results in comments once they become available, and also add them to the paper\\u2019s next update.\", \"Q3: Thanks for kindly pointing out our writing issues. We will carefully fix typos and use more proofreading.\"]}", "{\"title\": \"ALISTA pre-computes the weight matrix and only learns a series of threshold and step size parameters\", \"comment\": \"Thanks for your careful review and the comments! We have revised our paper and we believe our responses and revisions address your concerns. We would be very grateful if you would look over our paper again, and reconsider your opinion.\\n\\nLet us first provide a general response, followed by responses to your specific comments.\\n\\nThe goal of work is to significantly speed up sparse recovery. The basis of this line of work is ISTA (iterative soft-thresholding algorithm), a classic iterative method for recovering a sparse vector x from it linear measurements Dx, which are further contaminated by additive noise. Like most iterative methods, ISTA repeats the same operation (matrix-vector multiplications by D and D\\u2019 and a soft thresholding) at each iteration. Therefore, it can be written as a simple for-loop. However, depending on the problem condition, it can take hundreds of iterations or tens of thousands of iterations. Gregor & LeCun, 2010, instead of using the original matrices D and D\\u2019 and soft-thresholding scalars in ISTA, select a series of new matrices and scalars by training using a set of synthetic sparse signals and their linear measurements. The resulting method, called LISTA (or learned ISTA), has a small fixed number of iterations, roughly 20, and is not only much faster but recovers more accurate sparse vectors than ISTA even if ISTA runs order-of-magnitude more iterations. On the other hand, training LISTA takes a long time, typically ten hours or longer, much like training a neural network with lots of parameters. Also, one must train new matrices and scalars for each encoding matrix D. These shortcomings are addressed by a line of work that follows LISTA. \\n\\nThis paper introduces ALISTA, which significantly simplifies LISTA by using only one free matrix (besides the encoding matrix D) for all iterations, and pre-computing that matrix by analytic optimization, as opposed to data-driven training. Therefore, when it comes to training ALISTA, there remain only a series of scalars for thresholding and step sizes to be learned from synthetic data. Despite this huge simplification, the performance of ALISTA is no worse than LISTA and other work along the line, supported by our theoretical results and numerical verification. \\n\\nYour question on computational complexity is great. Let us compute how much saving in flops ALISTA has over LISTA or its variants. Assume there are K layers (i.e., iterations) in total, and the encoding matrix has N rows and M columns with N < M, possibly N << M. In its typical implementation, vanilla LISTA learns O(KM^2+K+MN) parameters. That is one matrix and one scalar per layer and another matrix shared between all layers. LISTA in Chen et al., 2018 (also (6) in this paper) learns O(KNM + K) parameters as they learn only one N-by-M matrix and one thresholding parameter per layer. Tied LISTA ((15) in this paper) learns only O(NM + K) parameters by using only one matrix for all the K layers plus a step size and a thresholding parameter per layer. 
ALISTA ((16) in this paper) learns only O(K) parameters because it determines the only matrix by analytic optimization and fixes it during training. All these methods achieve similar recover quality. We have added this comparison to the revised paper.\\n\\nThe model in the paper that you has mentioned, \\u201cDictionary Learning for Analysis-Synthesis Thresholding\\u201d, is related to our paper as a special LISTA model with only one layer. We have cited this and related papers (listed below) in Section 1 of our updated version and discussed their contributions. \\n\\nYang et al., 2016. \\u201cAnalysis-Synthesis Dictionary Learning for Universality-Particularity Representation Based Classification.\\u201d\"}", "{\"title\": \"Response to Reviewer 2 (Continued)\", \"comment\": \"Answers to individual comments:\\n\\n- Q1 (Intuition and feasibility of identifying \\\"good\\\" matrices; Definition 1):\", \"definition_1_describes_a_property_of_good_matrices\": \"small coherence with respect to D. This is inspired by Donoho & Elad, 2003; Elad, 2007; Lu et al, 2018. Our Theorem 1 validates this point: a small mutual coherence leads to a large c and faster convergence. Feasibility is proved in (Chen et al., 2018). We have added these clarifications in our update.\\n \\n- Q1 (Clarification of Definition 2): \\nBecause W and D are both \\u201cfat\\u201d matrices, the product W\\u2019D, and such products of their submatrices consisting of two or more their corresponding columns, generally cannot be very close to the identity matrix. For a given D, Definition 2 let sigma_min represent the minimal \\u201cdistance\\u201d and define the set of corresponding W matrices. A larger sigma_min implies slower convergence in Theorem 2. We have added numerical validations of (11) to the appendix in the update. (The original definition (12) is (11) in the updated version.)\\n\\n- Q2 (Difference between the maximum entry \\\"norm\\\" and the Frobenius norm): \\nWe use a Frobenius norm in (16) instead of a sup-norm in Def. 1 (8) for computational efficiency. Directly minimizing the sup norm leads to a large-scale linear program. The sizes of the matrices W and D that we used in our numerical experiments are 250 by 500. We implemented an LP solver for the sup-norm minimization (8) based on Gurobi, which requires more than 8GB of memory and may be intractable on a typical PC. However, solving (16) in MATLAB needs only around 10MB of memory and a few seconds. Besides the Frobenius norm, we also tried to minimize the L_{1,1} norm but found no advantages. (The original formula (17) is (16) in the update.)\\n\\n- Q3 (Definition 3): \\nBy (6), x^k depends on thresholding parameters theta^0, theta^1, ..., theta^{k-1}. When these theta parameters are large enough, x^k can be sufficiently sparse. Theorem 1 implies we can ensure \\u201csupport(x^k) belongs to S\\u201d for all k by properly choosing the theta^k sequence.\\n\\n- Q4 (How is gamma learned): \\nThe step sizes gamma^k and thresholds theta^k (for all k) are updated to minimize the empirical recovery loss in (5), using the standard training method based on backpropagation and the Adam method. For ALISTA, the big Theta in (5), which is the set of parameters subject to learning, consists of only gammas and thetas. The matrix W is pre-computed by analytic optimization and, therefore, is fixed during training.\\n\\n- Q5 (The notation in Section 3): \\nThe lowercase letters are always vectors. 
The matrices D_{conv,m} are defined so that (18), which is precise but complicated, is equivalent to (19), which is simple and compact. The full definition of D_{conv,m} is given in Appendix C.2. The matrices W_{conv,m} are defined for a similar purpose before (21). We have added these clarifications in the updated version. (The original formula (20) is (19) in the current version.)\\n\\n- Q6 (Transpose in convolution): \\nTransposing a circulant matrix is equivalent to applying the convolution with rotated filters (Equation (6) and Footnote 2 in Chalasani et al., 2013). We have made clarifications in the update. \\n\\n- Q7 & Q8 & Q9 (Typos and figure suggestions): \\nThanks for finding the typos and making suggestions for figures. We have fixed the typos and will carefully proofread our paper. \\n\\n- Q10 (\\u201cI do not think Figure 2(b) verifies Theorem 1\\u201d): \\nWe agree that we incorrectly used the words \\\"verify\\\" and \\\"validation.\\\" Rather, the numerical observations in Figure 2(b) justify our choices of parameters in Theorem 1. We have made this correction.\\n\\n- Q11 (Figure 3): \\nWe agree that the number and proportion of false alarms are a more straightforward performance metric. However, they are sensitive to the threshold. We found that, although using a smaller threshold leads to more false alarms, the final recovery quality is better and those false alarms have small magnitudes and are easy to remove by thresholding during post-processing. That's why we chose to show their magnitudes, implying that we get easy-to-remove false alarms. We have added this reasoning to the final version.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your careful reading and kindly identifying the typos in our paper! We will fix these typos and meticulously proofread our article.\"}", "{\"title\": \"important theoretical contribution to unrolling literature\", \"review\": \"The paper raises many important questions about unrolled iterative optimization algorithms, and answers many questions for the case of iterative soft thresholding algorithm (ISTA, and learned variant LISTA). The authors demonstrate that a major simplification is available for the learned network: instead of learning a matrix for each layer, or even a single (potentially large) matrix, one may obtain the matrix analytically and learn only a series of scalars. These simplifications are not only practically useful but allow for theoretical analysis in the context of optimization theory. On top of this seminal contribution, the results are extended to the convolutional-LISTA setting. Finally, yet another fascinating result is presented, namely that the analytic weights can be determined from a Gaussian-perturbed version of the dictionary. 
Experimental validation of all results is presented.\\n\\nMy only constructive criticism of this paper are a few grammatical typos, but specifically the 2nd to last sentence before Sec 2.1 states the wrong thing \\\"In this way, the LISTA model could be further significantly simplified, without little performance loss\\\"\\n...\\nit should be \\\"with little\\\".\", \"rating\": \"10: Top 5% of accepted papers, seminal paper\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"ALISTA - Review\", \"review\": [\"The paper describes ALISTA, a version of LISTA that uses the dictionary only for one of its roles (synthesis) in ISTA and learns a matrix to play the other role (analysis), as seen in equations (3) and (6). The number of matrices to learn is reduced by tying the different layers of LISTA together.\", \"The motivation for this paper is a little confusing. ISTA, FISTA, etc. are algorithms for sparse recovery that do not require training. LISTA modified ISTA to allow for training of the \\\"dictionary matrix\\\" used in each iteration of ISTA, assuming that it is unknown, and offering a deep-learning-based alternative to dictionary learning. ALISTA shows that the dictionary does not need to change, and fewer parameters are used than in LISTA, but it still requires learning matrices of the same dimensionality as LISTA (i.e., the reduction is in the constant, not the order). If the argument that fewer parameters are needed is impactful, then the paper should discuss the computational complexity (and computing times) for training ALISTA vs. the competing approaches.\", \"There are approaches to sparse modeling that assume separate analysis and synthesis dictionaries (e.g., Rubinstein and Elad, \\\"Dictionary Learning for Analysis-Synthesis Thresholding\\\"). A discussion of these would be relevant in this paper.\", \"The intuition and feasibility of identifying \\\"good\\\" matrices (Defs. 1 and 2) should be detailed. For example, how do we know that an arbitrary starting W belongs in the set (12) so that (14) applies?\", \"Can you comment on the difference between the maximum entry \\\"norm\\\" used in Def. 1 and the Frobenius norm used in (17)?\", \"Definition 3: No dependence on theta(k) appears in (13), thus it is not clear how \\\"as long as theta(k) is large enough\\\" is obtained.\", \"How is gamma learned (Section 2.3)?\", \"The notation in Section 3 is a bit confusing - lowercase letters b, d, x refer to matrices instead of vectors. In (20), Dconv,m(.) is undefined; later Wconv is undefined.\", \"For the convolutional formulation of Section 3, it is not clear why some transposes from (6) disappear in (21).\", \"In Section 3.1, \\\"an efficient approximated way\\\" is an incomplete sentence - perhaps you mean \\\"an efficient approximation\\\"?. Before (25), Dconv should be Dcir? 
The dependence on d should be more explicitly stated.\", \"Page 8 typo \\\"Figure 1 (a) (a)\\\".\", \"Figure 2(a): the legend is better used as the label for the y axis.\", \"I do not think Figure 2(b) verifies Theorem 1; rather, it verifies that your learning scheme gives parameter values that allow for Theorem 1 to apply (which is true by design).\", \"Figure 3: isn't it easier to use metrics from support detection (false alarm/missed detection proportions given by the ALISTA output)?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"ALISTA: Analytic Weights Are As Good As Learned Weights in LISTA\", \"review\": \"The papers studies neural network-based sparse signal recovery, and derives many new theoretical insights into the classical LISTA model. The authors proposed Analytic LISTA (ALISTA), where the weight matrix in LISTA is pre-computed with a data-free coherence minimization, followed by a separate data-driven learning step for merely (a very small number of) step-size and threshold parameters. Their theory is extensible to convolutional cases. The two-stage decomposed pipeline was shown to keep the optimal linear convergence proved in (Chen et al., 2018). Experiments observe that ALISTA has almost no performance loss compared to the much heavier parameterized LISTA, in contrast to the common wisdom that (brutal-force) \\u201cend-to-end\\u201d always outperforms stage-wise training. Their contributions thus manifest in both novel theory results, and the practical impacts of simplifying/accelerating LISTA training. Besides, they also proposed an interesting new strategy called Robust ALISTA to overcome the small perturbations on the encoding basis, which also benefits from this decomposed problems structure.\\n\\nThe proofs and conclusions are mathematically correct to my best knowledge. I personally worked on similar sparse unfolding problems before so this work looks particularly novel and interesting to me. My intuition then was that, it should not be really necessary to use heavily parameterized networks to approximate a simple linear sparse coding form (LISTA idea). Similar accelerations could have been achieved with line search for something similar to steepest descent (also computational expensive, but need learn step-sizes only, and agnostic to input distribution). Correspondingly, there should exist a more elegant network solution with very light learnable weights. This work perfectly coincides with the intuition, providing very solid guidance on how a LISTA model could be built right. Given in recent three years, many application works rely on unfold-truncating techniques (compressive sensing, reconstruction, super resolution, image restoration, clustering\\u2026), I envision this paper to generate important impacts for practitioners pursuing those ideas. \\n\\nAdditionally, I like Theorem 3 in Section 3.1, on the provable efficient approximation of general convolution using circular convolution. It could be useful for many other problems such as filter response matching. \\n\\nI therefore hold a very positive attitude towards this paper and support for its acceptance. Some questions I would like the authors to clarify & improve in revision:\\n\\n1.\\tEqn (7) assumes noise-free case. 
The author stated \\u201cThe zero-noise assumption is for simplicity of the proofs.\\u201d Could the authors elaborate which part of current theory/proof will fail in noisy case? If so, can it be overcome (even by less \\u201csimpler\\u201d way)? How about convolutional case, the same? Could the authors at least provide some empirical results for ALISTA\\u2019s performance under noise?\\n\\n2.\\tSection 5.3. It is unclear to me why Robust ALISTA has to work better than the data augmented ALISTA. Is it potentially because that in the data augmentation baseline, the training data volume is much amplified, and one ALISTA model might become underfitting? It would be interesting to create a larger-capacity ALISTA model (e.g., by increasing unfolded layer numbers), train it on the augmented data, and see if it can compare more favorably against Robust ALISTA?\\n\\n3.\\tThe writeup is overall very good, mature, and easy to follow. But still, typos occur from time to time, showing a bit rush. For example, Section 5.1, \\u201cthe x-axes denotes is the indices of layers\\u201d should remove \\u201cis\\u201d. Please make sure more proofreading will be done.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
rygnfn0qF7
Language Model Pre-training for Hierarchical Document Representations
[ "Ming-Wei Chang", "Kristina Toutanova", "Kenton Lee", "Jacob Devlin" ]
Hierarchical neural architectures can efficiently capture long-distance dependencies and have been used for many document-level tasks such as summarization, document segmentation, and fine-grained sentiment analysis. However, effective usage of such a large context can be difficult to learn, especially in the case where there is limited labeled data available. Building on the recent success of language model pretraining methods for learning flat representations of text, we propose algorithms for pre-training hierarchical document representations from unlabeled data. Unlike prior work, which has focused on pre-training contextual token representations or context-independent sentence/paragraph representations, our hierarchical document representations include fixed-length sentence/paragraph representations which integrate contextual information from the entire document. Experiments on document segmentation, document-level question answering, and extractive document summarization demonstrate the effectiveness of the proposed pre-training algorithms.
[ "language model", "document segmentation", "algorithms", "hierarchical document representations", "representations", "dependencies", "many", "tasks" ]
https://openreview.net/pdf?id=rygnfn0qF7
https://openreview.net/forum?id=rygnfn0qF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1eKXHHlgN", "rylFqMMiAm", "HJxN7zGjCQ", "HJxdkzGs0Q", "H1xO5rV56Q", "SkegnEVq6m", "HkebYmVca7", "rJewk1Bphm", "rygPes5cn7", "SyeacLV537" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544733984861, 1543344785298, 1543344668214, 1543344607856, 1542239632410, 1542239399596, 1542239096671, 1541390046659, 1541217007078, 1541191317497 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1303/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1303/Authors" ], [ "ICLR.cc/2019/Conference/Paper1303/Authors" ], [ "ICLR.cc/2019/Conference/Paper1303/Authors" ], [ "ICLR.cc/2019/Conference/Paper1303/Authors" ], [ "ICLR.cc/2019/Conference/Paper1303/Authors" ], [ "ICLR.cc/2019/Conference/Paper1303/Authors" ], [ "ICLR.cc/2019/Conference/Paper1303/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1303/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1303/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes to pre-train hierarchical document representations for use in downstream tasks. All reviewers agreed that the results were reasonable.\\n\\nHowever, the methodological novelty is limited. While I believe there is a place for solid empirical results, even if not incredibly novel, there is also little qualitative or quantitative analysis to shed additional insights.\\n\\nGiven the high quality bar for ICLR, I can't recommend the paper for acceptance at this time.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reasonable results, limited novelty\"}", "{\"title\": \"Paper updated\", \"comment\": [\"We have updated the paper and addressed several issues pointed out by the reviewers.\", \"We added comparisons to ELMo-Pool and skip-thought vectors on all tasks, and our model out-performs these prior methods pretraining sentence embeddings. Note that we reran the segmentation model with ELMo-Pool with different hyper-parameters, and got slightly better results than before. However, it is still under-performs our models.\", \"We improved the baselines (especially the L+R LSTM) and we reran all summarization experiments with models using an additional LSTM layer.\", \"We update and include some more recent results on the summarization tasks.\", \"For the future revision, we plan to include more experimental details (which were included in our prior response) in the appendix.\"]}", "{\"title\": \"Paper updated\", \"comment\": [\"We have updated the paper and addressed several issues pointed out by the reviewers. Specifically, we add more comparisons for the segmentation experiments as suggested.\", \"We added comparisons to ELMo-Pool and skip-thought vectors on all tasks, and our model out-performs these prior methods pretraining sentence embeddings. Note that we reran the segmentation model with ELMo-Pool with different hyper-parameters, and got slightly better results than before. 
However, it is still under-performs our models.\", \"We improved the baselines (especially the L+R LSTM) and we reran all summarization experiments with models using an additional LSTM layer.\", \"We update and include some more recent results on the summarization tasks.\", \"For the future revision, we plan to include more experimental details (which were included in our prior response) in the appendix.\"]}", "{\"title\": \"Paper updated\", \"comment\": [\"We have updated the paper and addressed several issues pointed out by the reviewers. Specifically, we add more comparisons to Elmo-Pool and skip-thought as suggested.\", \"We added comparisons to ELMo-Pool and skip-thought vectors on all tasks, and our model out-performs these prior methods pretraining sentence embeddings. Note that we reran the segmentation model with ELMo-Pool with different hyper-parameters, and got slightly better results than before. However, it is still under-performs our models.\", \"We improved the baselines (especially the L+R LSTM) and we reran all summarization experiments with models using an additional LSTM layer.\", \"We update and include some more recent results on the summarization tasks.\", \"For the future revision, we plan to include more experimental details (which were included in our prior response) in the appendix.\"]}", "{\"title\": \"Initial Response\", \"comment\": [\"We thank Reviewer 2 for the valuable comments. We will address the clarification suggestions in detail in an appendix in the paper and also provide brief explanations below.\", \"In the document segmentation task experiments, the dataset is sampled from the same set of articles used for pretraining. However, the labels for the segmentation task (section boundaries) are never used during pretraining.\", \"We pre-trained our representations using a sentence-based corpus, containing articles represented as lists of sentences. For passage retrieval, we directly apply the same model on the passages without re-training the models. The whole model is then fine-tuned using the passage retrieval labeled data.\", \"Other features for passage retrieval: these features were computed using the software \\\\url{https://github.com/allenai/document-qa} of Clark & Gardner and are defined as follows: cosine similarity of tf-idf representations of passage and question, logarithm of scaled position of first word in the paragraph, indicator function of whether this is the first paragraph in the document, number of non-stop words in the paragraph that appear in the question, number of paragraph words that appear in the question but with a different case or that are stop-words.\", \"Currently we always select 3 sentences in the extractive summarization task. All of our tasks use the same pretrained model trained on Wikipedia.\", \"(Compare local setting with ELMo) In fact, the local setting is quite similar to ELMo, as the existing ELMo models have been trained on documents with shuffled sentences which encourages the model to ignore the external context. In addition, the use of truncated back-propagation (typically at 20 words to enable efficient training) limits the ability of the model to learn long-distance dependencies. We will add discussion in the paper.\", \"(More comparisons) We performed an additional experiment using ELMo-pool on document segmentation. For document segmentation, using fixed ELMo-pool representations as block features for a document-level LSTM results in 42.3 F1. 
This is significantly lower than the 54.9 by L+R-LM and 51.9 by Global-MLM. We are working on evaluating ELMo-pool on summarization as well.\"]}", "{\"title\": \"Initial Response\", \"comment\": \"We thank Reviewer 3 for the valuable comments.\\n\\n[Overall] We would like to clarify the paper and point out that the main novelty of the paper is to ask the research question: \\u201cwhat is the value of pretraining document-level hierarchical models with document-level context?\\u201d While language model pre-training has been studied before, language model pretraining on document-level context has not been studied extensively. \\n\\n[1] While objective functions such as left-to-right LM next word prediction or missing word prediction have been proposed before, no prior work has applied them to pretrain hierarchical document-level models. Unlike prior work like word2vec or Collobert et al.-11 that used missing word prediction to pretrain only context-independent word embeddings, we used this and uni-directional LM objectives to pre-train millions of parameters of a hierarchical document-level representation which contextualizes text segment representations with respect to thousands of tokens in the document.\\n \\n[2] (Document segmentation task) We performed an additional experiment using ELMo-pool. Using fixed ELMo-pool representations as block features for a document-level LSTM results in 42.3 F1 on the document segmentation task. This is significantly lower than the 54.9 by L+R-LM and 51.9 by Global-MLM. \\n\\n[3] Note that, unlike other pre-training methods for document classification tasks, our model does *not* generate a single vector for an input document; this is why we did not apply the model to document classification/retrieval tasks. The focus of the paper is on pretraining hierarchical representations. The state-of-the-art models for segmentation and document summarization use hierarchical models, because they require representations of individual sentences or paragraphs in document-level context. Our focus is on improving such contextual representations and thus choose these tasks to evaluate the effectiveness of our approach.\"}", "{\"title\": \"Initial Response\", \"comment\": \"We thank Reviewer 1 for the valuable comments and will update the paper to address the comments.\", \"the_novelty_of_our_paper_can_be_summarized_into_three_points\": [\"While language model pre-training has been studied before, language model pre-training on document-level context has not been studied extensively.\", \"We extend the language model pre-training framework to learning representations of thousands of tokens through hierarchical models. Previous work has pre-trained non-hierarchical representations of at most hundreds of tokens through language model pre-training.\", \"We compare the effectiveness of combining pre-trained uni-directional representations versus pre-training bidirectional representations directly, which has not been done before.\"]}", "{\"title\": \"Reasonable method, but not too much novelty\", \"review\": \"Reasonable method, but not too much novelty\\n\\n[Summary]\\n\\nThe paper proposed techniques to pretrain two-layer hierarchical bi-directional or single-directional LSTM networks for language processing tasks. In particular, the paper uses the word prediction, either for the next work or randomly missing words, as the self-supervised pretraining tasks. 
The main idea is to not only train text embedding using context from the same sentence but also take the embedding of the surrounding sentences into account, where the sentence embedding is also context-aware. Experiments are done for document segmentation, answer passage retrieval, extractive document summary.\\n\\n[Pros]\\n\\n1.\\tThe idea of considering across-sentence/paragraph context for text embedding learning is very reasonable. \\n2.\\tThe random missing-word completion is also a reasonable self-supervised learning task. \\n3.\\tThe results are consistently encouraging across all three task. And the performance for \\u201canswer passage retrieval\\u201d is especially good. \\n\\n[Cons]\\n\\n1.\\tThe ideas of predicting the next word (L+R-LM) or missing words (mask-LM) have been around and widely used for a long time. Apply this idea to an two-layer hierarchical LSTM is a straightforward extension of this existing idea.\\n2.\\tFor document segmentation, no comparison with other methods is provided. For extractive document summary, the performance difference between the proposed method and the previous methods are very minor.\\n3.\\tImportantly, the experiments can be stronger if the learned embedding can be successfully applied to more fundamental tasks, such as document classification and retrieval. \\n\\nOverall, the paper proposed a reasonable method, but the significance of the paper can be better justified by more solid experiments.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good incremental work showing the value of pretraining\", \"review\": \"Summary:\\nThis paper proposes to extend the pretraining used for word representations in QA (e.g., ELMO) in the following sense: Instead of just predicting next/previous words in a sentence/paragraph, performing a hierarchical prediction over the whole document, by having a local LSTM and a global LSTM as presented in Fig. 1 + the idea of masked language model. Authors show meaningful improvements in 3 tasks that require document level understanding: extractive summarization, document segmentation, and answer passage retrieval for doc level QA.\", \"pros\": [\"Good presentation and clear explanations.\", \"Meaningful improvements in various tasks requiring document level understanding.\"], \"cons\": [\"Novelty is mainly incremental\"], \"minor_comment\": \"- Use a bigger picture for Fig. 1\\n- In page 1, Introduction, paragraph 2, line 10, \\\"due the long-distance ...\\\" ==> \\\"due to the long-distance ...\\\"\\n\\n**********\\nI would like to thank authors for their feedback. After reading their feedback I still believe that novelty is incremental and would like to keep my score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good contribution with a few missing baselines and implementation details\", \"review\": \"In this work, the authors explore different ways to pre-train contextualized word and sentence representations for use in other tasks. They propose two main methods: a straight-forward extension of the ElMO model for hierarchical uni-directional language models, and a de-noising auto-encoder type method which allows to train bi-directional representations. 
The learned contextual representations are evaluated on three downstream tasks, demonstrating the superiority of the bi-directional training setting, and beating strong baselines on extractive summarization.\n\nThe method is clearly presented and easy to follow, and the experiments do seem to support the author's claims, but their exposition misses several important details (or could be presented more clearly). For the document segmentation task, are the articles taken from a held-out set, or are they contained in the pre-training set? For passage retrieval, is the representation the same or are the representations re-trained from scratch using paragraph blocks? What exactly are the other features (those can go in the appendix)? And for the extractive summarization task, how many sentences are selected? Is pre-training also done on Wikipedia, or are those representations trained on news text?\n\nA comparison to non-contextualized sentence representations would also be welcome (SkipThought, InferSent, ElMO-pool for settings other than passage retrieval). Note also that the local pre-training is not equivalent to ElMO, as the latter sees context from the whole document rather than just the current sentence.\n\nIt is interesting to see that contextualized sentence representations can be used and that the Mask-LM objective yields better results than L+R-LM, but these points would be better made if the above questions were answered.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
Skl3M20qYQ
Non-Synergistic Variational Autoencoders
[ "Gonzalo Barrientos", "Sten Sootla" ]
Learning disentangled representations of the independent factors of variation that explain the data in an unsupervised setting is still a major challenge. In the following paper we address the task of disentanglement and introduce a new state-of-the-art approach called Non-synergistic variational Autoencoder (Non-Syn VAE). Our model draws inspiration from population coding, where the notion of synergy arises when we describe the information encoded by neurons in the form of responses to the stimuli. If those responses convey more information together than separately as independent sources of encoding information, they are acting synergistically. By penalizing the synergistic mutual information within the latents we encourage information independence and by doing so disentangle the latent factors. Notably, our approach can be added to the VAE framework easily, where the new ELBO function is still a lower bound on the log likelihood. In addition, we qualitatively compare our model with Factor VAE and show that the latter implicitly minimises the synergy of the latents.
[ "vae", "unsupervised learning" ]
https://openreview.net/pdf?id=Skl3M20qYQ
https://openreview.net/forum?id=Skl3M20qYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJlTO5ZIlV", "HJe7r3bRhm", "BJx7Hedv3X", "BklcjjTljm" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1545112180595, 1541442619150, 1541009467203, 1539525537559 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1302/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1302/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1302/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1302/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper introduces a form of variational auto encoder for learning disentangled representations. The idea is to penalise synergistic mutual information. The introduction of concepts from synergy to the community is appreciated.\\n\\nAlthough the approach appears interesting and forward looking in understanding complex models, at this point the paper does not convince on the theoretical nor on the experimental side. The main concepts used in the paper are developed elsewhere, the potential value of synergy is not properly examined. \\n\\nThe reviewers agree on a not so positive view on this paper, with ratings either ok, but not good enough, or clear rejection. There is a consensus that the paper needs more work.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting approach, but more work needed on theory and experiments\"}", "{\"title\": \"Interesting perspective, but not strong enough results\", \"review\": \"The paper proposes a new objective function for learning disentangled representations in a variational framework, building on the beta-VAE work by Higgins et al, 2017. The approach attempts to minimise the synergy of the information provided by the independent latent dimensions of the model. Unfortunately, the authors do not properly evaluate their newly proposed Non-Syn VAE, only providing a single experiment on a toy dataset and no quantitative metric results. Furthermore, even qualitatively the proposed model is shown to perform no better than the existing factor-VAE baseline.\\n\\nI commend the authors for taking a multi-disciplinary perspective and bringing the information synergy ideas to the area of unsupervised disentangled representation learning. However, the resulting Non-Syn VAE objective function is effectively a different derivation of the original beta-VAE objective. If the authors want to continue with the synergy minimisation approach, I would recommend that they attempt to use it as a novel interpretation of the existing disentangling techniques, and maybe try to develop a more robust disentanglement metric by following this line of reasoning. Unfortunately, in the current form the paper is not suitable for publication.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Important topic, but lack of experiments\", \"review\": \"The authors aim at training a VAE that has disentangled latent representations in a \\\"synergistically\\\" maximal way.\\nFor this they use one (of several possible) versions of synergy defintions and create a straight forward penalization term for a VAE objective (roughly the whole mutual information minus the maximum mutual information of its parts).\\nThey train this VAE on one dataset, namely dsprites, and compare it to a VAE with total correlation penalization. \\n\\nThe paper is well written and readable. 
The idea of using synergy is an important step forward in understanding complex models. The concept of synergy has great potential in machine learning and is highly relevant.\\n\\nThe main concepts of synergy are not developed in this paper and the used penalization term is straight forward.\\nThe number of experiments conducted and comparisons done is quite limited. Also the potential of synergy is not really demonstrated, e.g. for representation learning, causality, etc., and appears here ad hoc. \\nAlso why one should use the authors' suggested penalization term instead of total correlation is not discussed, nor demonstrated as they perform similarly on both disentanglement and synergy loss.\\n\\nI hope the authors find more relevant applications or data sets in the future to demonstrate the importance of synergy.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Far off being ready for publication\", \"review\": \"This paper proposes a new approach to enforcing disentanglement in VAEs using a term that penalizes the synergistic mutual information between the latent variables, encouraging representations where any given piece of information about a datapoint can be garnered from a single latent. In other words, representations where there is no information conveyed by combinations of latents that is not conveyed by considering each latent in isolation. As the resultant target is intractable to evaluate, a number of approximations are employed for practical training.\\n\\nThe high-level idea is quite interesting, but the paper itself is quite a long way of convincing me that this is actually a good approach. Moreover, the paper is a long way of the level of completeness, rigor, clarity, and polish that is required to seriously consider it for publication. In short, the work is still at a relatively early stage and a lot more would need to be done for it to attain various minimum standards for acceptance. A non-exhaustive list of specific examples of its shortfalls are given below.\\n\\n1. The paper is over a page and a half under length, despite wasting large amounts of space (e.g. figures 3 and 4 should be two lines on the same plot)\\n\\n2. The experimental evaluation is woefully inadequate. The only quantitative assessment is to compare to a single different approach on a single toy dataset, and even then the metric being used is the one the new method uses to train for making it somewhat meaningless.\\n\\n3. The introduction is completely generic and says nothing about the method itself, just providing a (not especially compelling) motivation for disentanglement in general. In fact, the motivation of the introduction is somewhat at odds with the work -- correctly talking about the need for hierarchical representations which the approach actually actively discourages.\\n\\n4. There are insufficient details on the algorithm itself in terms of the approximations that are made to estimate the synergistic mutual information. These are mostly glossed over with only a very short explanation in the paragraph after equation 15. Yes there are algorithm blocks, but these are pretty incomprehensible and lack accompanying text. In particular, I cannot understand what A_w is supposed to be. This is very important as I suspect the behavior of the approximation is very different to the true target. Similarly, it would be good to provide more insight into the desired target (i.e. Eq 15). 
For example, I suspect that it will encourage a mismatch between the aggregate posterior and prior by encouraging higher entropy on the former, in turn causing samples from the generative model to provide a poor match to the data.\\n\\n5. The repeated claims of the approach and results being \\\"state-of-the-art\\\" are cringe-worthy bordering on amusing. Writing like this serves no purpose even when it justified, and it certainly is not here.\\n\\n6. There are a lot of typos throughout and the production values are rather poor. For example, the algorithm blocks which are extremely messy to the point where they are difficult to follow, citep/citet mistakes occur almost every other citation, there is a sign error in Equation 16.\\n\\n\\nThis is a piece of work in an exciting research area that, with substantial extra work, could potentially result in a decent paper due to fact that the core idea is simple and original. However, it is a long way short of this in its current state. Along with addressing the specific issues above and improving the clarity of the work more generally, one thing in particular that would need to address in a resubmission is a more careful motivation for the method (ideally in the form of a proper introduction). \\n\\nThough I appreciate this is a somewhat subjective opinion, for me, penalizing the synergistic information is probably actually a bad thing to do when taking a more long-term view on disentanglement. Forcing simplistic representations where no information is conveyed through the composition of latents beyond that they provide in isolation is all well and good for highly artificial and simplistic datasets like dsprites, but is clearly not a generalizable approach for larger datasets where no such simplistic representation exists. As you say in the first line of your own introduction, hierarchy and composition are key parts of learning effective and interpretable representations and this is exactly what you are discouraging. A lot of the issue here is one of the disentanglement literature at large rather than this paper (though I do find it to be a particularly egregious offender) and it is fine to have different opinions. However, it is necessary to at least make a sensible case for why your approach is actually useful. \\n\\nNamely, is there actually any real applications where such a simplistic disentanglement is actually useful? Is there are anyway the current works helps in the longer vision of achieving interpretable representations? When and why is the synergistic information a better regularizer than, for example, the total correlation? The experiments you have do not make any inroads to answering these questions and there are no written arguments of note to address them. I am not trying to argue here that there isn't a good case to be made for the suggested approach in the context of these questions (though I am suspicious), just that if the work is going to have any lasting impact on the community then it needs to at least consider them.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
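Reviewer 2 above summarises the proposed penalty as roughly the whole mutual information minus the maximum mutual information of its parts, added on top of a VAE objective. The sketch below only illustrates that shape of objective: the per-dimension analytic KL term is used as a crude stand-in for the paper's (intractable) synergistic mutual information, and the network sizes, the weight lam, and all names are invented for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Minimal VAE for flat binary inputs (e.g. 64x64 dSprites images)."""

    def __init__(self, x_dim=4096, z_dim=10, h_dim=256):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

def loss_with_synergy_proxy(model, x, lam=1.0):
    logits, mu, logvar = model(x)
    recon = F.binary_cross_entropy_with_logits(
        logits, x, reduction="none").sum(-1)
    # Analytic per-dimension KL(q(z_i|x) || N(0,1)): a crude proxy for how
    # much information each individual latent carries about x.
    kl_i = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1)   # (batch, z_dim)
    kl = kl_i.sum(-1)
    # "Synergy"-style proxy: joint contribution minus the largest
    # single-latent contribution; penalising it pushes information into
    # individual latents rather than into combinations of them.
    synergy_proxy = kl - kl_i.max(dim=-1).values
    return (recon + kl + lam * synergy_proxy).mean()

model = TinyVAE()
x = (torch.rand(8, 4096) > 0.9).float()   # stand-in binary batch
loss = loss_with_synergy_proxy(model, x)
loss.backward()
print("loss:", float(loss))
```

With lam = 0 this reduces to a plain VAE objective; replacing the last term with a total-correlation penalty would give a FactorVAE-style baseline, which is the comparison the reviews ask to see justified.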
HklnzhR9YQ
Approximation and non-parametric estimation of ResNet-type convolutional neural networks via block-sparse fully-connected neural networks
[ "Kenta Oono", "Taiji Suzuki" ]
We develop new approximation and statistical learning theories of convolutional neural networks (CNNs) via the ResNet-type structure where the channel size, filter size, and width are fixed. It is shown that a ResNet-type CNN is a universal approximator and its expressive ability is no worse than that of fully-connected neural networks (FNNs) with a \textit{block-sparse} structure, even if the size of each layer in the CNN is fixed. Our result is general in the sense that we can automatically translate any approximation rate achieved by block-sparse FNNs into one achieved by CNNs. Thanks to the general theory, it is shown that learning on CNNs satisfies optimality in approximation and estimation for several important function classes. As applications, we consider two types of function classes to be estimated: the Barron class and the H\"older class. We prove that the clipped empirical risk minimization (ERM) estimator can achieve the same rate as FNNs even when the channel size, filter size, and width of the CNNs are constant with respect to the sample size. This is minimax optimal (up to logarithmic factors) for the H\"older class. Our proof is based on sophisticated evaluations of the covering number of CNNs and a non-trivial parameter rescaling technique to control the Lipschitz constant of the constructed CNNs.
[ "CNN", "ResNet", "learning theory", "approximation theory", "non-parametric estimation", "block-sparse" ]
https://openreview.net/pdf?id=HklnzhR9YQ
https://openreview.net/forum?id=HklnzhR9YQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SJglKRzrg4", "SJxraILVRQ", "rJxXsq8FTX", "Skgxv9UK6m", "H1l4r5IKpQ", "r1evGqLtpQ", "HJxRgdIKaX", "HJeo4wAshX", "rJxsSvvcnQ", "ryxrITy5nm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545051767579, 1542903484948, 1542183578948, 1542183511834, 1542183484172, 1542183438828, 1542182902281, 1541297971325, 1541203778913, 1541172556893 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1301/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1301/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1301/Authors" ], [ "ICLR.cc/2019/Conference/Paper1301/Authors" ], [ "ICLR.cc/2019/Conference/Paper1301/Authors" ], [ "ICLR.cc/2019/Conference/Paper1301/Authors" ], [ "ICLR.cc/2019/Conference/Paper1301/Authors" ], [ "ICLR.cc/2019/Conference/Paper1301/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1301/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1301/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents an interesting treatment of transforming a block-sparse fully connected neural networks to a ResNet-type Convolutional Network. Equipped with recent development on approximations of function classes (Barron, Holder) via block-sparse fully connected networks in the optimal rates, this enables us to show the equivalent power of ResNet Convolutional Nets.\\n\\nThe major weakness in this treatment lies in that the ResNet architecture for realizing the block-sparse fully connected nets is unrealistic. It originates from the recent developments in approximation theory that transforming a fully connected net into a convolutional net via Toeplitz matrix (operator) factorizations. However the convolutional nets or ResNets obtained in this way is different to what have been used successfully in applications. Some special properties associated with convolutions, e.g. translation invariance and local deformation stability, are not natural in original fully connected nets and might be indirect after such a treatment. \\n\\nThe presentation of the paper is better polished further. Based on ratings of reviewers, the current version of the paper is on borderline lean reject.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting transformation of block-sparse fully connected net to ResNet Convolutional blocks, yet the ResNet architecture seems unrealistic and indirect.\"}", "{\"title\": \"Thanks for the update\", \"comment\": \"I do not have further questions.\"}", "{\"title\": \"Reply from authors\", \"comment\": \"Thank you for your review. We appreciate your detailed feedback. We reply to your comments one by one.\\n\\n1.\\n> It seems that when $M=1$, it reduces to the sparse NN considered in [Schmidt-Hieber 2017].\\nBlocks in a block-sparse FNN is dense in general, as opposed to the sparse NN. Therefore, a block-sparse FNN with M=1 block is different from NNs used in [Schmidt-Hieber 2017]. Of course, we can apply our theorem with M=1 to derive the estimation error. The resulting CNN would be a ResNet with single residual block (since the number of blocks in FNN equals to the number of residual blocks in transformed ResNet-type CNN). However, the CNN has as many as O(M) channels. 
Since optimal FNN has M=O(N^\\\\alpha) (\\\\alpha > 0) blocks, we cannot keep the number of units per layer of the optimal CNN constant.\\n\\n> Thus, it seems unclear why we should consider such block-sparse family. Can any sparse NN be embedded in the family of CNNs?\\nAlmost all FNNs used in the previous studies to approximate some specific function classes have block-sparse structures. For example, [Yarotsky 2017], [Yarotsky 2018], and [Zhou 2018]. Same is true of the case of the expansion of a Besov function by wavelet bases [B\\u00f6lcskei et al. 2017]. From the viewpoint of functional analysis, block-sparse structure naturally corresponds to expansion of the target function with a set of basis functions approximated by dense FNNs.\\nin view of sparsity, we would argue that our NNs are more practical than ones used in previous literature (e.g., [Yarotsky 2017], [Schmidt-Hieber 2017], and [Imaizumi & Fukumizu 2018]). They imposed somewhat artificial sparse constraints to FNNs by restricting the number of non-zero parameters. However, we need to train with L0 regularization to realize such NNs and hence actual NNs do not have such non-zero sparsity patterns. Contrary to that, our CNN is dense in general since the block-sparse FNNs from which we construct CNNs have dense blocks. We have fixed our paper to remove the sparsity constraints to made it clear that our CNN is dense in general.\\n\\n2. \\n> In the Related Work, the authors only compare with 2 previous work on the approximation error of CNN. Actually, this work is more related to [Schmidt-Hieber 2017] due to borrowing the results.\\nWe compared our work with Zhou (2018) and Petersen & Voigtlaender (2018) since their works are close to ours in analyzing approximation ability (and hence estimation ability) of CNNs.\\n\\n> It would be better to see what the novelties are compared with that work, especially in terms of the proof techniques.\\nWe think the evaluation of the covering number of the set of CNNs is novel. It corresponds to the evaluation of M_1 in Theorem 2. If we naively trace the proof of [Schmidt-Hieber 2017] Lemma 12, the logarithm of covering number is prohibitively large to derive the desired estimation error.\", \"we_mainly_used_two_techniques_to_deal_with_the_problem\": \"the architecture-aware evaluation of sup norms and the rescaling of parameters. First, we explicitly used the ResNet-type architecture and derived a tighter bound of the sup norms of the function realized by a CNN (Proposition 11 and Lemma 3). Secondly, if we apply our result on estimation error bounds to concrete classes using the approximation results of [Klusowski 2018] (for Barron class) and [Schimidt-Hieber 2017] (for H\\\\\\u201dolder class), the assumption of Corollary 1 about M_1 is not satisfied because the Lipschitz constant of the CNN is too large. We devised the parameter rescaling technique to reduce the Lipschitz constant to meet the assumption. We discussed the problem in Section 5.1 (Barron class), Section 5.2 (H\\\\\\u201dolder class), and Lemma 6.\\n\\n3. We have added the detail proof as Lemma 7.\"}", "{\"title\": \"Reply from authors (2/2)\", \"comment\": \"Reply to specific comments:\\n> Section 1, p.2: define M? define D? M seems to be used for different things in different paragraphs. The discussion on 'relative scale' could be made clearer.\\n\\nWe added the definition of D and M to the introduction section. 
We used the variable M mainly for three meanings: the number of blocks of a block-sparse FNN, the number of residual blocks in a ResNet-type CNN, and the number of parameters of an NN (either FNN or CNN). As we see from Theorem 1, the CNN which we constructed an FNN with M blocks has M residual blocks (plus the 0-th block). Therefore, we used the same character M. Since an FNN with M blocks has \\\\tilde{O}(M) parameters in common settings, we used M to indicate the number of parameters in the introduction. If it is confusing, we are thinking to use different characters for parameter counts and block counts.\\n\\n\\n> Section 5.1: \\\"M = 1\\\" this is confusing, maybe use a different letter for the ridge expansion? \\n\\n\\u201cM=1\\u201d should have been D_1 = \\\\cdots = D_M = 1 and L_1^{(1)} = \\\\cdots = L_M^{(1)} = 1. We have fixed the description of Section 5.1 and Section E.\\n\\n> Section 2: Explain what is \\\"s\\\" in the Barron class, or at least point to the relevant definition in the paper\\n\\n\\u201cs\\u201d is a parameter in the definition of the Barron class that indicates the decay speed of signals in Fourier domain. We have added the reference to Definition 3.\\n\\n> Section 3.1:\\n> * 'estimation error' is usually called '(expected) risk' in the statistical literature (also in the introduction). estimation error would have to do with relating R and R^hat\\n\\nIndeed, in the statistics literature, the estimation error is frequently used for the (finite-dimensional) parameters. On the other hand, in nonparametric statistics, it is also common to use the terminology \\\"estimation error\\\" to indicate the expected risk, because parameters themselves are functions in the L2-space (while the estimation error is also sometimes referred as the variance term inside a model). Therefore, we used the terminology. \\n\\n> * why is the estimator \\\"regularized\\\"?\\n\\nWe called this estimator \\\"regularized\\\" because we impose sparse constraints on the set of CNNs from which we pick the ERM estimator by restricting the maximum number of non-zero parameters. Now we do not impose such constraints, we have replaced it with the clipped ERM estimator.\\n\\n> Definition 2: shouldn't it be D_m^(0) = D instead of 1?\\n\\nYes. Thank you for pointing it out. We have fixed it.\\n\\n> Theorem 1: What is L? Also, it would be helpful to sketch the construction in the main paper given that this is the main result.\\n\\nWe intended that L is the total depth of the ResNet-type CNNs. We have changed the statement to specify the ResNet-type CNNs by the number of residual blocks and the depth of each block as we did in the definition of \\\\mathcal{F}^{\\\\mathrm{(CNN)}}.\\n\\n> Section 4.2: M_1 is the Lipschitz constant of what function?\\n\\nThe Lipschitz constant of a function realized by a CNN in \\\\mathcal{F}^{\\\\mathrm{(CNN)}}.\\n\\n> Section 5.2, 'if we carefully look at their proofs': more details on this should be provided.\\n\\nWe have added the detail of the proof as Lemma 7.\"}", "{\"title\": \"Reply from authors (1/2)\", \"comment\": \"Thank you for your detailed review. We would appreciate your insightful comments. We reply to your comments one by one.\\n\\n> However, the obtained CNN approximating architectures look quite unrealistic compared to most practical use-cases of CNNs,\\n> since they specifically try to reproduce a fully-connected architecture, leading to residual blocks of depth ~= D/K,\\n> which is very deep compared to usual CNNs/ResNets (considering, e.g. 
K=3 and D in the hundreds for images).\\n\\nIt is true that the residual blocks in the original ResNet have 2 layers, while those in ours have much more layers. However, identity connections skipping many layers are not rare. For example, one of the variants of DenseNet (Huang (2017)) used to train ImageNet consists of 201 layers, and its outermost connection skips 48 layers (>20% of the whole networks).\\n\\nAlthough It might be a different discussion point, in view of sparsity, we would argue that our NNs are more practical than ones used in previous literature (e.g., Yarotsky (2017), Schmidt-Hieber (2017), and Imaizumi & Fukumizu (2018)). They imposed somewhat artificial sparse constraints to FNNs by restricting the number of non-zero parameters. However, we need to train with L0 regularization to realize such NNs and hence actual NNs do not have such non-zero sparsity patterns. Contrary to that, our CNN is dense in general since the block-sparse FNNs from which we construct CNNs have dense blocks. We have fixed our paper to remove the sparsity constraints to made it clear that our CNN is dense in general.\\n\\n\\n> In particular, CNNs are typically used when there is some relevant inductive bias such as equivariance\\n> to translations (and invariance with pooling operations) to take advantage of,\\n> so removing this inductive bias by approximating fully-connected architectures seems a bit twisted.\\n\\nAs appeared in Zhou (2018) or Petersen & Voigtlaender (2018), it is one of the standard approaches in the function approximation theory for CNNs to approximate a target function with FNNs and to transform the FNNs into CNNs. Although this approach is somewhat indirect as you pointed out, we believe it is still useful from a viewpoint of inductive bias, too. If we can successfully reflect inductive biases as particular structures of FNNs, like block-sparseness as we did in this paper, CNNs can capture the biases via FNNs. Although this is just an idea, if the dataset has some invariance (such as translation invariance), we can expect blocks in an FNN might have some redundancy in some sense (e.g., blocks are similar to each other). Using the weight-sharing property of CNNs, we might need fewer parameters to realize a function using CNNs than using FNNs, as we pointed out in the conclusion section.\\n\\n\\n> Separately, the presentation of the paper could be significantly improved, for instance by introducing relevant notions more clearly in the introduction and related work sections, and by providing more insight and discussion of the obtained results in the main paper.\\n\\nThank you for your suggestion. We are thinking to add an extended discussion in the next revision.\"}", "{\"title\": \"Reply from authors\", \"comment\": \"We appreciate your detailed and insightful comments. We reply to your comments one by one.\\n\\n> However, it is not very clear how the convolutional structure of CNNs help in the analysis of approximating FNNs. For example, in the analysis of C.1 and C.2, it will help better understand why CNNs may work from a high-level intuition when the authors construct the filters. \\n\\nConvolution with a size-1 filter, inspired by a 1x1 convolution used in image recognition models such as Inception (Szegedy et al. (2014)), is equivalent to dimension-wise affine transformation. Intuitively, it implies CNNs have as powerful learning ability as FNNs. There is room for discussion if our proofs can effectively utilize the convolutional structure of CNNs. 
However, we have shown that approximation and estimation error rates are no worse than that of FNNs. In particular, CNNs can already achieve the minimax optimal rate for the H\\\\\\u201dolder class. That means even if we make full use of convolutional structure, we have no hope to improve the rate. Considering that the learning ability of CNNs had not been investigated deeply in the literature, we believe our analysis is a critical first step toward unveiling the learning ability of CNNs.\\n\\nWith that being said, we also want to leverage the inductive bias of data to yield advantageous learning ability of CNNs over FNNs. We believe the analysis of CNNs employing FNNs could be a promising strategy. If we can successfully reflect inductive biases as particular structures of FNNs, like block-sparseness as we did in this paper, CNNs can capture the biases via FNNs. Although this is just an idea, if the dataset has some invariance (such as translation invariance), we could expect blocks in an FNN has redundancy in some sense (e.g., blocks are similar to each other). Using the weight-sharing property of CNNs, we might need fewer parameters to realize a function using CNNs than using FNNs, as we pointed out in the conclusion section.\\n\\n\\n> Moreover, it will also help better understand the expressive power of CNNs if the authors can provide some extended discussion on why approximating the block-sparse FNNs rather than arbitrary feed-forward networks. Is there any fundamental reason (or a counterexample) this cannot be realized, or is there to some extent a technical barrier in the analysis? \\n\\nAlmost all FNNs used in the previous studies to approximate some specific function classes have block-sparse structures. For example, Yarotsky (2017), Yarotsky (2018), and Zhou (2018). Same is true of the case of the expansion of a Besov function by wavelet bases (B\\u00f6lcskei et al. (2017). From the viewpoint of functional analysis, block-sparse structure naturally corresponds to expansion of the target function with a set of basis functions approximated by dense FNNs.\\nIt is not trivial how to (approximately) transform general FNNs without block-sparse structures into ResNet-type CNNs, because there is no principled way to decompose an FNN into residual blocks. Although block-sparse FNNs are somewhat theoretical tools, we can realize optimal dense ResNet CNNs, by bypassing them.\"}", "{\"title\": \"Revised version uploaded\", \"comment\": [\"We have uploaded the revised version of our paper. The main differences from the previous one are as follow:\", \"We removed the sparsity constraints (specified S by the previous version) from $\\\\mathcal{F}^{\\\\mathrm{(CNN)}}$ in order to emphasize that the CNNs we consider is dense in general. Accordingly, the statements of Theorem 2 and Corollary 1 (and Lemma 4) are changed so that they do not use S.\", \"We added the lemma (Lemma 7) on how to approximate the \\\\beta-H\\\\\\u201dolder function using block-sparse FNNs by modifying the proof of Schmidt-Hieber (2017).\", \"Fixed typos and grammatical errors and changed several variables for readability.\", \"Thank you for your interest.\"]}", "{\"title\": \"The block sparse structure seems unnecessary given the results in [Schmidt-Hieber 2017].\", \"review\": \"This manuscript shows the statistical error of the ERM for nonparametric regression using the family of a Resnet-type of CNNs. Specifically, two results are showed. 
First, the authors show that any block-sparse fully connected neural network can be embedded in CNNs. Second, they show the covering number of the family of CNNs. Combining with the existing results of the approximation error of neural nets (Klusowski&Barron 2016, Yarotsky 2017, Schmidt-Hieber 2017), they show the L2 statistical risk.\", \"detailed_comments\": \"1. The intuition of using block-sparse FNN seems unclear. It seems that when $M=1$, it reduces to the sparse NN considered in [Schmidt-Hieber 2017]. In the proof of Corollary 5, the authors directly use the error of approximating Holder smooth function by sparse FNN and show that the construction in [Schmidt-Hieber 2017] is actually block-sparse. Thus, it seems unclear why we should consider such block-sparse family. Can any sparse NN be embedded in the family of CNNs?\\n\\n2. In the Related Work, the authors only compare with 2 previous work on the approximation error of CNN. Actually, this work is more related to [Schmidt-Hieber 2017] due to borrowing the results. It would be better to see what the novelties are compared with that work, especially in terms of the proof techniques.\\n\\n3. The authors claim that the construction of approximator for Holder functions in [Schmidt-Hieber 2017] is block sparse. It would be nice to give more details of the construction since this is not claimed in [Schmidt-Hieber 2017].\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Approximate block sparse fully connected neural networks, the Barron class and the Holder class using the Residual CNNs\", \"review\": \"The authors demonstrate the function expression properties for the Residual type convolutional neural networks to approximate the block sparse fully connected neural networks. Then it is shown that such Res-CNNs can approximate any function as long as it can be expressed by the block-sparse FNNs, including the Barron class and Holder class functions. The price to pay is that the number of parameters is larger than that of the FNNs by a constant factor.\\n\\nThe idea for connecting the expressive ability of CNNs with FNNs is interesting, which can fully take advantage of the power of FNNs to understand CNNs. However, it is not very clear how the convolutional structure of CNNs help in the analysis of approximating FNNs. For example, in the analysis of C.1 and C.2, it will help better understand why CNNs may work from a high-level intuition when the authors construct the filters. \\n\\nMoreover, it will also help better understand the expressive power of CNNs if the authors can provide some extended discussion on why approximating the block-sparse FNNs rather than arbitrary feed-forward networks. Is there any fundamental reason (or a counterexample) this cannot be realized, or is there to some extent a technical barrier in the analysis? 
\\n\\nMinor issue\\n\\nOn page 20, \\u201cBounds residual blocks\\u201d -> \\u201cBounds for residual blocks\\u201d\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting approximation and estimation results, but considers somewhat unrealistic CNNs\", \"review\": \"The paper studies approximation and estimation properties of CNNs with residual blocks in the context\\nof non-parametric regression, by constructing equivalent fully-connected architectures (with a block-sparse structure),\\nand leveraging previous approximation results for such functions.\\nExplicit risk bounds are obtained for regression functions in Barron and Holder classes.\\n\\nThe main contribution of the paper is Theorem 1, which shows that a class of ResNet-type CNNs\\ncontains a class of \\\"block-sparse\\\" fully-connected networks, with appropriate constraints on various size quantities.\\nThis result allows the authors to obtain a general risk bound for the ResNet CNN that minimizes empirical risk\\n(Theorem 2, which mostly follows Schmidt-Hieber (2017)),\\nas well as adaptations of the bound for the Barron and Holder classes, by relying on existing approximation results.\\n\\nThe construction of Theorem 1 is interesting, and shows that ResNet CNNs can be quite powerful function approximators,\\neven with a filter size that is arbitrarily fixed.\\nHowever, the obtained CNN approximating architectures look quite unrealistic compared to most practical use-cases of CNNs,\\nsince they specifically try to reproduce a fully-connected architecture, leading to residual blocks of depth ~= D/K,\\nwhich is very deep compared to usual CNNs/ResNets (considering, e.g. K=3 and D in the hundreds for images).\\nIn particular, CNNs are typically used when there is some relevant inductive bias such as equivariance\\nto translations (and invariance with pooling operations) to take advantage of,\\nso removing this inductive bias by approximating fully-connected architectures seems a bit twisted.\\nThe approach of reducing the function class to be approximated would seem more relevant here,\\nas in the cited papers Petersen & Voigtlaender (2018) and Yarotsky (2018), and perhaps the results of\\nthe present paper can be useful in such a scenario as well.\\n\\nSeparately, the presentation of the paper could be significantly improved,\\nfor instance by introducing relevant notions more clearly in the introduction and related work sections,\\nand by providing more insight and discussion of the obtained results in the main paper.\", \"more_specific_comments\": [\"Section 1, p.2: define M? define D? M seems to be used for different things in different paragraphs\", \"Section 2: Explain what is \\\"s\\\" in the Barron class, or at least point to the relevant definition in the paper\", \"Section 3.1:\", \"'estimation error' is usually called '(expected) risk' in the statistical literature (also in the introduction). estimation error would have to do with relating R and R^hat\", \"why is the estimator \\\"regularized\\\"?\", \"Definition 2: shouldn't it be D_m^(0) = D instead of 1?\", \"Theorem 1: What is L? Also, it would be helpful to sketch the construction in the main paper given that this is the main result.\", \"Section 4.2: M_1 is the Lipschitz constant of what function?\", \"Section 5.1: \\\"M = 1\\\" this is confusing, maybe use a different letter for the ridge expansion? 
The discussion on 'relative scale' could be made clearer.\", \"Section 5.2, 'if we carefully look at their proofs': more details on this should be provided.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
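A recurring point in the author responses above is that convolution with a size-1 filter is just a dimension-wise (position-wise) affine transformation, which is the basic mechanism that lets block-sparse fully-connected layers be embedded into ResNet-type CNNs. The NumPy check below makes that single statement concrete; the shapes and names are arbitrary, and it does not reproduce the paper's actual construction (Theorem 1) or the parameter-rescaling arguments.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 8                 # spatial length of the input signal
C_in, C_out = 3, 5    # input / output channel counts

x = rng.normal(size=(D, C_in))       # input: D positions, C_in channels
W = rng.normal(size=(C_out, C_in))   # a bank of C_out size-1 filters
b = rng.normal(size=(C_out,))

def conv_size1(x, W, b):
    """Size-1 convolution: slide each filter over the D positions.

    Because each filter covers a single position, the output at position i
    depends only on x[i], so the operation is a position-wise affine map."""
    D, _ = x.shape
    out = np.zeros((D, W.shape[0]))
    for i in range(D):                 # positions
        for o in range(W.shape[0]):    # output channels
            out[i, o] = W[o] @ x[i] + b[o]
    return out

# The same computation written as one matrix product applied at every
# position, i.e. a weight-shared fully-connected (affine) layer.
dense = x @ W.T + b

print("size-1 convolution equals a position-wise affine map:",
      np.allclose(conv_size1(x, W, b), dense))
```

Stacking such size-1 convolutions with ReLUs inside identity-skip residual blocks is, at a high level, the intuition the responses point to for why the expressive power of the constructed CNNs is no worse than that of the block-sparse FNNs.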
SyehMhC9Y7
Deep Imitative Models for Flexible Inference, Planning, and Control
[ "Nicholas Rhinehart", "Rowan McAllister", "Sergey Levine" ]
Imitation learning provides an appealing framework for autonomous control: in many tasks, demonstrations of preferred behavior can be readily obtained from human experts, removing the need for costly and potentially dangerous online data collection in the real world. However, policies learned with imitation learning have limited flexibility to accommodate varied goals at test time. Model-based reinforcement learning (MBRL) offers considerably more flexibility, since a predictive model learned from data can be used to achieve various goals at test time. However, MBRL suffers from two shortcomings. First, the model does not help to choose desired or safe outcomes -- its dynamics estimate only what is possible, not what is preferred. Second, MBRL typically requires additional online data collection to ensure that the model is accurate in those situations that are actually encountered when attempting to achieve test time goals. Collecting this data with a partially trained model can be dangerous and time-consuming. In this paper, we aim to combine the benefits of imitation learning and MBRL, and propose imitative models: probabilistic predictive models able to plan expert-like trajectories to achieve arbitrary goals. We find this method substantially outperforms both direct imitation and MBRL in a simulated autonomous driving task, and can be learned efficiently from a fixed set of expert demonstrations without additional online data collection. We also show our model can flexibly incorporate user-supplied costs at test-time, can plan to sequences of goals, and can even perform well with imprecise goals, including goals on the wrong side of the road.
[ "imitation learning", "forecasting", "computer vision" ]
https://openreview.net/pdf?id=SyehMhC9Y7
https://openreview.net/forum?id=SyehMhC9Y7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJxsxiA0kN", "SJgVjK_qAQ", "HygTv3B90m", "HylhHY7KTX", "rJgS9EXK6m", "SJxcueXtpQ", "S1echIHR3X", "HklQf72q3m", "H1eXBFr9h7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544641267322, 1543305627971, 1543294052633, 1542170947534, 1542169741364, 1542168690481, 1541457586285, 1541223179493, 1541196091181 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1300/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1300/Authors" ], [ "ICLR.cc/2019/Conference/Paper1300/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1300/Authors" ], [ "ICLR.cc/2019/Conference/Paper1300/Authors" ], [ "ICLR.cc/2019/Conference/Paper1300/Authors" ], [ "ICLR.cc/2019/Conference/Paper1300/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1300/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1300/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"This paper proposes to combine RL and imitation learning, and the proposed approach seems convincing.\\n\\nAs is typical in RL work, the evaluation of the method is not strong enough to convince the reviewers. Increasing community criticism on RL methods not scaling must be taken seriously here, despite the authors' disagreement.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"RL methods anno 2018\"}", "{\"title\": \"IL comparisons\", \"comment\": \"Thank you for your reply.\", \"q1\": \"Yes, we refer to the noise injection, used for better generalization and avoiding \\\"unstable policies\\\" (Codevilla, et al. 2018). Our reasoning is injected noise constitutes an intervention taken by the machine. Consequently, training could be more dangerous, especially at speed: humans experts are not used to driving with injective noise, and would possibly require special training prior to data collection. An additional requirement is vehicle modifications to actuate such signals. We appreciate it is ambiguous if \\\"injected noise\\\" constitutes \\\"trial-and-error online\\\" (or not), and clarify it constitutes \\u2018intervention\\u2019. In a related paper, (Liang, et al. 2018) solve the generalization issue by using trial-and-error DDPG after imitation learning. Our paper differs from both (Codevilla, et al. 2018) and (Liang, et al. 2018) in that we require no interventions of any sort (injected noise, nor explicit trial and error). Our method instead solves the imitation generalization issue a new way: by explicitly and probabilistically modelling multi-step expert trajectories.\", \"q2\": \"One source of difference is that we used Model-based RL, instead of Model-free RL (which is used in the CARLA CoRL 2017 paper). The Model-Based RL baseline is strong because of its access to the LIDAR map, which provides very useful obstacle cues. Therefore, we should not necessarily expect the same relative performances between IL and RL. Another source of difference is because we use LIDAR in all methods, and not vision (explained further in next question), a clear comparison with the previous vision-based IL benchmark was difficult. We judged a more meaningful comparison was for all baselines to use the exact same input as our method (LIDAR and previous vehicle locations). To do this, we reimplemented the IL baseline for LIDAR input. 
Given the significant difference between LIDAR and vision, we do not believe the IL baseline performance is suspiciously low. Our experiments fall under the \\\"Navigation\\\" CARLA benchmark, with the sole difference that our setting is always harder: goal locations are always placed further away (we selected goal destinations as the furthest distance from the starting position) for consistency difficulty across trials. Given the significant difference between LIDAR and vision, we will make more explicit in our paper's conclusions that our findings are specific to LIDAR-input cases only.\", \"q3\": \"Whilst vision would certainty be an interesting extension to this work, we used LIDAR for the following reasons. We used the R2P2 RNN method from (Rhinehart et al. 2018) for our imitative model q, which makes use of the LIDAR's overhead representation of the scene to build an overhead spatial cost grid of the scene, with the same 200x200 size as the lidar input (discussed briefly Section 2.2 of updated paper, which we will expand and clarify). This overhead representation helps to learn a spatial cost map, and is in the same 2D Euclidean space that our predictive model reasons about when predicts trajectories, making LIDAR a natural input for out method to use. To use vision as \\\\phi, our method would need to transfer to an overhead representation, which is nontrivial. We agree vision would certainly be a worthwhile extension but is not the focus of this work. We will additionally clarify these reasons for why we use LIDAR in the paper.\"}", "{\"title\": \"Improved text and experiments, still some questions related to IL comparison\", \"comment\": \"Thanks for the reply and revision. The submission has been drastically rewritten (the diff is massive) and I think it is in much better shape, answering some of my concerns around reproducibility and generalization. Furthermore, it reinforces the strengths of the approach (esp. around its flexibility).\\n\\nI am willing to recommend acceptance, but I have some further questions (hence I have only updated my score to a 6 for now). They are mostly related to the comparison with IL (important to validate the claim in the paper that the proposed approach is quantitatively better than both existing IL and RL methods).\\n\\n1) \\\"state-of-the-art CARLA results require trial-and-error based data collection online (Codevilla, et al. 2018)\\\"\\nWhat do you mean? CIL is a behavior cloning approach that is trained off-line? Are you talking about the data augmentation / noise injection?\\n\\n2) Why are the results of IL much worse than the RL ones while in past publications it's the opposite (cf. CoRL'17 CARLA paper for instance)? This is what I meant by using the \\\"standard CARLA benchmark\\\" for the navigation part of the experiments: you would not just compare to baselines or reimplementations. If the results are counter-intuitive or in (apparent) opposition with previous peer-reviewed and published results, then the submission falls in the \\\"extraordinary claims require extraordinary evidence\\\" regime. And in this case, it is still not clear to me why your IL baselines are that low, and what makes your method not testable on the CARLA benchmark for the navigation tasks. This is not a huge deal breaker, because the proposed method has other clear advantages over IL and RL, but this still casts a shadow on the quantitative comparison with IL. \\n\\n3) Why using LIDAR as input and not the images as most related works on CARLA? 
Your method seems to be directly applicable to image inputs (\\\\phi contains a HxWxC tensor). I understand the main benefit of LIDAR is it provides a much stronger signal for collision avoidance (esp. for dynamic objects), but I would like to see how this approach works in the more common case of image inputs (and this is obviously linked to question 2 above).\"}", "{\"title\": \"relevant citation, contribution clarification\", \"comment\": \"Thank you for your helpful feedback.\", \"q1\": \"\\\"The author should consider adding the related works\\\"\", \"a1\": \"We have included your suggested reference.\", \"q2\": \"\\\"Good paper with detailed experiments, but the idea seems lacking novelty\\\"\", \"a2\": \"We have since given more evidence of the novelty of our method with additional experiments in a revision. Our main contribution is a novel hybridization of model-based RL and Imitation Learning. This enables high performance in CARLA without any trial-and-error learning, and additionally enables flexibility to tasks not observed in the training data, such as avoiding potholes (Table 2, Figure 7) and robust navigation in the presence of noisy goals (Table 3, Figure 8).\"}", "{\"title\": \"generalization experiments, SOA comparison, method and baseline clarifications\", \"comment\": \"Thank you for your helpful feedback.\", \"q1\": \"\\u201cUnclear how this approach would generalize beyond just staying on the road\\u201d\\nWe agree that there are more sophisticated settings that could be used to test different generalization aspects of our method. In theory, expert behaviors could be modelled in such settings by including the relevant information in the context, as noted by the reviewer. However, we designed our original experiments to reduce the number of uncontrolled variables, in order to clearly isolate the benefits of our approach. In order to test other generalization capabilities, we have since conducted additional experiments in several settings: obstacles in the road that were unseen in the demonstrations (i.e. simulated potholes), and noise in the waypoints provided to the controller, which could occur in a real-world setting due to noisy localization. In the pothole experiment, we found that our model was able to navigate around simulated potholes by including them in the cost map, and compared it to our model that was not provided with a cost map of the potholes. This navigation demands the model generalize its planning to situations not observed in the training data, specifically, when the car must partially enter the opposing lane in order to avoid the obstacle. In the noisy waypoint experiment, we tested two different types of noise: high bias, low variance noise, and low bias, high variance noise. In the high variance setting, \\u201cdecoy\\u201d waypoints are added to the set of possible waypoints. The decoys are obtained by significantly perturbing the original waypoints with Gaussian noise, sigma=8 meters. Successful navigation in this setting required the model\\u2019s ability to score its plans by likeliest under the estimated expert\\u2019s distribution of behavior. In the high bias setting, all waypoints were provided on the wrong side of the road, which is modelled with a small amount of observation noise. We found that these waypoints were still sufficient to communicate high-level navigation directions, and that the model usually produced plans on the correct side of the road (where all expert demonstrations occurred). 
Please see the updated results for quantitative (Tables 2, 3) and qualitative comparisons (Figures 7, 8).\", \"q2\": \"\\u201cComparison to the state of the art (beyond the baselines implemented here) seems needed\\u201d\", \"a2\": \"Our problem motivation is, instead, that of completely offline learning, but state-of-the-art CARLA results require trial-and-error based data collection online (Codevilla, et al. 2018). Additionally, navigation performance isn\\u2019t the sole goal of our method; we also show that our model has flexibility to different test-time queries that require behavior not seen in the training data. However, we have since implemented the \\u201cbranched\\u201d architecture of Codevilla, et al. 2018, and trained it with the same inputs and data used to train our method. We found this approach to slightly outperform the original IL baseline we included in our paper, but underperform the MBRL comparison and our proposed method. Please see the updated results for our quantitative comparison (Table 1).\", \"q3\": \"\\u201cThe method is only described very succinctly in section 2\\u201d\\nWe have included many more details about the method and the implementation in our updated version. Please see Section 2, and Section 2.2 in particular.\", \"q4\": \"\\u201cthe input modalities are not clear, especially for the baselines\\u201d\", \"the_input_modalities_are_identical_for_all_methods\": \"they all receive the same waypoints, and observe the same LIDAR and past trajectory. We clarified this in the updated paper.\", \"q5\": \"\\u201cWhy use a proportional controller as a baseline instead of the standard PID one?\\nWe tested added I+D terms, replacing the P-controller with a PID controller, and found no significant change -- the PID controller fundamentally cannot handle faraway waypoints.\", \"q6\": \"\\u201cSection 2.3 seems like it's missing the extension of equation 2 to the multi-goal case?\\u201d\\nWe have generalized the mathematical explanation, from which all of our inference procedures can be derived. This includes the multigoal case, in Section 2.1 in the updated version. Additionally, we\\u2019ve included a qualitative demonstration of planning to sequential multi-goals (Figure 3).\"}", "{\"title\": \"generalization, new experiments, updated details for reproducibility\", \"comment\": \"Thank you for your helpful feedback.\", \"q1\": \"\\u201cThe authors do not address the problem of IL when the stochasticity in the environment and/or model results in trajectories outside of expert\\u2019s distribution.\\u201d\", \"a1\": \"In our original submission, we evaluated our model\\u2019s ability to control the agent in a held out test scene (Town02). This demonstrated our model\\u2019s ability to generalize its behavior beyond the behaviors observed in the data. As further evidence of generalization, we performed additional experiments designed to force the model to produce trajectories outside of the distribution of observed trajectories. In one, we added simulated potholes to the scene, which we modelled with a cost map. This forced our planning to produce trajectories that avoid the potholes. We found that the model could still complete most of its episodes, while avoiding most potholes, despite the fact that the agent was forced into situations not seen in the training data. 
Please see the revised paper for these results.\", \"q2\": \"\\u201cthe experiments only compare the proposed algorithm to its components, namely proportional controller, IL only controller and Model Basel RL only controller.\\u201d\", \"a2\": \"We agree that relevant comparison is important. Our current IL comparison is not an ablation of our method, but rather a comparison to prior offline IL work. It most closely resembles the method of Codevilla, et al. \\\"End-to-end driving via conditional imitation learning.\\\" ICRA, 2018. However, this prior method uses categorical command prediction, \\\"turn left/turn right/go straight\\\", for a learned lower-level controller, whereas our variant of this method regresses setpoints provided to a PID controller. We did not make the connection clear in the original paper, which we will fix.\\nWe also conducted additional experiments against the state-of-the-art with the \\u201cbranched\\u201d network of Codevilla, et al. 2018, which we include in our revised comparison. We found this approach to slightly outperform the original IL baseline we included in our paper, but still underperform the MBRL method and our proposed method. Please see the updated paper for our quantitative comparison.\", \"q3\": \"\\u201cthe paper does not provide any detail on the training procedure (Network architecture, cost function, etc), which makes results hard to reproduce\\u201d\", \"a3\": \"In our updated version, we have simplified our explanation and expanded on additional details, including network architecture, cost function, etc. Please see Section 2.2 in the updated paper.\"}", "{\"title\": \"This paper combines Imitation Learning (IL) and Model Base Reinforcement Learning (RL) to come up with a novel algorithm that can take in user-defined targets while maintaining expert like behaviors. This promising approach that combines the benefits of IL and RL but with result performed only in simulation.\", \"review\": \"- Does the paper present substantively new ideas or explore an under explored or highly novel\\nquestion? \\n\\nYes, the paper combines two frameworks (Imitation Learning and Model Base\\nReinforcement Learning) to incorporate target information while fitting to the expert distribution. Maybe, the idea is novel but experiments are only in simulation. \\n\\n- Does the results substantively advance the state of the art?\\n\\n No, the compared methods are not state-of-the-art.\\n\\n- Will a substantial fraction of the ICLR attendees be interested in reading this paper? \\n\\n Yes a substantial fraction of ICLR attendees might be interested in reading the paper.\\n\\n - would I send this paper to one of my colleagues to read?\\n\\nYes. \\n\\n\\n- Quality: \\n\\nThe key point of this paper is that the proposed algorithm is novel and combines\\nthe advantages of Imitation Learning and Model Base Reinforcement Learning. However, the\\nauthors do not address the problem of IL when the stochasticity in the environment and/or model\\nresults in trajectories outside of expert\\u2019s distribution. Additionally, all experiments are done in\\nsimulation only and comparisons are made against components of the proposed algorithm instead\\nof the state-of-the-art. This is definitely a limitation of the paper given recent works on imitation learning and model predictive control as applied to real robotic systems in the task of agile off-road visual navigation. 
\\n\\nIn addition, the paper does not provide any detail on the training procedure (Network architecture, cost\\nfunction, etc), which makes results hard to reproduce. In addition, the experiments only compare\\nthe proposed algorithm to its components, namely proportional controller, IL only controller and\\nModel Basel RL only controller.\\n\\n- Clarity: \\n\\nEasy to read. Thorough comparison with existing frameworks (Advantages compared to IL and model\\nbased RL).\", \"originality\": \"\\u2013 Novel algorithm presented with success in simulation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"[Update] Elegant probabilistic formulation with limited experimental validation\", \"review\": \"# Summary\\n\\nThis submission proposes a method to combine the benefits of model-based RL and Imitation Learning (IL) for navigation tasks. The key idea is to i) learn a prior over trajectory distributions from a fixed dataset of demonstrations, and ii) use this learned dynamical model for path planning via probabilistic inference. Reaching target waypoints is done by maximizing the trajectory likelihood conditioned on the planning goal. The prior is learned using R2P2 on LIDAR features and past positions. Experiments using the CARLA driving simulator show that this method can outperform standard control, IL, and model-based RL baselines, while flexibly incorporating test-time goals and costs thanks to its probabilistic formulation.\\n\\n\\n# Strengths\\n\\nThe method is an elegant way to get the best of both worlds in RL and IL, leveraging the recent R2P2 work to estimate a powerful sequential model used for planning via probabilistic inference. The flexibility of the method in considering test-time cost maps and user-defined goals (e.g. to avoid potholes) is appealing, especially since it does not require on-policy data collection.\\n\\nThe proposed planning-as-inference method can in theory handle the multi-modality present in human demonstrations by using a probabilistic model of the observed behaviors as prior over undirected expert trajectories.\\n\\nThe approach seem to outperform both model-based and imitation learning baselines on a simplified version of the CARLA benchmark, including on interesting fine-grained metrics (e.g., comfort based).\\n\\n\\n# Weaknesses\\n\\nThe main weakness of this submission lies in its experimental evaluation, especially the absence of any dynamic objects in the tested environment (\\\"static world CARLA\\\", section 1). It is unclear how this approach would generalize beyond just staying on the road. How would it handle traffic lights, pedestrians, other drivers, weather variations, and more complex driving tasks than waypoint following by traversing mostly free space? How does the prior generalize to more complex behaviors (e.g, by using more contextual information \\\\phi)? How robust is the method to noise in the demonstrations, i.e. non-expert or suboptimal behavior? It seems that estimating the generative prior on human behavior might suffer from the same issues as behavior cloning, e.g., the sample inefficiency due to the combinatorial explosion of causal factors explaining complex human behaviors. It might be in fact even harder to estimate that generative model than use a direct discriminative approach (e.g., a modular pipeline), at the cost of reduced flexibility at test time of course. 
The currently reported sample efficiency (7000 training samples) and near perfect success rate seem to suggest that this (non-standard) version of the CARLA benchmark is too simple (no weather variations, no dynamic obstacles). Comparison to the state of the art (beyond the baselines implemented here) on the original CARLA benchmark seems needed (especially in the \\\"Nav. dynamic\\\" task).\\n\\nThe method is only described very succinctly in section 2. I do not believe there are enough details (especially around the learning algorithm, hyper-parameters, and other important technical elements) for reproducibility at this stage. Section 2.1 is also quite dense for people not familiar with the R2P2 paper. As the main contribution of the paper is to leverage that model for planning and control, it would be great to maybe discuss a bit deeper. Finally, the input modalities are not clear, especially for the baselines: the proposed method is using LIDAR and localization whereas the IL baseline seems to use vision (while the others just use the trajectory). This makes the fairness of the comparison really unclear (LIDAR is a much stronger signal for just staying on the road).\", \"minor_remarks\": \"- Why use a proportional controller as a baseline instead of the standard PID one?\\n- Section 2.3 seems like it's missing the extension of equation 2 to the multi-goal case?\\n- Typos in section 3 (\\\"trail-and-error\\\"), section 4 (\\\"autonmous\\\", \\\"knowledge to\\\")\\n\\n\\n# Recommendation\\n\\nAlthough the theoretical benefits of the method are well-motivated and clear (off-policy learning, probabilistic model, flexibility at test time), the experimental evaluation (custom simple CARLA test, unclear comparison to baselines) and lack of details impeding reproducibility seems to suggest that this submission needs a bit more work. First, adding more details as suggested above and clarifying the experimental protocol seem like a must, but can be easily addressed by an update to the text. Second, it would be ideal to evaluate the approach on the standard CARLA benchmark in order to compare fairly to the prior art. This is much more involved.\\n\\nI personally like the approach, so although I think it is marginally below the acceptance threshold in its current form, I reserve my judgement for the time being and look forward to the authors' reply.\\n\\n\\n# Update\\n\\nThe submission has been drastically rewritten (the diff is massive) and I think it is in much better shape, answering some of my concerns around reproducibility and generalization. Furthermore, it reinforces the strengths of the approach (esp. around its flexibility).\\n\\nI am willing to recommend acceptance, but I have some further questions (hence I have only updated my score to a 6 for now). They are mostly related to the comparison with IL (important to validate the claim in the paper that the proposed approach is quantitatively better than both existing IL and RL methods). See discussion below for details.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Good paper with detailed experiments, but the idea seems lacking novelty\", \"review\": \"Major Contribution:\\nThe paper introduces a method that combines the advantage and of model-based RL and imitation learning and offset their weakness. 
The method proposes a probabilistic inference approach to analyze the action of the model.\\n\\nOrganization/Style:\\nThe paper is well written, organized, and clear on most points.\", \"technical_accuracy\": \"I'm not an expert in RL. The method is obscure to me, but from my point of view, the experiments are done quite thoroughly and the results look good.\", \"presentation\": \"Good.\", \"adequacy_of_citations\": \"\", \"the_author_should_consider_adding_the_related_works_include\": \"Bojarski, Mariusz, et al. \\\"End to end learning for self-driving cars.\\\": using CNNs to implement imitation learning for self-driving cars\", \"multimedia\": \"Videos are helpful to understand the method and are well composed.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}" ] }
HygsfnR9Ym
Recall Traces: Backtracking Models for Efficient Reinforcement Learning
[ "Anirudh Goyal", "Philemon Brakel", "William Fedus", "Soumye Singhal", "Timothy Lillicrap", "Sergey Levine", "Hugo Larochelle", "Yoshua Bengio" ]
In many environments only a tiny subset of all states yields high reward. In these cases, few of the interactions with the environment provide a relevant learning signal. Hence, we may want to preferentially train on those high-reward states and the probable trajectories leading to them. To this end, we advocate for the use of a \textit{backtracking model} that predicts the preceding states that terminate at a given high-reward state. We can train a model which, starting from a high value state (or one that is estimated to have high value), predicts and samples which (state, action)-tuples may have led to that high value state. These traces of (state, action) pairs, which we refer to as Recall Traces, sampled from this backtracking model starting from a high value state, are informative as they terminate in good states, and hence we can use these traces to improve a policy. We provide a variational interpretation for this idea and a practical algorithm in which the backtracking model samples from an approximate posterior distribution over trajectories which lead to large rewards. Our method improves the sample efficiency of both on- and off-policy RL algorithms across several environments and tasks.
[ "Model free RL", "Variational Inference" ]
https://openreview.net/pdf?id=HygsfnR9Ym
https://openreview.net/forum?id=HygsfnR9Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1lEchveeN", "Bkg6-MfsAm", "H1eoDATPRQ", "SJeB9tx4CX", "HyxLpF2lC7", "rye0EHNcpm", "BkgkxrE567", "BkxisNV5Tm", "HylIyNNcpQ", "SkxjYXEq6Q", "S1lBH745TX", "HyxOprf0n7", "B1lWBJF93Q", "r1lsbRy5hm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544744076270, 1543344644807, 1543130722710, 1542879628859, 1542666685599, 1542239542414, 1542239462912, 1542239394684, 1542239198251, 1542239106880, 1542239037388, 1541445055714, 1541209912519, 1541172739402 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1299/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/Authors" ], [ "ICLR.cc/2019/Conference/Paper1299/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1299/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1299/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper presents \\\"recall traces\\\", a model based approach designed to improve reinforcement learning in sparse reward settings. The approach learns a generative model of trajectories leading to high-reward states, and is subsequently used to augment the real experience collected by the agent. This novel take on combining model-based and model-free learning is conceptually well motivated and is empirically shown to improve sample efficiency on several benchmark tasks.\", \"the_reviewers_noted_the_following_potential_weaknesses_in_their_initial_reviews\": \"the paper could provide a clearer motivation of why the proposed approach is expected to lead to performance improvements, and how it relates to learning (and uses of) a forward model. Details of the method, e.g., model parameterization is unclear, and the effect of hyperparameter choices is not fully evaluated.\\n\\nThe authors provided detailed replies to all reviewer suggestions, and ran extensive new experiments, including experiments to address questions about hyperparameter settings, and an entirely new use of the proposed model in a learning from demonstration setting. The authors also clarified the paper as requested by the reviewers. The reviewers have not responded to the rebuttal, but in the AC's assessment their concerns have been adequately addressed. The reviewers have updated their scores in response to the rebuttal, and the consensus is to accept the paper.\\n\\nThe AC notes that the authors seem unaware of related work by Oh et al. \\\"Self Imitation Learning\\\" which was published at ICML 2018. The paper is based on a similar conceptual motivation but imitates high-value traces directly, instead of using a generative model. 
The authors should include a discussion of how their paper relates to this earlier work in their camera ready version.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Novel take on model-based improvement on model-free RL\"}", "{\"title\": \"Final Response\", \"comment\": \"We thank the reviewers for the detailed feedback on our paper. We are glad that the reviewers found our paper to be \\\"solid contribution with well-written motivation with theoretical interpretations\\\" (reviewer 2) and \\\"well written in general\\\" (reviewer 1).\\n\\nWe made the following changes to the manuscript to address the reviewers comments.\\n\\n- We conducted more ablation experiments for the 3 hyper-parameters associated with our model, as asked by the Reviewer 1.\\n\\n- Training backtracking model by using demonstrations (Reviewer 2) and then using the backtracking model for training another policy from scratch. We did experiments on Ant env from mujoco and Seaquest from atari, where we first train a backtracking model from the expert demonstrations, and then use that for training policy. We achieve 2.5x and about 2x sample efficiency in our very preliminary experiments. \\n\\n- Comparison with the forward model (Section G and H) as pointed by Rev 2. Rev 2 mentioned an interesting point of training forward and backward model. Our conclusion is building the backward model is necessarily neither harder nor easier. Realistically, building any kind of model and having it be accurate for more than, say, 10 time steps is pretty hard. But if we only have 10 time steps of accurate transitions, it is probably better to take them backward.\\n\\nWe feel that by conducting extra experiments have improved the quality of the paper a lot, and we are grateful to reviewers for very useful feedback.\"}", "{\"title\": \"Request for feedback ?\", \"comment\": \"Thank you again for the thoughtful review. We would like to know if our rebuttal (see below, \\\"Thanks for your feedback! (n/3) \\\") adequately addressed your concerns. We would also appreciate any additional feedback on the revised paper. Are there any other aspects of the paper that you think could be improved?\"}", "{\"title\": \"Thanks for your time! :)\", \"comment\": \"We would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns. We hope that our updates to the manuscript address the reviewer's concerns about clarity, and we hope that the discussion above addresses the reviewer's concerns about empirical significance. We once again thank the reviewer for the thorough feedback of our work.\"}", "{\"title\": \"Paper Updated to address reviewer feedback.\", \"comment\": [\"We have updated the paper with the following changes to address reviewer comments:\", \"Added comparison to forward model (Reviewer 2)\", \"Conducted preliminary experiments to show that the backtracking model can be trained just by using the demonstrations. (Reviewer 2)\", \"Effect of the 3 hyperparameter(s) associated with the proposed model.\", \"Thank you for your time! The authors appreciate the time reviewers have taken for providing feedback. which resulted in improving the presentation of our paper. 
Hence, we would appreciate it if the reviewers could take a look at our changes and additional results, and let us know if they would like to either revise their rating of the paper, or request additional changes that would alleviate their concerns.\"]}", "{\"title\": \"Exploration and complexity of backtracking model (3/3)\", \"comment\": \">> Are there also reinforcement learning tasks where the proposed methods' improvement is marginal and the extra modeling effort is not justified (e.g. due to increase complexity).\\n\\nWe think that having a backtracking model could always improve the performance. As We evaluate it on a large number of very different domains (when the backtracking model is given as well as when we are learning the backtracking model as in off policy case and on-policy case) and find that in all cases it improves performance. But we also think, that for some environments the backtracking model can be very hard to learn. For other problems, learning a model of the environment is difficult in either direction so those problems would be hard as well. The first issue would be severe if the forward dynamics are strongly many-to-one, for example. The second case applies to any complex environment and especially partially observed ones. Our method shines most when the dynamics are relatively simple but the problems are still hard due to sparse rewards. \\n\\nOn the other hand, the backtracking model could also be used in practical settings like robotics, that involve repeatedly attempting to solve a particular task, and hence resetting the environment between different attempts. Here, we can use a model that learns both a forward policy and a backtracking model, and resetting of the environment can be approximated using the backtracking model. By learning this backtracking model, we can also determine when the policy is about to enter a non-reversible state, and hence can be useful for safety. It remains future work, to investigate this. \\n\\n>> Does this method not also potentially hinder exploration by making the agent learn to go after the same high rewards / Does the direction of the variational problem guarantee coverage of the support of the R > L distribution by samples?\\n\\nThis is a tricky subject and it is hard to come up with principles that will improve exploration in general and to be sure that something doesn't hinder exploration for some problems. In our setup, the exploration comes mostly from the goal generation methods. The backwards model helps more to speed up the propagation of high value to nearby states (indirectly), such that fewer environment interactions are needed but that could perhaps lead to fewer trips to locations with incorrectly assumed low value. On the other hand, the method might cause the exploration of different (better) paths to the same high value states as well, which should be a good thing. In general, since we are seeking high value (i.e. high expected return), so it shouldn't hinder exploration much. But instead if we seek \\u201chigh reward\\u201d states, then it would hinder performance, (as our experiments show).\", \"closing\": \"Thank you for your time. 
We hope you find that our revision addresses your concerns.\\nPlease let us know if anything is unclear here, if you\\u2019re uncertain about part of the argument, or if there is any other comparison that would be helpful in clarifying things more.\"}", "{\"title\": \"Effect of hyperparameter(s) (2/3)\", \"comment\": \">> What would be the effect of a hyperparameter that balances learning the recall traces and learning the true environment? >> whether enough work was done to understand the effect of the many different hyperparameters that the proposed method surely must have.\\n\\nIn order to address reviewer\\u2019s question, we did more experiments on four room maze as well as on mujoco domain. \\nWe have 3 parameters associated. \\n1) How many traces to sample from backtracking model. \\n2) How many steps each trace should be sampled for i.e is the length of the trajectory sampled. \\n3) And as the reviewer pointed out, the effect of a hyperparameter that balances learning the recall traces and learning the true environment.\\n\\nQ1) How many traces to sample from backtracking model.\\n\\nFor most of our experiments, we sample only single a trace from the backtracking model. But we observe that sampling more traces actually helps for more complex environments. This is also again in contrast as compared to the forward model. . \\n\\nQ2) How many steps each trace should be sampled for ?\\nIn practice, if the agent is limited to one or a few initial states, a concern related to the length of generated backward traces is that longer traces become increasingly likely to deviate significantly from the traces that the agent can generate from its initial state. Therefore, in our experiments, we sample fairly short traces. Figure 8 (Appendix, Section B) shows the Performance of our model (with TRPO) by varying the length of traces from backtracking model. All the time-steps are in thousands i.e (x1000). As evident by the figure, sampling very long traces seems to hinder the performance on all the domains.\\n\\nQ3) Effect of a hyperparameter that balances learning the recall traces and learning the true environment\\n\\nWe have added a Section H in the Appendix containing ablations for the four-room environment and some Mujoco tasks which tells about the effect this hyperparameter has on effective performance. \\n\\nIn Figure 17(Appendix, Section H) we noticed that as we increase the ratio of updates in the true environment to updates using recall traces from the backward model, the performance decreases. This highlights again the advantages of learning from the recall traces. In the second experiment, we see the effect of training from the recall traces multiple times for every iteration of training in the true environment. Figure 18(Appendix, Section H) shows that as we increase the number of iterations of learning from recall traces, we correspondingly need to choose a smaller trace length. For each update in the real environment, making more number of updates from recall traces helps if the trace length is smaller, and if the trace length is larger, it has a detrimental effect on the learning process. \\n\\nIn Figure 19(Appendix, Section H) we again find that for Mujoco tasks doing more updates using the recall traces is beneficial. Also for more updates we need to choose smaller trajectory length.\\n\\nIn essence, there is a balance between how much we should train in the actual environment and how much we should learn from the traces generated from the backward model. 
In the smaller four room-environment, 1:1 balance performed the best. In Mujoco tasks and larger four room environments, doing more updates from the backward model helps, but in the smaller four room maze, doing more updates is detrimental. So depending upon the complexity of the task, we need to decide this ratio.\"}", "{\"title\": \"Thanks for your feedback! (1/3)\", \"comment\": \"Thanks for the very thorough feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address all of your concerns about the work, though we would appreciate any additional comments or feedback that you might have.\\n\\n\\\"I'm not familiar enough with reinforcement learning benchmarks to judge the quality of the experiments compared to the literature as a whole.\\\"\\n\\nThe goal of our experimental evaluation is to demonstrate the effectiveness of the proposed algorithm. We demonstrate that the effectiveness by comparing the proposed algorithm in case when the true backtracking env. was avaliable, as well as when we learned the backtracking model too. We compare our methods to the state-of-the-art SAC algorithm on MuJoCo tasks in OpenAI gym (Brockman et al., 2016) and in rllab (Duan et al., 2016). We use SAC as a baseline as it notably outperforms other existing methods like DDPG, Soft-Q Learning and TD3. The results show that our method outperform on par with SAC in simple domains like swimmer, walker etc. They also provide evidence that the proposed method outperform SAC in challenging high dimensional domains like humanoid and Ant (Figure 7, Main Paper).\\n\\n\\\"It is not entirely obvious to me what parametric models are used for the backtracking distributions.\\\"\", \"the_backtracking_model_we_used_for_all_the_experiments_consisted_of_two_multi_layer_perceptrons\": \"one for the backward action predictor Q(a_t | s_t+1) and one for the backward state predictor Q(s_t | a_t, s_t+1). Both MLPs had two hidden layers of 128 units. The action predictor used hyperbolic tangent units while the inverse state predictor used ReLU units. Each network produced as output the mean and variance parameters of a Gaussian distribution. For the action predictor the output variance was fixed to 1. For the state predictor this value was learned for each dimension. We have also mentioned this in the appendix.\"}", "{\"title\": \"Yes, We can Train the backtracking model offline by watching demonstration. (2/2)\", \"comment\": \"\\\"Would it still work if to train the backtracking model offline by, say, watching demonstration?\\\"\\n\\nAgain, The reviewer raises a good point. Yes, it's possible to train the backtracking model offline by watching demonstrations. And hence, the proposed method can also be used for imitation learning. In order to show something like this, we conducted the following experiment. We trained an expert policy on Mujoco domain (Ant) using TRPO. Using the trained policy, we sample expert trajectories, and using these trajectories we learned the backtracking model in an offline mode. Now, we trained another policy from scratch, but at the same time we sample the traces from the backtracking model. This method is about(2.5)x more sample efficient as compared to PPO, with the same asymptotic performance. We have not done any hyperparameter search right now, and hence it should be possible to improve these results.\\n\\nWe conducted additional experiments for Atari domain(Seaquest) too. 
For atari we trained an expert policy using a2c. And then using samples from the expert policy we learned a backtracking model. And then we use this backtracking model for learning a new policy from scratch. This method is about(1.8)x more sample efficient as compared to A2C, with the same asymptotic performance. These results are very preliminary but it shows that it may be possible to train the backtracking model in offline mode, and use it for learning a new policy from scratch. \\n\\nPlease let us know if anything is unclear here, if you\\u2019re uncertain about part of the argument, or if there is any other comparison that would be helpful in clarifying things more.\"}", "{\"title\": \"Comparison to Forward Model (1/2)\", \"comment\": \"The authors thank the reviewer for the positive and constructive feedback. We appreciate that the reviewer finds that our method is clearly explained.\\n\\n\\\"how does the backtracking model correspond to a forward-model? And it doesn't seem to be contradictory to me that the two can work together.\\\"\\n\\nThe reviewer raises a good point. This is indeed very useful. The Dyna algorithm uses a forward model to generate simulated experience that could be included in a model-free algorithm. This method was used to work with deep neural network policies, but performed best with models which are not neural networks (Gu et al., 2016a). Our intuition (and as we empirically show, Figure 19, Section H of Appendix) says that it might be better to generate simulated experience from a backtracking model (starting from a high value state) as compared to forward model, just because we know that traces from the backtracking model are good traces, as they lead to high value state, which is not necessarily the case for the simulated experience from a forward model.\\n\\nWe have added Figure 16 in Appendix( Section G) where we evaluate the Forward model with On-Policy TRPO on Ant and Humanoid Mujoco tasks. We were not able to get any better results on with forward model as compared to the Baseline TRPO, which is consistent with the findings from (Gu et al., 2016a).\\n\\nIn essence, building the backward model is necessarily neither harder nor easier. Realistically, building any kind of model and having it be accurate for more than, say, 10 time steps is pretty hard. But if we only have 10 time steps of accurate transitions, it is probably better to take them backward model from different states as compared to from forward model from the same initial state. (as corroborated by the findings in Fig 16 of Appendix G, and Figure 19 of Appendix H). \\n\\nSomething which remains as a part of future investigation is to train the forward model and backtracking model jointly. As the backtracking model is tied to high value states, the forward model could extract the intended goal value from the high value state. When trained jointly, this should help the forward model learn some reduced representation of the state that is necessary to evaluate the reward. Ultimately, when planning, we want the model to predict the goal accurately, which helps to optimize for this \\u201dgoal-oriented\\u201d behaviour directly. This also avoids the need to model irrelevant aspects of the environment. 
We also mention this in Appendix (Section G).\\n\\n\\n[1] (Gu et al, 2016) Continuous Deep Q-Learning with Model-based Acceleration http://proceedings.mlr.press/v48/gu16.html\"}", "{\"title\": \"Thanks for your feedback!\", \"comment\": \"We thank the reviewer for the positive and constructive feedback.\\n\\n\\\"I would like to see experiments to show the computational time for these components.\\\"\\n\\nIf a backtracking model model is available (like in the maze example), then there is no extra computation time, but in the case where we have to learn a bw model, learning a bw model requires more updates compared to only earning a policy (but a similar number of updates as compared to learning a forward model, i.e., dynamics model of the environment).\\n\\nPlease let us know if anything is unclear here, or if there is any other comparison that would be helpful in clarifying things more.\"}", "{\"title\": \"Well-presented idea but evaluation seems preliminary\", \"review\": \"Revision:\\nThe authors have thoroughly addressed my review and I have consequently updated my rating accordingly.\", \"summary\": \"Model-free reinforcement learning is inefficient at exploration if rewards are\\nsparse / low probability.\\nThe paper proposes a variational model for online learning to backtrack\\nstate / action traces that lead to high reward states based on best previous\\nsamples.\\nThe backtracking models' generated recall traces are then used to augment policy\\ntraining by imitation learning, i.e. by optimizing policy to take actions that\\nare taken from the current states in generated recall traces.\\nOverall, the methodology seems akin to an adaptive importance sampling\\napproach for reinforcement learning.\", \"evaluation\": \"The paper gives a clear (at least mathematically) presentation of the core idea\\nbut it some details about modeling choices seem to be missing.\\nThe experimental evaluation seems preliminary and it is not fully evident when\\nand how the proposed method will be practically relevant (and not relevant).\\n\\nMy knowledgable of the previous literature is not sufficient to validate the\\nclaimed novelty of the approach.\", \"details\": \"The paper is well written and easy to follow in general.\\n\\nI'm not familiar enough with reinforcment learning benchmarks to judge the\\nquality of the experiments compared to the literature as a whole.\\nAlthough there are quite a few experiments they seem rather preliminary.\\nIt is not clear whether enough work was done to understand the effect of the\\nmany different hyperparameters that the proposed method surely must have.\\n\\nThe authors claim to show empirically that their method can improve sample\\nefficiency.\\nThis is not necessarily a strong claim as such and could be achieved on\\nrelatively simple tests.\\nIn the discussion the authors claim their results indicate that their approach\\nis able to accelearte learning on a variety of tasks, also not a strong claim.\\n\\nThe paper could be improved by adding a more clear explanation of the exact way\\nby which the method helps with exploration and how it affects finding sparse\\nrewards (based on e.g. 
Figure 1).\\nIt seems that since only knowledge of seen trajectories can be used to generate\\npaths to high reward states it only works for generating new trajectories\\nthrough previously visited states.\", \"questions_that_could_be_clarified\": \"- It is not entirely obvious to me what parametric models are used for the\\nbacktracking distributions.\\n- Does this method not also potentially hinder exploration by making the agent\\nlearn to go after the same high rewards / Does the direction of the variational\\nproblem guarantee coverage of the support of the R > L distribution by samples?\\n- What would be the effect of a hyperparameter that balances learning the recall\\ntraces and learning the true environment?\\n- Are there also reinforcement learning tasks where the proposed methods'\\nimprovement is marginal and the extra modeling effort is not justified (e.g.\\ndue to increase complexity).\", \"page_1\": \"iwth (Typo)\", \"page_2\": \"r(s_t) -> r(s_t, a_t)\", \"page_6\": \"Prioritize d (Typo)\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Review\", \"review\": \"This paper nicely proposes a back-tracking model that predicts the trajectories that may lead to high-value states. The proposed approach was shown to be effective in improving sample efficiency for a number of environments and tasks.\\n\\nThis paper looks solid to me, well-written motivation with theoretical interpretations, although I am not an expert in RL.\\n\\nComments / questions:\\n- how does the backtracking model correspond to a forward-model? And it doesn't seem to be contradictory to me that the two can work together.\\n- could the authors give a bit more explanation on why the backtracking model and the policy are trained jointly? Would it still work if to train the backtracking model offline by, say, watching demonstration?\\n\\nOverall this looks like a nice paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"adding another direction to the model increases the sampling efficiency\", \"review\": \"The authors propose a bidirectional model for learning a policy. In particular, a backtracking model was proposed to start from a high-value state and sample back the sequence of actions and states that could lead to the current high-value state. These traces can be used later for learning a good policy. The experiments show the effectiveness of the model in terms of increase the expected rewards in different tasks. However, learning the backtracking model would add some computational efforts to the entire learning phase. I would like to see experiments to show the computational time for these components.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
BkesGnCcFX
Learning Goal-Conditioned Value Functions with one-step Path rewards rather than Goal-Rewards
[ "Vikas Dhiman", "Shurjo Banerjee", "Jeffrey M Siskind", "Jason J Corso" ]
Multi-goal reinforcement learning (MGRL) addresses tasks where the desired goal state can change for every trial. State-of-the-art algorithms model these problems such that the reward formulation depends on the goals, to associate them with high reward. This dependence introduces additional goal reward resampling steps in algorithms like Hindsight Experience Replay (HER) that reuse trials in which the agent fails to reach the goal by recomputing rewards as if reached states were pseudo-desired goals. We propose a reformulation of goal-conditioned value functions for MGRL that yields a similar algorithm, while removing the dependence of reward functions on the goal. Our formulation thus obviates the requirement of reward-recomputation that is needed by HER and its extensions. We also extend a closely related algorithm, Floyd-Warshall Reinforcement Learning, from tabular domains to deep neural networks for use as a baseline. Our results are competitive with HER while substantially improving sampling efficiency in terms of reward computation.
[ "Floyd-Warshall", "Reinforcement learning", "goal conditioned value functions", "multi-goal" ]
https://openreview.net/pdf?id=BkesGnCcFX
https://openreview.net/forum?id=BkesGnCcFX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Syeog5bvg4", "rkgUkUexxN", "HJgCgQUm1E", "H1lj7OnYAQ", "HJgCIDZICX", "rkxw606xAQ", "SkgRQQbMa7", "rkxW9NfW6m", "SkgVLVMZTm", "S1loBx0K27", "r1l2zvNt3X", "BJl_G5df5m", "H1gzrfOGcQ" ], "note_type": [ "official_comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1545177587454, 1544713694462, 1543885558399, 1543256098858, 1543014229904, 1542672063190, 1541702437809, 1541641352549, 1541641291573, 1541165122824, 1541125907804, 1538587152277, 1538585145653 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1298/Authors" ], [ "ICLR.cc/2019/Conference/Paper1298/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1298/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1298/Authors" ], [ "ICLR.cc/2019/Conference/Paper1298/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1298/Authors" ], [ "ICLR.cc/2019/Conference/Paper1298/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1298/Authors" ], [ "ICLR.cc/2019/Conference/Paper1298/Authors" ], [ "ICLR.cc/2019/Conference/Paper1298/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1298/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1298/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"title\": \"Response\", \"comment\": \"We address your comments point by point.\\n\\n> I maintain that they key idea behind this paper is not new. On top of that, the way it is presented obfuscates what is really going on. What is the justification for adding the 1-step loss to Q-learning with a constant reward for all transitions? Why will this converge at all? HER just applies Q-learning with modified goals/reward, and Q-learning comes with theoretical guarantees. \\n\\nThe surprising result that a constant reward for all transition converges at all is the main message behind our work. The reason why it converges is because we estimate the path-reward (as done by Kaelbling 1993) instead of future-reward (as done by Q-learning/HER). Our work builds upon Kalebling's work which does not have theoretical guarantees yet, but I do not see any reason why the formulation is averse to theoretical guarantees.\\n\\n> I am not proposing any alternative solutions to the problem. Again, to me the key idea behind HER is that if you've reached state s, then you've achieved the goal of reaching state s. This means that the agent can get a reward of of 1 in the 0/1 reward formulation or a reward of 0 in the -1/0 formulation and the state is considered terminal. There is no need to check for equality of states and time indices. \\n\\nWhat you are describing is the \\\"final\\\" strategy described in HER paper Section 4.5 which we performs worse than \\\"future\\\" strategy. We use \\\"future\\\" strategy in all our experiments. In \\\"future\\\" strategy you have to either compare against the time-index or the goal itself. Moreover, 0 goal reward is different from no-goal reward which is what we propose.\\n\\n> I am not proposing any modifications of HER. I am simply pointing out that the idea that you can do goal-based learning without recomputing rewards is both in the \\u201cHindsight Experience Replay\\u201d paper and in the \\u201cLearning to Achieve Goals\\u201d paper. To me it is the key idea behind HER. If you've reached a state s then you've achieved the goal of reaching state s. 
\\n\\nThe idea is there in \\\"Learning to Achieve Goals\\\" paper but not in \\\"Hindsight Experience Replay\\\" paper. The idea is not whether you have achieved the goal of reaching state s, but the idea is whether you should get a high-goal-reward on reaching the state s. We maintain that R(s, a, g) = 0 if s == g else -1 is unnecessary and R(s, a) = -1 is enough because only the path-rewards to reach the goal matter, not the eventual \\\"0\\\" reward that you get on reaching the goal. \\n\\nSince our experiments establish that triangular inequality from \\\"Learning to Achieve Goals\\\" is not helpful but the one-step loss is helpful, we bring the useful ideas from \\\"Learning to Achieve Goals\\\" to forefront in deep learning context. This is another way to look at our contributions.\\n\\nWe again thank you for your detailed comments and discussion.\"}", "{\"metareview\": \"This manuscript presents a reinterpretation of hindsight experience replay which aims to avoid recomputing the reward function, and investigates Floyd-Warshall RL in the function approximation setting.\\n\\nThe paper was judged as relatively clear. The authors report a slight improvement in computational cost, which some reviewers called into question. However, all of the reviewers pointed out that the experimental evidence for the method's superiority is weak. Two reviewers additionally raised that this wasn't significantly different than the standard formulation of Hindsight Experience Replay, which doesn't require the computation of rewards for relabeled goals.\\n\\nUltimately, reviewers were in agreement that the novelty of the method and quality of the obtained results rendered the work insufficient for publication. The Area Chair concurs, and urges the authors to consider the reviewers' pointers to the existing literature in order to clarify their contribution for subsequent submission.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Important subject matter but novelty & results insufficient for acceptance.\"}", "{\"title\": \"clarification\", \"comment\": \"> We agree that your proposed modification to HER would negate the reward re-computation requirement. However, HER does not do so. This perspective was influenced by the observation that reward re-computations are redundant. This observation must be non-trivial because HER and its many extensions have not accounted for it.\\n\\nI am not proposing any modifications of HER. I am simply pointing out that the idea that you can do goal-based learning without recomputing rewards is both in the \\u201cHindsight Experience Replay\\u201d paper and in the \\u201cLearning to Achieve Goals\\u201d paper. To me it is the key idea behind HER. If you've reached a state s then you've achieved the goal of reaching state s. \\n\\n> While your solution would work, we think our solution is much simpler to implement because it does not require checking whether sampled t == T. The replay buffer is sampled as is, with only a loss term added to the loss function. These are different ways to instantiate the same ideas which we believe to be non-trivial.\\n\\nI am not proposing any alternative solutions to the problem. Again, to me the key idea behind HER is that if you've reached state s, then you've achieved the goal of reaching state s. This means that the agent can get a reward of of 1 in the 0/1 reward formulation or a reward of 0 in the -1/0 formulation and the state is considered terminal. 
There is no need to check for equality of states and time indices. \\n\\nI maintain that they key idea behind this paper is not new. On top of that, the way it is presented obfuscates what is really going on. What is the justification for adding the 1-step loss to Q-learning with a constant reward for all transitions? Why will this converge at all? HER just applies Q-learning with modified goals/reward, and Q-learning comes with theoretical guarantees.\"}", "{\"title\": \"There are multiple ways of instantiating an idea and ours is one\", \"comment\": \"We agree that your proposed modification to HER would negate the reward re-computation requirement. However, HER does not do so. This perspective was influenced by the observation that reward re-computations are redundant. This observation must be non-trivial because HER and its many extensions have not accounted for it.\\n\\n> I think it is clear that by replacing the goal with the final (reached) state s_T one can just give a reward of 0 at the final transition and -1 to the preceding ones. There is no need to compute rewards or compare any states.\\n\\nWhile your solution would work, we think our solution is much simpler to implement because it does not require checking whether sampled t == T. The replay buffer is sampled as is, with only a loss term added to the loss function. These are different ways to instantiate the same ideas which we believe to be non-trivial.\\n\\nWe thank you for your detailed comments and feedback.\"}", "{\"title\": \"response\", \"comment\": \"Thank you for the clarifications. I think they confirmed my understanding of the paper.\\n\\nI maintain that the idea that you can do goal-based learning without rewards is both in the \\u201cHindsight Experience Replay\\u201d paper and in the \\u201cLearning to Achieve Goals\\u201d paper.\\n\\nHere\\u2019s what the HER paper says about a trajectory s_1, \\u2026, s_T for a goal g (top of page 4):\\n\\u201cThe pivotal idea behind our approach is to re-examine this trajectory with a different goal \\u2014 while this trajectory may not help us learn how to achieve the state g, it definitely tells us something about how to achieve the state s_T . This information can be harvested by using an off-policy RL algorithm and experience replay where we replace g in the replay buffer by s_T\\u201d.\\nI think it is clear that by replacing the goal with the final (reached) state s_T one can just give a reward of 0 at the final transition and -1 to the preceding ones. There is no need to compute rewards or compare any states.\\n\\nThis is exactly what your one-step loss does. It is equivalent to a Q-learning update on each transition s,a,s\\u2019 with the goal relabeled to s\\u2019. The transition becomes terminal since the goal is reached. Presenting the one-step loss as something new is not accurate.\\n\\nHaving said that, there are multiple ways of instantiating this idea. The HER paper chooses to relabel goals for a trajectories. So in a sense the one-step loss is applied only to the last transition. You propose to apply the relabeling to all transitions. \\u201cLearning to Achieve Goals\\u201d performs all goal updating so it will also relabel each transition with the achieved state s\\u2019 as the goal.\\n\\nComparing these approaches in terms of performance could be interesting, but as your results suggest there is not really a difference between HER and your approach in terms of data efficiency. 
I don\\u2019t buy the comparison in terms of \\u201creward computations\\u201d because HER can also be implemented in a way where rewards don\\u2019t need to be recomputed.\"}", "{\"title\": \"One-step loss is applicable to all transitions not just terminal condition\", \"comment\": \"> The main contribution of the paper appears to be ... equal to the reward at that timestep.\\n\\nThe one-step loss is, in fact, incorporated for every transition between states, not just the termination condition when the goal is achieved. An alternative perspective of one-step loss is one-step-episode Q-Learning. In other words, the one-step loss function is equivalent to treating every state transition as a full episode and the terminating condition. In the paper we have updated the \\\"one-step loss\\\" section to include this perspective. \\n\\n> It's not clear to me how this is fundamentally different than HER ... the transition achieves the resampled goal.\\n\\nAll our comparisons are already with \\\"sparse reward\\\" R(s,a,g) = (0 if s == g else -1) implementation of HER. As far as we can understand, in your proposed formulation the reward should be R(s,a,g) = (1 if s == g else 0) which is shifted by a constant factor. The sparse reward formulation still possesses the unnecessary dependence on the goal whose redundancy and removal is the emphasis of our work.\\n\\n> Is this not essentially identical to the proposal in this paper? ... deserves an entire paper.\\n\\nNo, this is not identical to the paper. At no point in our algorithm do we check the condition s == g. The proposed one-step loss that learns one-step reward Q(s_t, a_t, g=s_{t+1}) = r_t as we apply one-step loss to every transition. One-step loss is therefore task independent. As mentioned previously, this can also be thought of as one-step hindsight experience replay where the achieved goal at every step is treated as the desired goal.\\n\\n> The authors claim the main advantage here is avoiding recomputation of the reward function for resampled goals ... worth avoiding?\\n\\nIn machine learning, the sample complexity is always distinguished from computation complexity. The only case where the two are comparable is when the samples are generated from simulations which is, admittedly, true for our experiments. However, our proposed improvement is general enough to be applicable to non-simulation experiments.\\n\\nIt is a consequence of this task-dependent reward formulation that it can be re-sampled cheaply, hence, the computational cost is improvement is marginal. But we eliminate a redundancy common to the HER algorithm and its derivatives. With the massive popularity of HER (107 citations and counting), we believe that this is a worthwhile contribution to bring to the attention of the RL community. \\n\\nConsider the example of an agent navigating a maze where the goal is specified in the form of an image. The semantic comparison of the observed image with the goal image is an expensive operation that will require separate training for goal-dependent reward formulation [1]. However, in our proposed formulation, the comparison operation (s == g) in the reward formulation is not needed thereby eliminating the need of another learning module. \\n\\n\\n> All of the experiments in this paper use a somewhat unusual task setup where every timestep has a reward of -1. \\n\\nThis unusual reward formulation is possible because of our contribution (one-step loss). 
Hence, it is only true for the experiments that are referred to with \\\"Ours\\\" label. All the baselines (HER and FWRL) and \\\"Ours (goal rewards)\\\" operate on the reward structure for HER which is R=(0 if s==g else -1).\\n\\n> Have the authors considered other reward structures, such as the indicator function R=(1 if s==g else 0) or a distance-based dense reward?\\n> Would this proposal work in these cases? If not, how significant is a small change to HER if it can only work for one specific reward function?\\n\\nWe have considered and we are advocating against such reward structures because of their goal dependence. In fact in one experiment we run our algorithm with the reward structure R=(0 if s==g else -1) which is equivalent to yours with a constant shift. These results can be found in Fig. 4(b), labeled as \\\"Ours (goal rewards)\\\". \\n\\nDistance-based dense reward is by definition goal dependent. Our contribution is to eliminate this dependence. RL on dense rewards is easier than sparse rewards. Hence, we do not believe that distance-based reward adds much to our contribution. We do note that our method does work with goal based sparse rewards(Fig. 4b) and hence we would expect to continue to work with dense rewards. \\n\\n> The reconsideration of Floyd-Warshall RL ... recommend this for publication.\\n\\nWe analyze FWRL and added the ablation study of loss function in Appendix Figure 6.\\nIt is clear that FWRL inspired loss function do not contribute to better\\nlearning. Instead, they hurt the performance. We think this is because Bellman inspired loss already captures the information that FWRL inspired constraints intend to capture.\\n\\n[1] Nikolay Savinov, Alexey Dosovitskiy, Vladlen Koltun. \\\"Semi-Parametric Topological Memory for Navigation\\\". In ICLR 2018\"}", "{\"title\": \"Review\", \"review\": \"This paper presents a reinterpretation of hindsight experience replay (HER) that avoids recomputing the reward function on resampled hindsight goals in favor of simply forcing the terminal state flag for goal-achieving transitions, referred to by the authors as a \\\"step loss\\\".\\nThe new proposal is evaluated on two goal-conditioned tasks from low-dimensional observations, and show modest improvements over HER and a function-approximation version of Floyd-Warshall RL, mostly as measured against the number of times the reward function needs to be recomputed.\", \"pros\": [\"minor improvement in computational cost\", \"investigation of classical FWRL technique in context of deep RL\"], \"cons\": [\"computational improvement seems very minor\", \"sparse-reward implementations of HER already do essentially what this paper proposes\"], \"comments\": \"The main contribution of the paper appears to be the addition of what the authors refer to as a \\\"step loss\\\", which in this case enforces the Q function to correctly incorporate the termination condition when goals are achieved. I.E. the discounted sum of future rewards for states that achieve termination should be exactly equal to the reward at that timestep.\\n\\nIt's not clear to me how this is fundamentally different than HER. One possible \\\"sparse reward\\\" implementation of HER involves no reward function recomputation at all, instead simply replacing the scalar reward and termination flag for resampled transitions with the indicator function for whether the transition achieves the resampled goal.\\nIs this not essentially identical to the proposal in this paper? 
I would consider this a task-dependent implementation detail for an application of HER rather than a research contribution that deserves an entire paper.\\n\\nThe authors claim the main advantage here is avoiding recomputation of the reward function for resampled goals.\\nI do not find this particularly compelling, given that all of the evaluations are done in low-dimensional state space: reward recomputation here is just a low-dimensional euclidean distance computation followed by a simple threshold.\\nIn a world where we're doing millions of forward and backward passes of large matrix multiplications, is this a savings that really requires investigation?\\nIt is somewhat telling that the results are compared primarily in terms of \\\"# of reward function evaluations\\\" rather than wall time. If the savings were significant, I expect a wall time comparison would be more compelling.\\nMaybe the authors can come up with a situation in which reward recomputation is truly expensive and worth avoiding?\\n\\nAll of the experiments in this paper use a somewhat unusual task setup where every timestep has a reward of -1. Have the authors considered other reward structures, such as the indicator function R=(1 if s==g else 0) or a distance-based dense reward?\\nWould this proposal work in these cases? If not, how significant is a small change to HER if it can only work for one specific reward function?\", \"conclusion\": \"In my view, the main contribution is incremental at best, and potentially identical to many existing implementations of HER.\\nThe reconsideration of Floyd-Warshall RL in the context of deep neural networks is a refreshing idea and seems worth investigating, but I would need to see much more careful analysis before I could recommend this for publication.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Our method consistent performs better than baselines when computered in terms of distance from the goal and reward computation\", \"comment\": \"Thank you for your feedback.\\n\\n1. The experimental results are mixed, and do not convincingly demonstrate the\\n effectiveness/superiority of the proposed method.\\n\\nThe results are mixed when the learning curves are compared with respect to the epochs (the number of transition samples) that intentionally does not take reward-recomputation in to account. \\n\\nWhen this computation is taken in to account, our algorithm comprehensively improves upon the baselines in 6 out of 8 experiments. To further highlight these differences we magnify our reward-recomputation plots to eliminate sections where curves overlap and are non-informative. These changes can be found in Figure 2 and 3. \\n\\nWe further reiterate that reward recomputation\\ncost can be significant dependent upon the environment and setup. In cases when\\nthe reward-computation depends upon collisions and haptic feedback of real\\nrobots, the reward recomputation may even be impossible without re-running the\\nexperiment. Hence reducing reward-computation based on a simple loss term is an\\nimportant contribution.\\n\\n\\n2. 
The idea of the proposed method is relatively simple, and is not theoretically justified.\\n\\nThe main contribution of this paper is to show that goal-conditioned value functions can be learned without requiring goal-reward.\\nWe believe that the simplicity of this proposed idea is the beauty of the method leading to significant changes in performance of the algorithm when reward recomputation is taken in to account.\\n\\nOur algorithm builds upon HER which does not itself possess theoretical guarantees. We would be able to addess this point specifically if the reviewer could clarify what kind of theoretical justification they would expect to see.\"}", "{\"title\": \"In our reward formulation we do not get 0 reward, it is always -1\", \"comment\": \"There are two main reasons for the confusion about the contributions of\\nthis work. \\n\\nFirst, our reward formulation is different from that of Hindsight Experience Replay\\n(HER). In HER, the agent receives -1 reward for all state transitions except on\\nreaching the goal when it receives 0 reward. In contrast, for our reward\\nformulation the agent receives -1 reward for all state transitions including\\nwhen agent reaches and continues to stay at the goal.\\n\\nSecond, our reward formulation is atypical with respect to conventional\\nReinforcement Learning (RL). In conventional RL, a high reward is used to\\nspecify the desired goal (goal-reward). However, this goal-reward is not\\nnecessary in goal-conditioned RL because the goal specification is already given\\nat the start of every episode. We believe that this result is counter-intuitive\\nand will be interesting to the RL community.\\n\\nWe clarify the reviewer's concerns and\\nedit our draft to minimize chances of similar confusion.\", \"clarity\": \"1. The main difference between HER, FWRL and our algorithm lies in the choice of\\n loss terms used. HER uses Eq (3), FWRL uses Eq (3) + L_up + L_lo, and Our\\n algorithm uses Eq (3) + L_step as shown in the pseudo-code. Another difference\\n is due to reward formulation. Because our rewards are independent of reaching\\n the goal, we do not need to recompute rewards. We have added the description\\n about these differences in the appendix to highlight them.\\n\\n2. We have introduced the requested citations at appropriate places in the\\n paper. \\n\\n Since [1] introduced the idea of FWRL before Dhiman et. al. 2018,\\n we replace the attributions accordingly in the paper. We further add\\n discussion points specific to their algorithm in the Related Work and\\n One-Step Loss section.\\n\\nNovelty and Significance\\n1. Our contribution is learning *without* using goal-rewards *using* the\\n shortest path perspective. Our secondary contribution is to extend [1] to\\n deep neural networks.\\n\\n2. As mentioned above, our reward is always -1 *even when* current state is same\\n as the goal state. \\n\\n Similar to HER, our goal states are not absorbing/terminal. Instead the\\n episodes are of fixed number of steps and the agent is encouraged to stay in\\n the goal state for as long as possible. This is how the replay buffer is\\n populated and how the average episode reward is computed. 
However, the\\n objective maximized is equivalent to treating this fixed-length episode\\n problem as if the episodes are terminating on reaching the goal.\\n\\n To further clarify this in the paper, we have added reward structure\\n details to the Introduction (section 1, paragraph 3) and the\\n Experiments section (section 5, end of paragraph 1).\\n\\n3. One-step loss is different from the terminal step of both Q-Learning and [1].\\n\\n One-step loss is different from the terminal step of Q-Learning because it is\\n applied to every state transition rather than just the terminal step of the\\n episode. Having said that, it is indeed equivalent to Q-Learning if every\\n state transition is viewed as a one-step episode with the reached state as\\n the pseudo-goal. Correspondingly, we have updated the manuscript in both the\\n introduction and the one-step loss section to include the one-step-episode\\n Q-Learning perspective.\\n\\n One-step loss is also different from the terminal step of [1].\\n Referring to Section 3 of [1], we see the one-step loss (Eq. 8)\\n as an alternative to the terminal condition DG*(s, a, g) = 0 if s = g in the\\n recursive definition of DG*(s, a, g). As an alternative, one-step loss \\n translates to DG*(s_t, a_t, g_{t+1}) = -1, for all transitions (s_t, a_t ->\\n g_{t+1}), i.e. it removes the dependence on checking s=g. Although it serves\\n the same purpose as the terminal condition in the recursive definition, the\\n condition is mathematically different and requires the different\\n assumption that the one-step path is the highest-reward path between s_t and g_{t+1}. \\n \\n \\n4. As stated earlier, our reward is independent of the desired goal. The reward\\n re-computation for the pseudo-goals becomes unnecessary because the reward\\n does not depend upon the check whether the current state is the same as the desired goal.\\n\\n To further highlight the saved reward computation, we magnify our\\n reward-computation plots, removing the uninformative parts of the plots where\\n the curves overlap.\", \"overall_quality\": \"(A) Novelty: As argued above, our proposed one-step loss is novel and so is the\\nextension of [1] from the tabular domain to deep learning.\\n\\n(B) Significance \\n (a) The counter-intuitive result that goal-conditioned RL does not need goal\\n reward is worth bringing to the attention of the RL community.\\n (b) The absence of the requirement of reward-recomputation is significant\\n because in real robotics experiments, the reward computation may not be\\n possible without re-running the entire experiment.\\n\\n[1]: Kaelbling, Leslie Pack. \\\"Learning to achieve goals.\\\" IJCAI. 1993.\"}
There is an algorithm box for the proposed method in the appendix, but it\\u2019s not clear how the method differs from the FWRL baseline.\\n\\nAnother major problem is that the paper does a poor job of citing earlier related work on RL. DQN is introduced without mentioning or citing Q-learning. Experience replay is mentioned without citing the work of Long-Ji Lin. There\\u2019s no mention of earlier work on shortest-path RL from LP Kaelbling from 1993.\", \"novelty_and_significance\": \"After reading the paper I am not convinced that there\\u2019s anything substantially new in this paper. Here are my main concerns:\\n\\n1) The shortest path perspective for goal-reaching was introduced in \\u201cLearning to Achieve Goals\\u201d by LP Kaelbling [1]. This paper should be cited and discussed.\\n\\n2) I am not convinced that the proposed formulation is any different than what is in Hindsight Experience Replay (HER) paper. Section 3.2 of the HER paper defines the reward function as -1 if the current state is not the same as the goal and 0 if the current state is the same as the goal. Isn\\u2019t this exactly the cost-to-go/shortest path reward structure that is used in this paper?\\n\\n3) This paper claims that the one-step loss (Equation 8) is new, but it is actually the definition of the Q-learning update for transitioning to a terminal state. Since goal states are absorbing/terminal, any transition to a goal state must use the reward as the target without bootstrapping. So the one-step loss is just Q-learning and is not new. This is exactly how it is described in Section 3 of [1].\\n\\n4) The argument that the proposed method requires fewer reward evaluations than FWRL or HER seems flawed. HER defines the reward to be -1 if the current state and the goal are different and 0 if they are the same. As far as I can tell this paper uses the same reward structure, so how is it saving any computation?\\n\\nCan the authors comment on these points and clarify what they see as the novelty of this work?\", \"overall_quality\": \"Unless the authors can convince me that the method is not equivalent to existing work I don\\u2019t see enough novelty or significance for an ICLR paper.\\n\\n[1] \\u201cLearning to Achieve Goals\\u201d LP Kaelbling, 1993.\", \"rating\": \"1: Trivial or wrong\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Evaluation and judgement\", \"review\": \"The paper presents an approach for an approach to addressing multi-goal reinforcement learning, based on what they call \\\"one-step path rewards\\\" as an alternative to the use of goal conditioned value function.\\nThe idea builds on an extension of a prior work on FWRL. \\nThe paper presents empirical comparison of the proposed method with two baselines, FWRL and HER. \\nThe experimental results are mixed, and do not convincingly demonstrate the effectiveness/superiority of the proposed method. \\nThe idea of the proposed method is relatively simple, and is not theoretically justified. \\n\\nBased on these observations, the paper falls short of the conference standard.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"We will update the attribution\", \"comment\": \"Thank you for your comment. We were made aware of this paper recently. 
We will replace the attribution for the tabular version of the path-rewards idea with Kaelbling (1993) in an updated version of the manuscript.\"}", "{\"comment\": \"The paper cites a recent arXiv paper for the concept of employing the Floyd-Warshall algorithm in goal-based reinforcement learning. This was actually introduced into the reinforcement learning literature 25 years ago in \\\"Learning to Achieve Goals\\\" https://people.csail.mit.edu/lpk/papers/ijcai93.ps\\n by Leslie Pack Kaelbling, in IJCAI 93. However, the extension to the non-tabular case presented here does sound interesting.\", \"title\": \"Floyd-Warshall & RL\"}" ] }
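The exchange in the record above contrasts two reward conventions for hindsight goal relabeling: the sparse 0/-1 reward of the original HER formulation, which has to be recomputed for every substitute goal, and the constant -1 path reward the rebuttal argues for, which needs no recomputation when goals are relabeled. The sketch below is only an illustration of that distinction under simplifying assumptions (goals live in the same low-dimensional space as the states, and success is a Euclidean-distance threshold `eps`); the function and parameter names are invented for the example and are not taken from the paper or from any particular HER implementation.

```python
import numpy as np

def her_relabel(episode, rng, k=4, path_reward=False, eps=0.05):
    """Relabel transitions with future achieved states as substitute goals.

    episode: list of (state, action, next_state) arrays; for simplicity the
             goal space is assumed to coincide with the state space.
    path_reward: if True, use the constant -1 reward discussed above, so no
                 reward is recomputed for the substitute goals; if False,
                 recompute the sparse HER-style reward (-1 until the goal is
                 reached, 0 once within eps of it).
    """
    relabeled = []
    T = len(episode)
    for t, (s, a, s_next) in enumerate(episode):
        for i in rng.integers(t, T, size=k):      # sample future time steps
            g = episode[i][2]                     # achieved state as pseudo-goal
            if path_reward:
                r = -1.0                          # constant: nothing to recompute
            else:
                r = 0.0 if np.linalg.norm(s_next - g) < eps else -1.0
            relabeled.append((s, a, g, r, s_next))
    return relabeled

# Toy usage on a random 2-D point-mass trajectory.
rng = np.random.default_rng(0)
states = np.cumsum(rng.normal(size=(10, 2)), axis=0)
episode = [(states[t], rng.normal(size=2), states[t + 1]) for t in range(9)]
print(len(her_relabel(episode, rng, path_reward=True)))   # 9 * k relabeled transitions
```

With `path_reward=True` the relabeling loop never touches the reward function, which is the saving the rebuttal quantifies as the number of reward function evaluations; with `path_reward=False` every substitute goal triggers one recomputation.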
H1gsz30cKX
Fixup Initialization: Residual Learning Without Normalization
[ "Hongyi Zhang", "Yann N. Dauphin", "Tengyu Ma" ]
Normalization layers are a staple in state-of-the-art deep neural network architectures. They are widely believed to stabilize training, enable higher learning rate, accelerate convergence and improve generalization, though the reason for their effectiveness is still an active research topic. In this work, we challenge the commonly-held beliefs by showing that none of the perceived benefits is unique to normalization. Specifically, we propose fixed-update initialization (Fixup), an initialization motivated by solving the exploding and vanishing gradient problem at the beginning of training via properly rescaling a standard initialization. We find training residual networks with Fixup to be as stable as training with normalization -- even for networks with 10,000 layers. Furthermore, with proper regularization, Fixup enables residual networks without normalization to achieve state-of-the-art performance in image classification and machine translation.
[ "deep learning", "residual networks", "initialization", "batch normalization", "layer normalization" ]
https://openreview.net/pdf?id=H1gsz30cKX
https://openreview.net/forum?id=H1gsz30cKX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJlihhJcDN", "S1gZEMNOVN", "rkg_dIN9eN", "Bkl6BCQJgE", "HJlQe0YjA7", "rJlrkyKsRm", "Skgn80Is0X", "r1gl9tKq07", "BJl7K-FcCm", "rJxL-c1V0m", "BJe83JIMC7", "SygoSH1b0m", "HJxywPsqaQ", "BJxPwIScaX", "B1lgruiDa7", "HJgkrvsDpm", "rkgrdHowTm", "BJeS4HjvpX", "r1xnufswaX", "B1xtAbjwaX", "SJlbdxjva7", "H1e_BZjmpQ", "H1eV6OUGpQ", "SyeHla6K37", "r1gTIYutnX", "Skl714KOnX" ], "note_type": [ "official_comment", "comment", "comment", "meta_review", "official_comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "comment", "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "comment", "official_review", "official_review", "official_review" ], "note_created": [ 1552706739214, 1549447721238, 1545385584129, 1544662597151, 1543376362561, 1543372509186, 1543364180499, 1543309704155, 1543307642754, 1542875646178, 1542770605741, 1542677827234, 1542268758564, 1542243935018, 1542072375811, 1542072118746, 1542071660679, 1542071596768, 1542070900397, 1542070737452, 1542070376953, 1541808448276, 1541724347535, 1541164269331, 1541142868903, 1541080027413 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1297/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1297/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "ICLR.cc/2019/Conference/Paper1297/Authors" ], [ "(anonymous)" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1297/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1297/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1297/AnonReviewer3" ] ], "structured_content_str": [ "{\"title\": \"An implementation of Fixup\", \"comment\": \"Hi,\\n\\nThanks for your interest in our work. A re-implementation based on our paper has been released at https://github.com/hongyi-zhang/Fixup\\n\\nBest,\\nHongyi, Yann and Tengyu\"}", "{\"comment\": \"This paper again shows the relevance of initialization and providing a proper variance flow through networks. This successfully allows to get rid of batch normalization without sacrificing performance. Although this work cites Glorot and He, it seems that they might have overseen that this idea originally stems from (LeCun et al., 1998). Also, this exact idea of replacing batch normalization with proper initialization and activation functions has already been presented for fully connected networks in (Klambauer et al., 2017). These seem to be two more relevant papers that have not been cited in this work.\\n\\nLeCun, Yann A., et al. \\\"Efficient backprop.\\\" Neural networks: Tricks of the trade. Springer, Berlin, Heidelberg, 1998. 9-48.\\nKlambauer, G\\u00fcnter, et al. \\\"Self-normalizing neural networks.\\\" Advances in Neural Information Processing Systems. 
2017.\", \"title\": \"Other prior work\"}", "{\"comment\": \"just checking for an update on this i would love to use your method in my work!\", \"title\": \"code\"}", "{\"metareview\": \"The paper explores the effect of normalization and initialization in residual networks, motivated by the need to avoid exploding and vanishing activations and gradients. Based on some theoretical analysis of stepsizes in SGD, the authors propose a sensible but effective way of initializing a network that greatly increases training stability. In a nutshell, the method comes down to initializing the residual layers such that a single step of SGD results in a change in activations that is invariant to the depth of the network. The experiments in the paper provide supporting evidence for the benefits; the authors were able to train networks of up to 10,000 layers deep. The experiments have sufficient depth to support the claims. Overall, the method seems to be a simple but effective technique for learning very deep residual networks.\\n\\nWhile some aspects of the network have been used in earlier work, such as initializing residual branches to output zeros, these earlier methods lacked the rescaling aspect, which seems crucial to the performance of this network.\\n\\nThe reviewers agree that the papers provides interesting ideas and significant theoretical and empirical contributions. The main concerns by the reviewers were addressed by the author responses. The AC finds that the remaining concerns raised by the reviewers are minor and insufficient for rejection of the paper.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-Review\"}", "{\"title\": \"Could you please clarify the first part? Reply to the second part.\", \"comment\": \"Thanks for your comments!\\n\\nOne of the authors here. I think you raised interesting questions in the first part, but am not sure what you mean exactly there. Am I correct that you would like to:\\n(1) see the result of a standard ResNet (i.e. with batch normalization layers) if we initialize the last gamma in each residual branch as 0;\\nand (2) know if we can (or why we cannot) train a residual network with standard initialization and no normalization by setting eta as 1/L?\\n\\nRegarding the second part, indeed Yang & Schoenholz (2017) provide a more detailed characterization of the gradient norms and other quantities, which we very much appreciate. By \\\"generality\\\" we mean our analysis in Section 2 applies to different weight initialization schemes (e.g. not necessarily i.i.d.; can even be data-dependent) except for the i.i.d. assumption on the last fully-connected layer, whereas previous work typically assumes some particular initialization scheme (e.g. Yang & Schoenholz (2017) studied i.i.d. Gaussian weight initialization).\\n\\nOn the other hand, our result in Section 2 does have limitations compared with Yang & Schoenholz (2017), in that it is a lower bound of gradient norm for certain layers. 
While it explains why gradient explosion happens in standard initialization, it does not tell us when gradient explosion is guaranteed to NOT happen, which is addressed in Yang & Schoenholz (2017) (though with additional assumptions).\\n\\nThat said, the main message we hope to convey (in Section 3 and Appendix B) is that when studying multi-layer neural networks, it may be more important to think about the scale of function update than the scale of gradients (though of course they are related). Similar analysis for multi-layer linear networks is present in e.g. (Arora et al., 2018); and the study of maximal stable learning rate in (Saxe et al., 2013) may be another related finding. We believe this is a good way to study the optimization of deep neural networks.\\n\\nArora, S., Cohen, N., & Hazan, E. (2018). On the optimization of deep networks: Implicit acceleration by overparameterization. arXiv preprint arXiv:1802.06509.\\nSaxe, A. M., McClelland, J. L., & Ganguli, S. (2013). Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv preprint arXiv:1312.6120.\"}", "{\"title\": \"Thanks for the suggestions\", \"comment\": \"We agree. We will place this bias module inside the residual branch in the next revision.\", \"also_thank_you_for_noting_this_detail____should_definitely_be_corrected\": \")\"}", "{\"comment\": \"Agreed, the correspondence is clearer when the bias is drawn in the residual branch instead of after the +. I saw that you just revised the manuscript, but you could consider making this change as well (since there is no real reason to draw it after the + instead of in the residual branch).\\n\\nAlso, as a minor comment, the \\\"\\u221aL\\\" in the diagram (\\\"scaled down by \\u221aL\\\") is a different color and font than the \\\"scaled down by.\\\"\", \"title\": \"Re:\"}", "{\"title\": \"Revised paper uploaded. New explanations and new results.\", \"comment\": \"Dear AC and anonymous reviewers,\\n\\nThanks for your helpful comments and suggestions! We have significantly revised the justification text of our method based on your feedback. While our method, experiments and existing analysis remain valid, we have added new results that we believe are worth noting:\\n\\n(1) We provide a top-down analysis for motivating the proposed method (see Section 3). We make efforts to rewrite Section 3 and believe now we have convincing justifications to explain our empirical success.\\n(2) To support (1), we derive two new theorems (see Appendix B) which we believe shed new lights on the understanding of neural network training.\\n(3) We add an ablation study section (see Appendix C.1) to show each part of the proposed method play a role in the overall performance.\\n(4) We rewrite the related work section based on the feedback we get since the original submission. In particular, we (i) explain the difference between ZeroInit and normalization methods, (ii) compare our analysis in Section 2 with previous theoretical work, and (iii) compare our proposed method with previous ResNet initialization in practice.\\n(5) Empirical results on Transformer are slightly improved (see Table 3). We also include ResNet-101 results on ImageNet (see Table 2).\\n\\nThanks again for your attention! We are happy to take any questions.\"}", "{\"comment\": \"Hi, thanks for your response.\\n\\nRegarding gamma=0 BN networks, I agree there is some theoretical motivation for your method compared to the Goyal et al. method. 
However, I would still be very curious to see the result of comparing to gamma=0 BN networks empirically, i.e repeat your suite of tests with the standard resnet but just initialize BN gamma = 0. Also, if your analysis is correct, that there can be problems if eta and L are both large, then why can't one just scale eta as 1/L, at least initially in training?\\n\\nRegarding your comments on Yang & Schoenholz (2017): Correct me if I'm wrong, but the \\\"Axiom 3.1\\\" of that paper seems only assumed for nice presentation. \\\"Axiom 3.2\\\" (gradient independence) indeed seems unreasonable a priori, but as demonstrated in many papers by now (Schoenholz et al. 2017, Xiao et al. 2018, Karakida et al. 2018, Amari et al. 2018, and so on), this assumption leads to highly accurate predictions of gradient norms and other quantities. So while I agree you do not assume certain things in your paper, you also do not get prediction for the mean gradient norms and other quantities that can be verified. Thus claiming \\\"generality\\\" in this scenario seems misleading. In terms of measuring and correcting for gradient explosion, for example, I would think it's much better to get mean predictions of gradient norms rather than bounds which could be vacuous.\\n\\nSchoenholz, Gilmer, Ganguli, Sohl-Dickstein 2017. Deep Information Propagation\\nXiao, Bahri, Sohl-Dickstein, Schoenholz, Pennington 2018. Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks\\nKarakida, Akaho, Amari 2018. Universal Statistics of Fisher Information in Deep Neural Networks: Mean Field Approach\\nAmari, Karakida, Oizumi 2018. Fisher Information and Natural Gradient Learning of Random Deep Networks\", \"title\": \"Thanks for your response.\"}", "{\"title\": \"Thanks for the rebuttal\", \"comment\": \"Many thanks for the rebuttal. After reading this and the other reviews, I'd be inclined to keep my score to \\\"accept\\\".\"}", "{\"title\": \"Yes, we will release the code\", \"comment\": \"Thanks for asking! Yes, we will release the code after the review period.\"}", "{\"comment\": \"Will you release the code for this paper? This would be helpful for reproducibility.\", \"title\": \"Code release\"}", "{\"title\": \"It's actually (1) removing extra multiplier(s) and (2) adding biases before conv layers (i.e. after ReLU)\", \"comment\": \"Thanks for asking!\\n\\nIt may appear as we are doing a reordering, but in fact the right of Figure 1 makes two changes to the middle of Figure 1:\\n\\n(1) Deleting extra multiplier(s) so that there is only one multiplier per residual branch. This is because the effect of two (or more) multipliers is similar to that of one multiplier, which is to influence the effective learning rate of the conv layers in the same branch.\\n\\n(2) Adding a bias before each conv layer (i.e. changing ReLU-Conv to ReLU-Bias-Conv). The intuitive justification is that the preferred input mean of the conv layer may be different from the preferred output mean of the ReLU, hence a bias parameter allows for more representation power to satisfy both preferences. This is similar to why a bias term is added before ReLU (e.g. 
in standard feed-forward networks, Conv-BN-ReLU module, as well as our Conv-Bias-ReLU module).\\n\\nFor additional justifications of (2), also note that there are debates about whether Conv-BN-ReLU or Conv-ReLU-BN is better in practice [1]; on the other hand, in [2, Figure 6 (d)] the authors find the best-performing residual branch to be \\\"BN-Conv-BN-ReLU-Conv-BN\\\". It may appear that the conclusion to draw from [2] is that one should use \\\"more batchnorm and less relu [3]\\\". However, if we remove the normalization layers in \\\"BN-Conv-BN-ReLU-Conv-BN\\\" and delete extra multipliers as per (1), we are left with:\\n\\\"Bias-Conv-Bias-ReLU-Conv-Multiplier-Bias\\\",\", \"which_is_indeed_very_similar_to_what_we_proposed_in_the_right_of_figure_1\": \"\\\"Bias-Conv-Bias-ReLU-Bias-Conv-Multiplier-Bias\\\".\\n\\n------------------\", \"a_side_remark\": \"when comparing middle and right of Figure 1, it may be helpful to switch the \\\"bias\\\" after the \\\"+\\\" into the residual branch, i.e. after the \\\"multiplier\\\", as the correspondence is easier to see this way and these two computation graphs are mathematically equivalent.\\n\\n------------------\", \"references\": \"[1] Batch Normalization before or after ReLU?https://www.reddit.com/r/MachineLearning/comments/67gonq/d_batch_normalization_before_or_after_relu/\\n[2] Han, D., Kim, J., & Kim, J. (2017). Deep pyramidal residual networks. CVPR.\\n[3] Andrej Karpathy. https://twitter.com/karpathy/status/827644920143818753?lang=en\"}", "{\"comment\": \"Why were the biases and multipliers re-ordered, and one multiplier replaced with a bias (as in Figure 1)? The use of the architecture on the right of Figure 1 has still has not been justified over the (seemingly more natural) architecture in the middle of Figure 1.\", \"title\": \"Re-ordering of multipliers and bias\"}", "{\"title\": \"comparison with prior methods and theoretical work\", \"comment\": \"Hi, thanks for your interest and pointer to related work! We believe that both our method and the theoretic analysis contain substantial novelty.\\n\\nA comparison with the gamma=0 alternative:\\n\\nFor the batchnorm implementation, as the other comment pointed out, the suggestion of setting gamma=0 in the last batchnorm dates back at least to (Goyal et al., 2017). We agree that it is a great observation. However, setting gamma=0 for the last batchnorm is not sufficient for training without using a normalization method. As we explain in the paper, only setting the residuals to zero, the Step 1 of our method, will still result in explosion after a few steps. This is why our method requires Step 2 to lead to reliable convergence in all cases we tested.\\n\\nWe summarize some key differences in the following, and also provide a detailed account of why the alternative method of setting gamma=0 would not work. For further information, please also refer to our reply to AnonReviewer1.\\n\\nThe critical insight for our design is that, we would like to ensure the norm of the update to each residual branch function to be O(eta/L) per each step where eta is the maximal learning rate and L is the number of residual branches, hence ensuring the logits do not blow up after O(1/eta) steps. As we show in the updated version, a scalar ResNet model may help understand the argument.\\n\\nStep 2, combined with Step 1, ensures each SGD step updates the residual branch function by O(eta/L) so that the whole network is updated by O(eta). 
This is the most important component of our method and also distinguishes it from all previous work.\\n\\nFor example, suppose the affine layers in batchnorm is preserved while the normalization layers are removed, and suppose we set gamma=0 in the last affine layer of each residual branch. What will happen in the first SGD update? By chain rule and Kaiming initialization, one can show that the gamma(s) in the last affine layer of each residual branch will get an update of O(eta), whereas the other layers in the residual branch get no updates. It then follows that each residual branch is a function of scale O(eta) after the first SGD update. Furthermore, we can show that all the residual branches are highly correlated after one update, resulting in output logits of O(1 + eta*L) scale, which leads to gradient explosion if L is large and eta is not small, as shown in our analysis.\", \"a_comparison_with_related_theoretic_work\": \"First, we thank you for bringing (Yang & Schoenholz 2017) to our attention. We appreciate the depth and mathematical skills demonstrated in both works, and agree that our analysis does not apply to arbitrary activation functions. That said, we would like to emphasize that our analysis excels in three aspects when compared with related work: general, realistic and simple. We now explain below:\", \"generality\": \"\", \"we_only_make_two_assumptions\": \"(1) positive homogeneity and (2) weight distribution of the fully-connected layer. No other assumptions about the network structure is made (in particular, our analysis applies to (i) both the basic residual block and the bottleneck residual block; (ii) both the original version and the pre-activation version). No assumption about the distribution of other weights is made (in particular, our analysis applies to orthogonal initialization as well as data-dependent initialization).\\n\\nIn contrast, Yang & Schoenholz (2017) only analyzed what they called the \\\"reduced residual network\\\" and the \\\"full residual network\\\", both of which only contains one activation function per each residual branch, hence does not apply to the usual 2-layer block or the bottleneck structure. Their analysis also requires both (Axiom 3.1) symmetry of activation and gradients and (Axiom 3.2) gradient independence. Finally, their analysis does not include convolutional layers, which are a crucial element of practical networks.\", \"reality\": \"With our general and mild assumptions, our analysis directly applies to the models and algorithms people implement.\\n\\nIn contrast, the gradient independence assumption (Axiom 3.2) in (Yang & Schoenholz 2017) requires the forward and backward process to be fully decoupled, which is not the case for the networks that are used in practice.\", \"simplicity\": \"In addition to applying to real-world networks, our proof technique is simple and only involves basic probability and calculus. Our proof length is less than one page. In contrast, the proofs in (Yang & Schoenholz 2017) involve intricate algebraic manipulations and advanced math topics such as the mean field theory, and often span multiple pages.\\n\\nWe would like to note that by all means we sincerely respect the works of (Yang & Schoenholz 2017) and (Hanin & Rolnick 2018), and will discuss their contributions in the revised paper. 
On the other hand, we also believe that simple and general theories such as our analysis are good things to have and to build upon.\"}", "{\"title\": \"comparison with prior work\", \"comment\": \"Hi, thanks for your interest and pointer to related work! Goyal et al. (2017) made a great observation, however setting gamma=0 for the last batchnorm is not sufficient for training without using a normalization method. As we explain in the paper, only setting the residuals to zero, the Step 1 of our method, will still result in explosion after a few steps. This is why our method requires Step 2 to lead to reliable convergence in all cases we tested.\\n\\nWe summarize some key differences in the following, and also provide a detailed account of why the alternative method of setting gamma=0 would not work. For detailed justifications about Step 1 & 2, please refer to our \\\"general reply (2)\\\" to AnonReviewer1.\\n\\nThe critical insight for our design is that, we would like to ensure the norm of the update to each residual branch function to be O(eta/L) per each step where eta is the maximal learning rate and L is the number of residual branches, hence ensuring the logits do not blow up after O(1/eta) steps. As we show in the updated version, a scalar ResNet model may help understand the argument.\\n\\nStep 2, combined with Step 1, ensures each SGD step updates the residual branch function by O(eta/L) so that the whole network is updated by O(eta). This is the most important component of our method and also distinguishes it from all previous work.\\n\\nWhy simply setting gamma=0 does not work:\\n\\nSuppose the affine layers in batchnorm is preserved while the normalization layers are removed, and suppose we set gamma=0 in the last affine layer of each residual branch. What will happen in the first SGD update? By chain rule and Kaiming initialization, one can show that the gamma(s) in the last affine layer of each residual branch will get an update of O(eta), whereas the other layers in the residual branch get no updates. It then follows that each residual branch is a function of scale O(eta) after the first SGD update. Furthermore, we can show that all the residual branches are highly correlated after one update, resulting in output logits of O(1 + eta*L) scale, which leads to gradient explosion if L is large and eta is not small, as shown in our analysis.\"}", "{\"title\": \"Thanks; we totally agree\", \"comment\": \"Dear AnonReviewer3, thank you for your encouraging review. We totally agree with your comments.\", \"a_side_note_to_your_question\": \"our experiments show that with standard data augmentation, the regularization effect of batch normalization can bring about 0.5% improvement in test accuracy on CIFAR-10, but we hypothesize some advanced regularization methods (such as ShakeDrop or DropBlock) could also make up for this gap.\\n\\n- References:\\n[1] Yamada, Y., Iwamura, M., & Kise, K. (2018). ShakeDrop regularization. arXiv preprint arXiv:1802.02375.\\n[2] Ghiasi, G., Lin, T. Y., & Le, Q. V. (2018). DropBlock: A regularization method for convolutional networks. arXiv preprint arXiv:1810.12890.\"}", "{\"title\": \"Comparison with previous work; practical implications\", \"comment\": \"Dear AnonReviewer2, we appreciate your encouraging review and valuable suggestions. We hope to address your questions below:\\n\\n1. The reviewer hopes to know if \\\"previous contributions from the literature\\\" have similar concepts. 
\\n\\nWe listed related work we knew of by the time of paper submission. After submission, we did find more related work. Indeed, some previous works propose to initialize the residual branches in a way such that the network output variance is independent of depth, which is a necessary but not sufficient condition for training very deep residual networks, as we show in the updated version.\\n\\nHowever, none of the related work observes that the residual branches should be initialized in a way such that its update is O(eta/L) per SGD step, where eta is the maximal global learning rate and L is the total number of residual branches. This ensures the network has an update of O(eta) per SGD step, which we find is a sufficient condition for training to proceed as fast as batch normalization.\\n\\n2. The reviewer has not found a \\\"convincing argument against the use of batch normalization\\\".\\n\\nEven if a practitioner continues to use batch normalization, we argue that this work helps understand how BatchNorm improves training.\\n\\nAnd for several tasks, batch normalization is not applicable or at least no preferable. Our method holds promise in many of these different tasks. For example, batch normalization is not used in many natural language tasks, where the state-of-the-art models use layer normalization (Vaswani et al., 2017), whereas we show our method can match or supercede its performance. In image super-resolution, it is recently shown that training without batch normalization improves performance (Lim et al., 2017); our method could possibly help achieve further improvement. In image style transfer, instance normalization is currently the standard technique (Ulyanov et al., 2016; Zhu et al., 2017); our method could possibly help as well. In semantic segmentation task, although batch normalization is found useful, its batchsize requirement put a severe constraint on the model size and the parallelizability of training, resulting in heavy burden of cross-GPU communication (Peng et al., 2017); hence using ZeroInit in combination with other regularization may be preferable. In image classification problems, current evidences are still in favor of batch normalization; however, as our method removes the necessity of using batch normalization in training and exposes the severe overfitting problem, future exploration of regularization methods that supersede batch normalization is possible.\", \"references\": \"[1] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \\u0141. and Polosukhin, I., (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998-6008).\\n[2] Lim, B., Son, S., Kim, H., Nah, S., & Lee, K. M. (2017, July). Enhanced deep residual networks for single image super-resolution. In The IEEE conference on computer vision and pattern recognition (CVPR) workshops (Vol. 1, No. 2, p. 4).\\n[3] Dmitry Ulyanov, Andrea Vedaldi, Victor Lempitsky. (2016). Instance Normalization: The Missing Ingredient for Fast Stylization\\n[4] Zhu, J. Y., Park, T., Isola, P., & Efros, A. A. (2017). Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint.\\n[5] Peng, C., Xiao, T., Li, Z., Jiang, Y., Zhang, X., Jia, K., ... & Sun, J. (2017). Megdet: A large mini-batch object detector. arXiv preprint arXiv:1711.07240, 7.\"}", "{\"title\": \"Reply to more detailed comments\", \"comment\": \"Page 3:\\nEq. 2 is essentially restating the reasoning and conclusion before it in a mathematical way. 
It can be derived by calculating the variance of both the LHS and RHS of Eq. 1 and applying the independence assumption. The second equality can be shown by mathematical induction. We will clarify in the updated version.\", \"page_4\": [\"Thanks, we will add a figure to clarify each p.h. set example.\", \"Yes, the fact \\\"(1+1/L)^L =~e\\\" is exactly why we would like the update of each residual branch rather than Var[F_l(x_l)] to be O(eta/L). Thanks for asking, we will correct in the updated version.\", \"By \\\"error signal\\\" we mean the partial derivative of the loss function w.r.t. a layer. This term is used in e.g. (Schraudolph, 1998) but we now realize it is not clear. We will clarify its meaning.\", \"Thanks for asking -- this is central to understanding our method. Please refer to our new analysis in justifying Step 1 & 2 above.\", \"Please see the above general reply for justifications of step 3. Once again, we emphasize that our method is an initialization with minimal network components for achieving state-of-the-art performance, and contains no normalization operation.\"], \"page_5\": [\"\\\"\\\\sqrt(1/2) scaling\\\" is rescaling the activations by \\\\sqrt(1/2) after each block. It is proposed as a possible remedy for ResNet without batch normalization in (Balduzzi et al., 2017).\"], \"page_6\": [\"The dataset is CIFAR-10, as stated in the figure caption.\", \"While the difference of the end performance of the two initialization is not huge (7% relative improvement for the median of 5 runs), we note that there is substantial difference in the difficulty of training. Network with ZeroInit is trained with the same learning rate and converge as fast as network trained with batch normalization, while we fail to train a Xavier initialized ResNet-110 with 0.1x maximal learning rate. Personal communication with the authors of (Shang et al., 2017) confirms our observation, and reveals that the Xavier initialized network need more epochs to converge.\", \"Cutout and Mixup both contribute to the final performance in the CIFAR and SVHN experiments, as they likely supersede the regularization benefits of batch normalization. However, training with Xavier initialization cannot generalize as well, mainly because a substantially smaller learning rate has to be used to stabilize training, which in turn hurts generalization. We empirically validate this claim in the updated version.\", \"We answered these questions in the general reply. In short, which layer to zero does not matter, training without step 3 works (though a bit worse). Using step 3 alone will not work due to incorrect scaling of the updates. We will add these experiments in the appendix.\", \"References:\", \"[1] Schraudolph, N. N. (1998). Centering neural network gradient factors. In Neural Networks: Tricks of the Trade (pp. 207-226). Springer, Berlin, Heidelberg.\", \"[2] Balduzzi, D., Frean, M., Leary, L., Lewis, J. P., Ma, K. W. D., & McWilliams, B. (2017). The Shattered Gradients Problem: If resnets are the answer, then what is the question?. arXiv preprint arXiv:1702.08591.\"]}", "{\"title\": \"General reply (2): further justifications\", \"comment\": \"-- The reviewer thinks that among the 3 components of ZeroInit, only Step 2 is justified in a principled manner. Step 1 and Step 3 are not justified by an argument or experiments.\\n\\nWe will clarify the justification for each step in the paper. We hope you will find the following explanation helpful in understanding the effects and importance of each component. 
These improvements and new ablation experiments will appear in the revised paper.\", \"summary\": \"Step 2, combined with Step 1, ensures each SGD step updates the residual branch function by O(eta/L) so that the whole network is updated by O(eta). This is the most important component of our method and also distinguishes it from all previous work. Step 3 is indeed not essential for training, but the bias parameters (empirically) create better loss landscape, and the multipliers help us avoid tuning the global learning rate schedule.\\n\\nWe now provide further in-depth justifications for each of the above arguments.\\n\\nStep 1 & 2:\\n\\nOn one hand, as explained in the paper, initializing the residual branches to 0 prevents them from exploding and minimizes the lower bound of the gradient in Theorem 2. On the other hand, 0 initialization helps Step 2 limit the norm of the update of the residual branches to O(eta/L), as we now explain:\\n\\nConsider a residual branch with m layers, our goal is to derive the correct scaling for these layers, so that the residual branch is updated by O(eta/L) per gradient step. For simplicity, we assume the network is a composition of scalar functions (i.e. the input, output and hidden layers are all scalars), and there is no activation function. The residual branch can therefore be written as:\\n\\nF(x) = a_1 * ... * a_m * x\\n\\nwhere x is the input to this residual branch, and a_1, ..., a_m are nonnegative scalars (thinking of them as the rescaling of default initialization). Furthermore, we denote the gradient of the objective function w.r.t. F(x) as g. It is then easy to show that the gradient w.r.t. a_i is g * F(x) / a_i. Now if we perform a gradient descent update with step size eta, and calculate the update to F(x) using first-order approximation w.r.t. eta, we will get:\\n\\n\\\\Delta F(x) =~ - eta * g * (F(x))^2 * ((1/a_1)^2 + ... + (1/a_m)^2)\\n\\nNote that we would like the scale of \\\\Delta F(x) to be O(eta/L). Assuming g is O(1), it then follows that the scale of M = (1/a_1)^2 + ... + (1/a_m)^2 should be O(1/(L * (F(x))^2)). Let A = min_i {a_i} and we have (1/A)^2 <= M <= m * (1/A)^2. Put together, we arrive at A = O(sqrt{L} * F(x)). We hence finally get the desired design constraints:\\n\\n(I.) A = min_i {a_i},\\n(II.) F(x) / A = O(1/sqrt{L})\\n\\nIn sum, with (I.) and (II.) satisfied and assuming g is O(1), we can ensure the update of F(x) is O(eta/L), hence the update of the overall network is O(eta).\\n\\nA simple and natural design to satisfy these constraints is our Step 1 and Step 2. Furthermore, setting A to 0 (Step 1) has the additional benefit that each residual branch doesn't need to \\\"unlearn\\\" its random initial state, so that training proceeds faster in the first few epochs.\", \"step_3\": \"Using biases in the linear and convolution layers is a common practice in neural network history. In normalization methods, bias and scale parameters are typically used to restore the representation power after normalization. For example, in batch normalization gamma and beta parameters are used to affine-transform the normalized activations per each channel.\\n\\nStep 3 is the simplest design which provides similar representation power to affine layers. Our design is a substantial simplification of the common practice, in that we only introduce O(K) parameters beyond conv and linear weights (note that our conv and linear layers do not have biases), whereas the common practice includes O(KC) (e.g. 
batch normalization and weight normalization) or O(KCWH) (e.g. layer normalization) additional parameters, where K is the number of layers, C is the max number of channels per layer and W, H are the spatial dimension of the largest feature maps.\\n\\nFinally, it is important to note that the bias and multiplier parameters are not essential for training to proceed -- without them the training still works, even with 10,000 layers, albeit with suboptimal performance.\"}", "{\"title\": \"General reply (1): No normalization anywhere\", \"comment\": \"Dear AnonReviewer1, we thank you for the very detailed review, and find it valuable for improving the writing of our paper in an updated version. We are happy to hear that you find our observations interesting, and our empirical results strong.\", \"regarding_your_concerns\": \"------------------------------\\n\\n -- The reviewer seems to think our method is \\\"a combination of initialization and normalization\\\".\\n\\nThe proposed method does not use any normalization and so we believe there is a misunderstanding, either about the method, or about what is commonly regarded as normalization.\\n\\nWe do not divide any neural network component by its statistics, neither do we subtract the mean from any activations. In fact, with our method there is **no computation of statistics (mean, variance or norm) at initialization or during any phase of training**.\\n\\nIn a sharp contrast, all normalization methods for training neural networks explicitly normalize (i.e. standardize) some component (activations or weights) through dividing activations or weights by some real number computed from its statistics and/or subtracting some real number activation statistics (typically the mean) from the activations.\\n\\nTo elaborate, we provide a brief historical background on normalization techniques. The first use of such ideas and terminology in modeling visual system dates back at least to Heeger (1992) in neuroscience and to Pinto et al. (2008) and Lyu & Simoncelli (2008) in computer vision, where each neuron output is divided by the sum (or norm) of all of the outputs, a module called divisive normalization. Recent popular normalization methods, such as local response normalization (Krizhevsky et al., 2012), batch normalization (Ioffe & Szegedy, 2015) and layer normalization (Ba et al., 2016) mostly follow this tradition of dividing the neuron activations by their certain summary statistics, often also with the activation mean subtracted. An exception is weight normalization (Salimans & Kingma, 2016), which instead divides the weight parameters by their statistics, specifically the weight norm; weight normalization also adopts the idea of activation normalization for weight initialization. The recently proposed actnorm (Kingma & Dhariwal, 2018) removes the normalization of weight parameters, but still use activation normalization to initialize the affine transformation layers.\\n\\nTherefore, our method is substantially different from all aforementioned techniques, and should not be regarded as being close to a normalization method.\\n\\n- References:\\n[1] Heeger, D. J. (1992). Normalization of cell responses in cat striate cortex. Visual neuroscience, 9(2), 181-197.\\n[2] Pinto, N., Cox, D. D., & DiCarlo, J. J. (2008). Why is real-world visual object recognition hard?. PLoS computational biology, 4(1), e27.\\n[3] Lyu, S., & Simoncelli, E. P. (2008). Nonlinear image representation using divisive normalization. 
In IEEE Conference on Computer Vision and Pattern Recognition, 2008.\\n[4] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).\\n[5] Ioffe, S., & Szegedy, C. (2015). Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167.\\n[6] Ba, J. L., Kiros, J. R., & Hinton, G. E. (2016). Layer normalization. arXiv preprint arXiv:1607.06450.\\n[7] Salimans, T., & Kingma, D. P. (2016). Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems (pp. 901-909).\\n[8] Kingma, D. P., & Dhariwal, P. (2018). Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039.\"}", "{\"comment\": \"Hi,\\n\\nThis is an interesting paper. How would you compare your method to the method in [1] setting gamma=0 for every batchnorm going back to the main branch? On the surface the techniques look very similar and the authors in [1] also noted that such initialization improves optimization at the beginning of training.\\n\\n[1] Goyal et al. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour.\", \"title\": \"Prior work\"}", "{\"comment\": \"Dear authors.\\n\\nThanks for an interesting paper. Incidentally, in the current resnet implementation (at least in TPU) in Tensorflow, the last batchnorm going back into the main branch as \\\\gamma initialized to 0, which I believe achieves a similar effect to what you are doing here, at least from an initialization perspective. This has been around since February of this year.\", \"https\": \"//github.com/tensorflow/tpu/blob/master/models/official/resnet/resnet_model.py#L219\\n\\nIs the resnet in your experiments initialized like so? If not, how does such initialization compare to your initialization (without BN)?\\n\\nIn addition, please correct me if I'm mistaken, but the theoretical analysis of variances in this paper seems to have been done (quite thoroughly) in Yang & Schoenholz 2017 and Hanin & Rolnick 2018, where the theory in the former works for any nonlinearity and predicts the empirical results (for tanh and relu) highly accurately, while the latter mathematically characterizes the activation dynamics. The former paper is missing in the citation, while the latter only gets a passing mention. Could you comment on the novelty of the derivation in the current work and why it's not enough to use results from these two papers?\\n\\nYang & Schoenholz 2017 https://arxiv.org/abs/1712.08969\\nHanin & Rolnick 2018 http://arxiv.org/abs/1803.01719\\n\\nThanks, and looking forward to your reply.\", \"title\": \"Tensorflow implementation of resnet already has zero initialization by default; Comment on prior works\"}", "{\"title\": \"Interesting, but unsure about the impact\", \"review\": \"This paper proposes an exploration of the effect of normalization and initialization in residual networks. In particular, the Authors propose a novel way to initialize residual networks, which is motivated by the need to avoid exploding/vanishing gradients. The paper proposes some theoretical analysis of the benefits of the proposed initialization.\\n\\nI find the paper well written and the idea well executed overall. The proposed analysis is clear and motivates well the proposed initialization. 
Overall, I think this adds something to the literature on residual networks, helping the reader to get a better understanding of the effect of normalization and initialization. I have to admit I am not an expert on residual networks, so it is possible that I have overlooked at previous contributions from the literature that illustrate some of these concepts already. Having said that, the proposal seems novel enough to me. \\n\\nOverall, I think that the experiments have a satisfactory degree of depth. The only question mark is on the performance of the proposed method, which is comparable to batch normalization. If I understand correctly, this is something remarkable given that it is achieved without the common practice of introducing normalizations. However, I have not found a convincing argument against the use of batch normalization in favor of ZeroInit. I believe this is something to elaborate on in the revised version of this paper, as it could increase the impact of this work and attract a wider readership.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"The method presented is partially based on interesting observations, and it obtains good empirical results (tough not better than competition in general). However, the presentation is somewhat misleading: the method includes normalization elements not discussed, and some of its components are not justified and not tested empirically in isolation.\", \"review\": \"Summary:\\nA method is presented for initialization and normalization of deep residual networks. The method is based on interesting observations regarding forward and backward explosion in such networks with the standard Xavier or (He, 2015) initializations. Experiments with the new method show that it is able to learn with very deep networks, and that its performance is on a par with the best results obtained by other networks with more explicit normalization.\", \"advantages\": [\"The paper includes interesting observations, resulting in two theorems, which show the sensitivity of traditional initializations in residual networks\", \"The method presented seems to work comparable to other state of the art initialization + normalization methods, providing overall strong empirical results.\"], \"disadvantages\": [\"The authors claim to suggest a method without normalization, but the claim is misleading: the network has additive and multiplicative normalization nodes, and their function and placement is at least as \\u2018mysterious\\u2019 as the role of normalization in methods like batch and layer normalization.\"], \"othis_significantly_limits_the_novelty_of_the_method\": [\"it is not \\u2018an intialization\\u2019 method, but a combination of initialization and normalization, which differ from previous ones in some details.\", \"The method includes 3 components, of which only one is justified in a principled manner. The other components are not justified neither by an argument, nor by experiments. Without such experiments, it is not clear what actually works in this method, and what is not important.\", \"The argument for the \\u2018justified\\u2019 component is not entirely clear to me. The main gist is fine, but important details are not explained so I could not get the entire argument step-by-step. This may be a clarity problem, or maybe indicate deeper problem of arbitrary decisions made without justification \\u2013 I am not entirely sure. 
Such lack of clear argumentation occurs in several places\", \"Experiments isolating the contribution of the method with respect to traditional initializations are missing (for example: experiments on Cifar10 and SVHN showing the result of traditional initializations with all the bells and whistles (cutout, mixup) as the zeroInit gets.\"], \"more_detailed_comments\": \"\", \"page_3\": [\"While I could follow the general argument before eq. 2, leading to the conclusion that the initial variance in a resnet explodes exponentially, I could not understand eq. 2. What is its justification and how is it related to the discussion before it? I think it requires some argumentation.\"], \"page_4\": \"-\\tI did not understand example 2) for a p.h. set. I think an argument, reminder of the details of resnet, or a figure are required.\\n-\\tI could not follow the details of the argument leading to the zeroInit method:\\no\\tHow is the second design principle \\u201cVar[F_l(x_l)] = O( 1/L) justified?\\nAs far as I can see, having Var[F_l(x_l)] = 1/L will lead to output variance of (1+1/L)^L =~e, which is indeed O(1). Is this the argument? Is yes, why wasn\\u2019t it stated? Also: why not smaller than O(1/L)?\", \"ofollowing_this_design_principle_several_unclear_sentences_are_stated\": \"\\uf0a7\\tWe strive to make Var[F_l(x_l)] = 1/L, yet we set the last convolutional layer in the branch to 0 weights. Does not it set Var[F_l(x_l)] = 0, in contradiction to the 1/L requirement?\\n\\uf0a7\\t \\u201cAssuming the error signal passing to the branch is O(1),\\u201d \\u2013 what does the term \\u201cerror signal\\u201d refer to? How is it defined? Do you refer to the branch\\u2019s input?\\n\\uf0a7\\tI understand why the input to the m-th layer in the branch is O(\\\\Lambda^m-1) if the branch input is O(1) but why is it claimed that \\u201cthe overall scaling of the residual branch after update is O(\\\\lambda^(2m-2))\\u201d? what is \\u2018the overall scaling after update\\u2019 (definition) and why is it the square of forward scaling?\\n-\\tThe zero Init procedure step 3 is not justified by any argument in the proceeding discussion. Is there any reason for this policy? Or was it found by trial and error and is currently unjustified theoretically (justified empirically instead). This issue should be clearly elaborated in the text. Note that the addition of trainable additive and multiplicative elements is inserting the normalization back, while it was claimed to be eliminated. If I understand correctly, the \\u2018zeroInit\\u2019 method is hence not based on initialization (or at least: not only on initialization), but on another form of normalization, which is not more justified than its competitors (in fact it is even more mysterious: what should we need an additive bias before every element in the network?)\", \"page_5\": [\"What is \\\\sqrt(1/2) scaling? It should be defined or given a reference.\"], \"page_6\": [\"It is not stated on what data set figure 2 was generated.\", \"In table 2, for Cifar-10 the comparison between Xavier init and zeroInit shows only a small advantage for the latter. For SVHN such an experiment is completely missing, and should be added.\", \"o\\tIt raises the suspect the the good results obtained with zeroInit in this table are only due to the CutOut and mixup used, that is: maybe such results could be obtained with CutOut+Mixup without zero init, using plain Xavier init? 
Experiments clarifying this point are also missing.\"], \"additional_missing_experiments\": [\"It seems that ZeroInit includes 3 ingredients (according to the box in page 4), among which only one (number 2) is roughly justified from the discussion. Step 1) of zeroing the last layer in each branch is not justified \\u2013 why are we zeroing the last layer and not the first, for example? Step 3 is not even discussed in the text \\u2013 it appears without any argumentation. For such steps, empirical evidence should be brought, and experiments doing this are missing. Specifically, experiments of interest are:\"], \"ousing_zero_init_without_its_step_3\": \"does it work? The theory says it should.\\no\\tUsing only step 3 without steps 1,2. Maybe only the normalization is doing the magic?\\nThe paper is longer than 8 pages.\\n\\nI have read the rebuttal.\", \"regarding_normalization\": \"I think that there are at least two reasonable meanings to the word 'normalization': in the wider sense it just means a mechanism for subtracting a global constant (additive normalization) and dividing by a global constant (multiplicative normalization). In this sense the constant parameters can be learnt in any way. In the narrow sense the constants have to be statistics of the data. I agree with the authors that their method is not normalization in sense 2, only in sense 1. Note that keeping the normalization in sense 1 is not trivial (why do we need these normalization operations? at least for the multiplicative ones, the network has the same expressive power without them). I think the meaning of normalization should be clearly explained in the claim for 'no normalization'.\", \"regarding_additional_mathematical_and_empirical_justifications_required\": \"I think such justifications are missing in the current paper version and are not minor or easy to add. I believe the work should be re-judged after re-submission of a version addressing the problems.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting results. Normalization is not necessary to train deep resnets.\", \"review\": \"This paper shows that with a clever initialization method ResNets can be trained without using batch-norm (and other normalization techniques). The network can still reach state-of-the-art performance.\\n\\n\\nThe authors propose a new initialization method called \\\"ZeroInit\\\" and use it to train very deep ResNets (up to 10000 layers). They also show that the test performance of their method matches the performance of state-of-the-art results on many tasks with the help of strong data augmentation. This paper also indicates that the role of normalization in training deep resnets might not be as important as people thought. In sum, this is a very interesting paper that has novel contributions to the practical side of neural networks and new insights on the theoretical side.\", \"pros\": \"1. The analysis is not complicated and the algorithm for ZeroInit is not complicated. \\n2. Many people believe normalization (batch-norm, layer-norm, etc.) not only improves the trainability of deep NNs but also improves their generalization. This paper provides empirical support that NNs can still generalize well without using normalization. It might be the case that the benefits from the data augmentation (i.e., Mixup + Cutout) strictly contain those from normalization. 
Thus it is interesting to see if the network can still generalize well (achieving >=95% test accuracy on Cifar10) without using strong data-augmentation like mixup or cutout. \\n3. Theoretical analysis of BatchNorm (and other normalization methods) is quite challenging and often very technical. The empirical results of this paper indicate that such analysis, although very interesting, might not be necessary for the theoretical understanding of ResNets.\", \"cons\": \"1. The analysis works for positively homogeneous activation functions, e.g. ReLU, but not for tanh or Swish. \\n2. The method works for Residual architectures, but may not be applied to Non-Residual networks (e.g. VGG, Inception).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
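The reviews above repeatedly debate the claim that giving each residual branch variance O(1/L) keeps the forward variance of an L-block ResNet bounded, whereas O(1) branches make it explode. The snippet below is only a minimal numeric sketch of that argument, not code from the paper; the per-branch variance ratios (0.5 and 1/L) and the depth are assumptions chosen for illustration.

```python
# Sketch of the variance-growth argument: if block l multiplies the running
# variance by (1 + c_l), then c_l = O(1) gives exponential growth in depth,
# while c_l = 1/L gives (1 + 1/L)^L -> e, i.e. O(1) output variance.
L = 50  # illustrative depth

var_unscaled = 1.0  # branches contribute O(1) relative variance (0.5 assumed here)
var_scaled = 1.0    # branches contribute 1/L relative variance
for _ in range(L):
    var_unscaled *= 1.0 + 0.5
    var_scaled *= 1.0 + 1.0 / L

print(var_unscaled)  # ~6.4e8 at L=50: forward explosion
print(var_scaled)    # ~2.69: bounded, consistent with the O(1) claim
```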
rJliMh09F7
Diversity-Sensitive Conditional Generative Adversarial Networks
[ "Dingdong Yang", "Seunghoon Hong", "Yunseok Jang", "Tianchen Zhao", "Honglak Lee" ]
We propose a simple yet highly effective method that addresses the mode-collapse problem in the Conditional Generative Adversarial Network (cGAN). Although conditional distributions are multi-modal (i.e., having many modes) in practice, most cGAN approaches tend to learn an overly simplified distribution where an input is always mapped to a single output regardless of variations in latent code. To address such issue, we propose to explicitly regularize the generator to produce diverse outputs depending on latent codes. The proposed regularization is simple, general, and can be easily integrated into most conditional GAN objectives. Additionally, explicit regularization on generator allows our method to control a balance between visual quality and diversity. We demonstrate the effectiveness of our method on three conditional generation tasks: image-to-image translation, image inpainting, and future video prediction. We show that simple addition of our regularization to existing models leads to surprisingly diverse generations, substantially outperforming the previous approaches for multi-modal conditional generation specifically designed in each individual task.
[ "Conditional Generative Adversarial Network", "mode-collapse", "multi-modal generation", "image-to-image translation", "image in-painting", "video prediction" ]
https://openreview.net/pdf?id=rJliMh09F7
https://openreview.net/forum?id=rJliMh09F7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "ryxSLIWZgE", "BJgHwssK0m", "HkexNsjFAQ", "BylPIFoFCX", "S1lcPdiF07", "HJgRk7fChQ", "HylZvVLo3m", "Hyg2WIvq27", "rkxiYmDq2m", "BkgYd5Rd3Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "comment" ], "note_created": [ 1544783436783, 1543252828868, 1543252775738, 1543252303286, 1543252066213, 1541444326405, 1541264473019, 1541203459777, 1541202818704, 1541102193087 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1296/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1296/Authors" ], [ "ICLR.cc/2019/Conference/Paper1296/Authors" ], [ "ICLR.cc/2019/Conference/Paper1296/Authors" ], [ "ICLR.cc/2019/Conference/Paper1296/Authors" ], [ "ICLR.cc/2019/Conference/Paper1296/Authors" ], [ "ICLR.cc/2019/Conference/Paper1296/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1296/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1296/AnonReviewer2" ], [ "~Augustus_Odena1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a regularization term on the generator's gradient that increases sensitivity of the generator to the input noise variable in conditional and unconditional Generative Adversarial networks, and results in multimodal predictions. All reviewers agree that this is a simple and useful addition to current GANs. Experiments that demonstrate the trade off between diversity and generation quality would be important to include, as well as the experiment on using the proposed method on unconditional GANs, which was conducted during the discussion period.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"a simple regularization for preventing mode collapse\"}", "{\"title\": \"Response to Reviewer 1 (part 2)\", \"comment\": \"(4). \\u201cThe diversity observed seems to mainly be attributable to colour differences rather than more elaborate differences\\u201d\\n\\nWe would like to clarify that the diversity encouraged by our method is not limited to the color differences. In our experiments, we demonstrated more elaborate sample differences on various datasets and tasks as follows:\\n\\n- In map->photo dataset (Figure 2, the last row), our method generates various landmark textures especially on park areas, such as trees, grass, and playgrounds.\\n- In cityscape dataset (Figure 3), our method generates different object textures (buildings, cars, road), lightings and shadows, etc. \\n- In face dataset (Figure 5), our method generates various facial attributes such as gender, age, facial expression, makeups, etc.\\n- In video datasets (Figure 6), our method generates various object dynamics such as different motion categories and speed. \\n\\nPlease note that the diversity in the face and video datasets is very subtle in terms of color differences. For instance, we can create different facial expressions or motions by modifying a few pixels around facial or body landmarks. Experiment results show that our method learns semantically more meaningful factors of variations than just color differences. We also remark that our method tends to capture semantically more meaningful diversities than other approaches (i.e. BicycleGAN, SAVP), as shown in Figure B, C, E, and F. 
\\n\\nFinally, we demonstrate in the paper that we can incorporate an additional encoder into our regularization, which allows us to capture more meaningful sample differences (Equation 6). This can be very useful when the semantic distance of the samples is not well captured in the generator output space (e.g. face, sentence).\\n\\n\\n(5). The proposed method is marginally better than other methods.\\n\\nWe believe that our method achieved meaningful improvement over existing state-of-the-art multimodal cGAN methods. Although its performance is competitive with BicycleGAN on some datasets (Table 1), we showed that our method can achieve substantially better performance over a wide range of latent dimensions (Table 2), and more challenging datasets (Table 3). Compared to SAVP, our method achieved substantially diverse and realistic results especially on challenging KTH datasets, where SAVP ends up generating deterministic outputs (Table 5).\\n \\nMore importantly, we believe that the main contribution of this paper is proposing a general and principled approach to promote diversity in conditional GANs. Our method achieved consistent improvement over state-of-the-arts on various tasks and frameworks, although such competitors are designed specifically for each task and require non-trivial modifications of baseline cGAN models. As pointed out by other reviewers, alleviating the need to investigate significant changes to model families by focusing instead on a novel optimization objective is an important contribution towards understanding how conditional generative models like cGANs behave.\\n\\n\\n(6). \\u201cIn section 4, an increase of the gradient norm of the generator is implied: does this have any effect on the robustness/sensitivity of the model to adversarial attacks?\\u201d\\n\\nWe are sorry, but we could not understand your question clearly. Below we provide a response based on our best guess on your question, but please let us know if it does not address your concern. We are happy to elaborate and discuss further based upon your comments.\\n\\nWe assume that your concern is on the sensitivity of the generator against the adversarial perturbation on the input condition x, as our regularization increases the norm of the generator gradient. Denoting the very small perturbation as p, your concern can be rephrased as \\u201cWill the proposed regularization increase ||G(x+p,z) - G(x,z)||?\\u201d. Our answer is \\u201cnot necessarily yes\\u201d, as our regularization increases the sensitivity of generator over latent code (||G(x,z1) - G(x,z2)||), not an input condition (||G(x+p,z) - G(x,z)||). Also, in another perspective, the cGAN is trained driven with the conditional likelihood (on top of adversarial loss) as a major objective (e.g., \\u201cL2 loss\\u201d between the generator output and ground-truth output), so a reasonably trained generator model should capture the implicit relationship between input/output pairs in the data and thus would exhibit proper degree of sensitivity to sufficiently different x values (while generating smooth/similar outputs when given very similar x values). Note that we do not need to worry about an adversarial attack on the latent code, as it is always sampled by the model and hidden to users. \\n\\nTo be more concrete, we will provide some experimental results if the reviewer can elaborate more on the attack method (e.g. reference). 
To our best knowledge, we are not aware of any existing works on the adversarial attack against cGAN generator.\"}", "{\"title\": \"Response to Reviewer 1 (part 1)\", \"comment\": \"We appreciate your constructive and detailed comments. Due to the character limit of openreview comment system, we provide our responses in two parts.\\n\\n(1). \\u201cWhy was the maximum theta (the bound for numerical stability) incorporated in Equation 2? What happens if this is omitted in practice? How is this determined?\\u201d\\n\\nIn principle, our regularization term (||G(x,z1)-G(x,z2)||/||z1-z2||) is an unbounded operator because 1) its numerator can grow arbitrarily large with the unbounded generator and 2) its denominator can approach zero with the almost identical latent codes. The \\\\tau in Equation 2 provides a bound to our regularization term thus ensures its numerical stability. However, we found that our regularization term is practically bounded in most conditional GAN implementations because 1) the generator output is usually bounded by non-linear output function (e.g. [0,1] for sigmoid, [-1,1] for the hyperbolic tangent) and 2) it is very unlikely to sample two near-identical latent codes from standard normal distribution. Specifically, we can probably bound the probability of sampling two random codes z and z' from the N-dimensional multivariate standard normal distribution within a distance of \\\\delta by p(|z-z'|<\\\\delta) \\\\leq (\\\\delta/\\\\sqrt(2\\\\pi))^N. For sufficiently small \\\\delta, we can see that such probability decreases exponentially with the size of the latent code. For example, when \\\\delta=0.001 and N=10, this bound is about 10^-30, which implies that the probability that such an event happens is practically zero. (We will include the proof in the next revision. We are happy to provide more details upon request.). For these reasons, we omitted the \\\\tau in Equation 2 in practice as we described in the following sentence of Equation 3. The only hyper-parameter in our formulation (and in all our experiments) is thus \\\\lambda in Equation 3, which controls the importance of the regularization.\\n\\n\\n(2). \\u201cIn section 5, how is the \\u2018appropriate CGAN\\u2019 determined\\u201d?\\n\\nFor each conditional generation task in the experiment, we chose strong cGAN models from the literature that produces realistic but deterministic outputs, as we described in the 4th line of Section 5. We provided details of cGAN baseline in each task in the second paragraph of each corresponding subsection as follows:\\n - Image to image translation (section 5.1): Zhu et al., 2016 \\n - Image inpainting (section 5.2): Iizuka et al., 2017 \\n - Video prediction (section 5.3): Lee et al., 2018\\nThese models are considered as among state-of-the-art methods (if not \\u201cthe state-of-the-art\\u201d for each task domain). As we described in the paper, we employed the exact same network architectures and hyperparameters provided by the authors for these models, as our formulation requires modification on only objective function. To be self-contained, please note that we also provided detailed settings of these baseline cGANs in the appendix (Section D.1.1, D.2, and D.3.2).\\n\\n\\n(3). \\u201cThe visual quality of the samples illustrated in the paper is inferior to that observed in the state-of-the-art \\u2026\\u201d\\n\\nBased on our experiment results, we did not observe noticeable quality degradation of our method over its cGAN counterparts. 
As we discussed in the paper, we conducted a human evaluation study to compare visual realism among cGAN baseline, BicycleGAN, and our method. However, we found that there is no clear winning method over others, which implies that the visual quality of samples is in a similar level for these methods. In terms of FID score, on the other hand, our method consistently achieved substantial improvement over cGAN baseline and BicycleGAN (Table 1 and 3), which shows that the distribution of the generated samples by our method (with improved diversity) matches much better to the true distribution than others. \\n\\nPlease note that our main contribution is improving diversity in existing cGANs, which is orthogonal (and complementary) to achieving high-level visual realism via sophisticated architectural designs or training strategies, e.g., BicycleGAN, pix2pixHD, SAVP, ProgressiveGAN, BigGAN (unpublished concurrent ICLR submission) to name a few. Practically, We showed that, with a few lines of additional code, the diversity of generated samples dramatically improves upon all strong-performing cGAN models we tried. Although an exhaustive demonstration of our regularization to the latest and the most resource-heavy cGAN models (e.g., ProgressiveGAN or BigGAN) was infeasible due to time/resource limit for our submission, we believe our experiments provide compelling evidence of wide applicability of our regularizer.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"We appreciate your insightful and constructive comments. We have updated the paper to include the experiments on unconditional GAN (section C.1). Below we provide our response to your comment.\\n\\n(1). Is the proposed method applicable to unconditional GANs?\\n\\nWe believe that our regularization can be applied to unconditional GAN to relax the mode-collapse problem. To demonstrate this idea, we conducted an experiment using the synthetic data and unconditional GAN model employed in Srivastava et al., 2017. Please see Section C.1 of our revised paper for comprehensive descriptions on experiment settings and results. Below we provide a summary of this experiment. \\n\\nIn this experiment, we used a mixture of eight 2D Gaussian distributions arranged in a ring as a synthetic dataset. We observe that vanilla GAN experiences a severe mode collapse, putting a significant probability mass around a single mode. On the other hand, applying our regularization effectively resolves the mode-collapse problem, enabling the generator to capture all eight modes. Interestingly, our method achieved even higher performance over Srivastava et al., 2017, which also addresses the mode collapse in GAN but for unconditional generation task. It shows that the proposed regularization is also effective in resolving mode collapse problem in unconditional GAN setting.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"We appreciate your insightful and supportive comments. We have updated the paper to address your concern with converged discriminator (section C.2). Below we provide our responses to your comments.\\n\\n(1) \\u201cI would expect that upper bounding the generator gradient makes sense if a smooth interpolation in latent space is desired\\u201d\\n\\nOur regularization increases the lower-bound of the generator gradient norm to ensure the sensitivity of the generator with respect to the latent code z. 
Without such bound, we found that the norm of the generator gradient approaches zero as training progresses, which makes the conditional generator ignore z. \\n\\nWe can still ensure the smoothness of latent space by bounding our regularization term. This can be achieved implicitly by balancing our regularization with the adversarial loss using the hyperparameter \\\\lambda (Equation 2), or explicitly by introducing an upper-bound to our regularization (\\\\tau in Equation 2). We found that the first trick alone works very well in practice to learn a smooth latent manifold. (Empirically we found that \\u201cno upper bounding\\u201d (i.e., \\\\tau=\\\\infty) worked just as well; please see our response (1) to Reviewer 1 for more detailed information.) Please see sample interpolation results in Figure D, E, G in appendix and videos on the anonymous website (https://sites.google.com/view/iclr19-dsgan/), where we can observe smooth and continuous transition between samples. \\n\\n\\n(2) \\u201cwill it work if the discriminator is allowed to converge before updating the generator?\\u201d\\n\\nThank you for your insightful comment. We empirically validate the effectiveness of our regularization on the vanishing gradient problem and reported the results in Section C.2. As suggested by the reviewer, we simulate the vanishing gradient problem by training the cGAN baseline until it converges, and retraining the generator from scratch with our regularization while initializing the discriminator with the pre-trained one. Empirically we observed that the pre-trained discriminator can distinguish the real data and generated samples from the randomly initialized generator almost perfectly, and the generator experiences a severe vanishing gradient problem at the beginning of the training. However, even in such cases, we found that the diversity-sensitive regularization helped overcome this issue throughout the training.\\n\\nIn our experiment on the label->image dataset, we found that the generator with our regularizer converges to similar FID/LPIPS scores (FID: 52.31; LPIPS: 0.16) to the ones reported in the paper (FID: 57.20, LPIPS: 0.18). We observed that our regularization term encourages the generator to efficiently explore the output space in the early training stage when the discriminator gradients are vanishing, which helps the generator to capture useful gradient signals from the discriminator in the later course of training. This trend can be observed more clearly in our experiments on the synthetic dataset (Section C.1), where our diversity-sensitive regularization spreads the generator landscape and captures meaningful modes. Please find Section C.2 in the revised paper for more detailed experiment settings and discussions.\"}", "{\"title\": \"Thank you for your comment!\", \"comment\": \"Hi Augustus.\\n\\nThank you for your comment. We will definitely include your paper in the revision of our paper as the two methods are related. As you mentioned, Jacobian clamping and our regularizer optimize similar but different objective functions (i.e. the former clamps the generator Jacobian within a certain range, while the latter increases it with some rough upper-bound), which leads to different impacts on the generator in practice. In our initial attempts, we tried Jacobian clamping on the Facade dataset with grid hyper-parameter search but could not achieve a similar FID / LPIPS score to our method. We will add more thorough discussion and investigation results in the revised version of our paper. 
Thank you.\"}", "{\"title\": \"An interesting and simple idea.\", \"review\": \"The paper proposes a regularization term for the conditional GAN objective in order to promote diverse multimodal generation and prevent mode collapse. The regularization maximizes a lower bound on the average gradient norm of the generator network as a function of the noise variable.\\n\\nThe regularization is a simple addition to existing conditional GAN models and is certainly simpler than the architectural modifications and optimization tweaks proposed in recent work (BicycleGAN, etc). It is useful to have such a simple solution for preventing mode collapse as well as promoting diversity in generation.\\n\\nIt is shown to promote the generator landscape to be more spread out by lower bounding the expected average gradient norm under the noise distribution. This is a point to be noted when comparing with other work which focuses on the vanishing gradients through the discriminator and tries to tweak the discriminator gradients. It is a surprising result that such a penalty on the lower bound can prevent mode collapse while also promoting diversity, since I would expect that upper bounding the generator gradient (i.e. Lipschitz continuity, which Wasserstein GANs and related work rely on but for their discriminator instead) makes sense if a smooth interpolation in latent space is desired. \\n\\nIt is also not evident how the vanishing discriminator gradient problem is solved using this regularization -- will it work if the discriminator is allowed to converge before updating the generator?\\n\\nThis simple regularization presented in this paper and its connection to preventing mode collapse feels like an important step towards understanding how conditional generative models like cGANs behave. Alleviating the need to investigate significant changes to model families by focusing instead on a novel optimization objective is an important contribution.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Well written paper with a simple idea for preventing mode-collapse in GANs but with insufficient experimental validation\", \"review\": \"The paper proposes a simple way of addressing the issue of mode-collapse by adding a regularisation to force the outputs to be diverse. Specifically, a loss is added that maximises the l2 loss between the images generated, normalised by the distance between the corresponding latent codes. This method is also used to control the balance between visual quality and diversity.\\n\\nThe paper is overall well written, introducing and referencing existing concepts well, and respects the 8-page recommendation.\\n\\nWhy was the maximum theta (the bound for numerical stability) incorporated in equation 2? What happens if this is omitted in practice? How is this determined?\\n\\nIn section 4, an increase of the gradient norm of the generator is implied: does this have any effect on the robustness/sensitivity of the model to adversarial attacks?\\n\\nIn section 5, how is the \\u201cappropriate CGAN\\u201d determined?\\n\\nMy main issue is with the experimental setting, which is somewhat lacking. The visual quality of the samples illustrated in the paper is inferior to that observed in the state-of-the-art, begging the question of whether this is a tradeoff necessary to obtain better diversity or if it is a consequence of the additional regularisation. 
The diversity observed seems to mainly be attributable to colour differences rather than more elaborate differences. Even quantitatively, the proposed method seems only marginally better than other methods.\\n\\nUpdate post rebuttal\\n-----------------------------\\nThe experimental setting is a little lacking. Qualitatively and quantitatively, the improvements seem marginal, with no significant improvement shown. I would have liked a better study of the tradeoff between visual quality and diversity, if necessary at all.\\n\\nHowever, the authors addressed the issues well. Overall, the idea is interesting and simple and, while the paper could be improved with some more work, it would benefit the ICLR readership in its current form, so I would recommend it as a poster -- I am increasing my score to that effect.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting idea with good experimental validation\", \"review\": \"The paper proposes a method for generating diverse outputs for various conditional GAN frameworks including image-to-image translation, image-inpainting, and video prediction. The idea is quite simple, simply adding a regularization term so that the output images are sensitive to the input variable that controls the variation of the images. (Note that the variable is not the conditional input to the network.) The paper also shows how the regularization term is related to the gradient penalty term. The most exciting feature of the work is that it can be applied to various conditional synthesis frameworks for various tasks. The paper includes several experiments with comparison to the state-of-the-art. The achieved performance is satisfactory.\\n\\nTo the authors, wondering if the framework is applicable to unconditional GANs.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"comment\": \"Hey, I think our ICML paper http://proceedings.mlr.press/v80/odena18a.html qualifies as related work in this case.\\nIn particular, the Jacobian Clamping algorithm from that paper is pretty similar, though we bound the largest and smallest singular values of the generator jacobian and it looks like you approximately maximize the norm of the whole thing.\", \"title\": \"Related work :)\"}" ] }
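Much of the exchange above centers on the regularization term ||G(x,z1) - G(x,z2)|| / ||z1 - z2|| and the optional upper bound tau from the paper's Equation 2. The sketch below shows one possible way such a term could be implemented; the L1 distances, the epsilon, and the sign convention are illustrative assumptions and may differ from the authors' actual code.

```python
import torch

def diversity_sensitive_term(G, x, z_dim, tau=None, eps=1e-8):
    # Draw two latent codes and measure how much the generator output moves
    # relative to how far apart the codes are; a larger ratio means more diversity.
    z1 = torch.randn(x.size(0), z_dim, device=x.device)
    z2 = torch.randn(x.size(0), z_dim, device=x.device)
    out_dist = (G(x, z1) - G(x, z2)).flatten(1).abs().mean(dim=1)
    z_dist = (z1 - z2).abs().mean(dim=1) + eps
    ratio = out_dist / z_dist
    if tau is not None:
        ratio = torch.clamp(ratio, max=tau)  # optional bound for numerical stability
    # Returned with a negative sign so that adding lambda * this value to the
    # generator loss maximizes the diversity ratio.
    return -ratio.mean()
```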
BkgiM20cYX
A Self-Supervised Method for Mapping Human Instructions to Robot Policies
[ "Hsin-Wei Yu", "Po-Yu Wu", "Chih-An Tsao", "You-An Shen", "Shih-Hsuan Lin", "Zhang-Wei Hong", "Yi-Hsiang Chang", "Chun-Yi Lee" ]
In this paper, we propose a modular approach which separates the instruction-to-action mapping procedure into two separate stages. The two stages are bridged via an intermediate representation called a goal, which stands for the result after a robot performs a specific task. The first stage maps an input instruction to a goal, while the second stage maps the goal to an appropriate policy selected from a set of robot policies. The policy is selected with an aim to guide the robot to reach the goal as close as possible. We implement the above two stages as a framework consisting of two distinct modules: an instruction-goal mapping module and a goal-policy mapping module. Given a human instruction in the evaluation phase, the instruction-goal mapping module first translates the instruction to a robot-interpretable goal. Once a goal is derived by the instruction-goal mapping module, the goal-policy mapping module then follows up to search through the goal-policy pairs to look for policy to be mapped by the instruction. Our experimental results show that the proposed method is able to learn an effective instruction-to-action mapping procedure in an environment with a given instruction set more efficiently than the baselines. In addition to the impressive data-efficiency, the results also show that our method can be adapted to a new instruction set and a new robot action space much faster than the baselines. The evidence suggests that our modular approach does lead to better adaptability and efficiency.
[ "goal", "mapping module", "instruction", "human instructions", "policies", "modular", "mapping procedure", "stages", "robot", "policy" ]
https://openreview.net/pdf?id=BkgiM20cYX
https://openreview.net/forum?id=BkgiM20cYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Hkx7pRnAyV", "r1xT4mNRnm", "HJlkE0Bqh7", "rJg57h2UhQ" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544634042630, 1541452597254, 1541197351117, 1540963362163 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1295/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1295/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1295/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1295/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes a novel approach to interfacing robots with humans, or rather vv: by mapping instructions to goals, and goals to robot actions. A possibly nice idea, and possibly good for more efficient learning.\\n\\nBut the technical realisation is less strong than the initial idea. The original idea merits a good evaluation, and the authors are strongly encouraged to follow up on this idea and realise it, towards a stronger publication.\\n\\nIt be noted that the authors refrained from using the rebuttal phase.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"nice, but unripe\"}", "{\"title\": \"Overall idea is interesting, but novelty is limited and evaluation is poor\", \"review\": [\"The paper proposes a modular approach to the problem of mapping instructions to robot actions. The first of two modules is responsible for learning a goal embedding of a given instruction using a learned distance function. The second module is responsible for mapping goals from this embedding space to control policies. Such a modular approach has the advantage that the instruction-to-goal and goal-to-policy mappings can be trained separately and, in principle, allow for swapping in different modules. The paper evaluates the method in various simulated domains and compares against RL and IL baselines.\", \"STRENGTHS\", \"Decoupling instruction-to-action mapping by introducing goals as a learned intermediate representation has advantages, particularly for goal-directed instructions. Notably, these together with the ability to train the components separately will generally increase the efficiency of learning.\", \"WEAKNESSES\", \"The algorithmic contribution is relatively minor, while the technical merits of the approach are questionable.\", \"The goal-policy mapping approach would presumably restrict the robot to goals experienced during training, preventing generalization to new goals. This is in contrast to semantic parsing and symbol grounding models, which exploit the compositionality of language to generalize to new instructions.\", \"The trajectory encoder operates differently for goal-oriented vs. trajectory-oriented instructions, however it is not clear how a given instruction is identified as being goal- vs. trajectory-oriented.\", \"While there are advantages to training the modules separately, there is a risk that they are reasoning over different portions of the goal space.\", \"A contrastive loss would seemingly be more appropriate for learning the instruction-goal distance function.\", \"The goal search process relies on a number of user-defined parameters\", \"The nature of the instructions used for experimental evaluations is unclear. Are they free-form instructions? How many are there? Where do they come from? 
How different are the familiar and unfamiliar instructions?\", \"Similarly, what is the nature of the different action spaces?\", \"The domains considered for experimental evaluation are particularly simple. It would be better to evaluate on one of the few common benchmarks for robot language understanding, e.g., the SAIL corpus, which considers trajectory-oriented instructions.\", \"The paper provides insufficient details regarding the RL and IL baselines, making it impossible to judge their merits.\", \"The paper initially states that this distance function is computed from learned embeddings of human demonstrations, however these are presumably instructions rather than demonstrations.\", \"I wouldn't consider the results reported in Section 4.5 to be ablative studies.\", \"The paper incorrectly references Mei et al. 2016 when stating that methods require a large amount of human supervision (data annotation) and/or linguistic knowledge. In fact Mei et al. 2016 requires no human annotation or linguistic knowledge.\", \"Relevant to the discussion of learning from demonstration for language understanding is the following paper by Duvallet et al.\", \"Duvalet, Kollar, and Stentz, \\\"Imitation learning for natural language direction following through unknown environments,\\\" ICRA 2014\", \"The paper is overly verbose and redundant in places.\", \"There are several grammatical errors\", \"The captions for Figures 3 and 4 are copied from Figure 1.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Proposed method has several limitations, experimental setup is unclear and the results are not convincing.\", \"review\": [\"This submission proposes a method for learning to follow instructions by splitting the policy into two stages: human instructions to robot-interpretable goals and goals to actions. The authors claim to achieve better data efficiency, adaptability, and generalization as compared to the baselines.\", \"Here are some comments/questions:\", \"One of the biggest limitations of the proposed method is that it can only work for one-to-one or many-to-one mapping of instructions to goals. As I understand (please correct me if I am wrong), the method can not work for contextual instructions where the goal depends on the environment and the same instruction can map to different goals, such as 'Go to the largest/farthest object'.\", \"Another limitation of the method is that it requires a set of goals G, which is not trivial to obtain especially in partially observable environments such as embodied navigation in 3D space.\", \"The experimental setup is unclear and several crucial details are missing:\", \"\\\"An instruction for approaching one of the five targets in the arena is generated and passed to the agent at first.\\\" -> how is the instruction generated?\", \"There's no example of the environment or the instruction in the submission\", \"\\\"Within the instruction become approaching more than one targets, one of two added targets is selected as internal targets pair with one of the remaining targets.\\\" I do not understand this sentence. How are the targets generated in the trajectory-oriented task? 
How are the instructions generated in this task?\", \"Experimental results are not convincing:\", \"The introduction motivates the need for understanding human instructions and the abstract says 'Given a human instruction', but I believe experiments do not have any human instructions.\", \"All the environments seem to be fully-observable, it is not clear whether the method would work in partially-observable environments.\", \"Only vanilla PPO and BC cloning are used as baselines. There are several competing methods for following instructions which the authors cite such as Hermann et al. 2017, Chaplot et al. 2017, Misra et al. 2017, etc. Why weren't any of these approaches used as a baseline?\", \"The submission requires proof-reading, there are several typos in the manuscript (some are listed below), some of them make it very difficult to understand the setting.\", \"Typos:\", \"Sec 3.1 on Pg 4 mentions 'CEM' multiple times, it's not defined until 3.3.2 on Pg 6.\", \"Pg 3 Theses sets -> These sets\", \"Pg 7 where the Reacher pointing at -> where the Reacher is pointing at\", \"Pg 7 What reacher observes the word is its fingertip\\u2019s position, coordinates in two dimension. -> something is wrong in this sentence.\", \"Pg 7 Then comes to the trajectory-oriented task, there are only a few differences from above -> something is wrong in this sentence.\", \"Pg 7 Within the instruction become approaching more than one targets -> something is wrong here\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Review\", \"review\": \"This paper presents an instruction-following model consisting of two modules: a\\ngoal-prediction model that maps commands to goal representations, and an\\nexecution model that maps goal representations to policies. The second module is\\ntrained without command supervision via a goal exploration process, while the\\nfirst module is trained supervisedly in a metric learning framework.\\n\\nThis paper contains an important core insight---much of what's hard about\\ninstruction following is generic planning behavior that doesn't depend on the\\nsemantics of instructions, and pre-learning this behavior makes it possible to\\nuse natural language supervision more effectively. However, the paper also\\ncontains a number of serious evaluation and presentation issues. It is obviously\\nnot ready to publish (uncaptioned figures, paragraphs interrupted mid-sentence,\\netc.) and should not have been submitted to ICLR in its present form.\\n\\nSUPERVISION AND COMPARISONS\\n\\nI found comparisons between supervision conditions in this paper difficult to\\nunderstand. It is claimed that the natural language instruction following\\napproaches described in the first paragraph \\\"require a large amount of human\\nsupervision\\\" in the form of action sequences. This is not exactly true, as some\\napproaches (e.g. Artzi 2013), can be trained with only task completion signals.\\nMore problematically, all these approaches are contrasted with reinforcement and\\nimitation learning approaches, which are claimed to use \\\"little human\\nsupervision\\\". In fact, most of the approaches listed in this section use exactly\\nthe same supervision---either action sequences (imitation learning) or task\\ncompletion signals (reinforcement learning). 
Indeed, the primary distinction is\\nthat the \\\"NLP-style\\\" approaches are typically evaluated on their ability to\\ngeneralize to new instructions, while the \\\"RL-style\\\" approaches are evaluated on\\nthe (easier) problem of fitting the complete instruction distribution as quickly\\nas possible.\\n\\nThis confusion carries into the evaluation of the approach proposed in this\\npaper, which is compared to RL and IL baselines. It's hard to tell from the\\ntext, but it appears that this is an \\\"RL-style\\\" evaluation setting, where we\\nonly care about rapid convergence rather than generalization. But the baselines\\nare inadequately described, and it's not clear to me that they condition on the\\ncommands at all. More significantly, it's not clear what an evaluation based on\\n\\\"timesteps\\\" means for a behavior-cloning approach---is this the number of\\ndistinct trajectories observed? The number of gradient steps taken? Without\\nthese explanations it is impossible to interpret the experimental results.\\n\\nGENERALITY OF PROPOSED APPROACH\\n\\nDespite the advantages of the high-level two-phase model proposed, the specific\", \"implementation_in_this_paper_has_two_significant_shortcomings\": \"- No evidence that it works with real language: despite numerous claims\\n throughout the paper that the model is designed to interpret \\\"human\\n instructions\\\", it is revealed on p7 that these instructions consist of one or two\\n 5-way indicator features. This is an extremely impoverished instruction space,\\n especially compared to the numerous papers cited in the introduction that make\\n use of large datasets of complex natural-language strings generated by human\\n annotators. The present experiments do not support the use of the word \\\"human\\\"\\n anywhere in the paper.\\n\\n- No support for combinatorial action spaces. Even if we set aside the\\n distinctions between human-generated instructions and synthetic command\\n languages like used in Hermann Hill & al., the goal -> policy module is\\n defined by a buffer of cached trajectories and goal representations. While\\n this works for the simple environments considered in this paper, it cannot\\n generalize to real-world instruction-following scenarios where the number of\\n distinct goal configurations is too large to tractably enumerate. Again, this\\n is a shortcoming that existing approaches do not suffer from (given\\n appropriate assumptions about the structure of goal space), so the lack of\\n comparisons is problematic.\\n\\nCLARITY\\n\\nThe whole paper would benefit from copy-editing by an experienced English\\nspeaker, but a few sections are particularly problematic:\\n\\n- The first paragraph of 4.1.1 is extremely difficult to understand What does\\n the fingertip do? What exactly is the action space?\\n\\n- The end of the second paragraph is also difficult to understand; after reading\\n it I still don't know what the extra \\\"position\\\" targets do.\\n\\n- 4.1.4 is cut off mid-way through a sentence.\\n\\n- last sentence of 4.2\", \"the_figures_are_also_impossible_to_interpret\": \"three of the four are captioned\\n\\\"overview of the proposed framework\\\", and none are titled.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
S1lqMn05Ym
Information asymmetry in KL-regularized RL
[ "Alexandre Galashov", "Siddhant M. Jayakumar", "Leonard Hasenclever", "Dhruva Tirumala", "Jonathan Schwarz", "Guillaume Desjardins", "Wojciech M. Czarnecki", "Yee Whye Teh", "Razvan Pascanu", "Nicolas Heess" ]
Many real world tasks exhibit rich structure that is repeated across different parts of the state space or in time. In this work we study the possibility of leveraging such repeated structure to speed up and regularize learning. We start from the KL regularized expected reward objective which introduces an additional component, a default policy. Instead of relying on a fixed default policy, we learn it from data. But crucially, we restrict the amount of information the default policy receives, forcing it to learn reusable behaviors that help the policy learn faster. We formalize this strategy and discuss connections to information bottleneck approaches and to the variational EM algorithm. We present empirical results in both discrete and continuous action domains and demonstrate that, for certain tasks, learning a default policy alongside the policy can significantly speed up and improve learning. Please watch the video demonstrating learned experts and default policies on several continuous control tasks ( https://youtu.be/U2qA3llzus8 ).
[ "Deep Reinforcement Learning", "Continuous Control", "RL as Inference" ]
https://openreview.net/pdf?id=S1lqMn05Ym
https://openreview.net/forum?id=S1lqMn05Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HJlqtuYllE", "rJxZQQzMCX", "BJxuzZMMCm", "B1gITyzGCQ", "rygVCXQWR7", "HJxrOYAJpX", "S1gXUjKKhm", "Bye57sxdhQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544751234436, 1542755097489, 1542754576337, 1542754237925, 1542693835914, 1541560685241, 1541147466988, 1541045026069 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1293/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1293/Authors" ], [ "ICLR.cc/2019/Conference/Paper1293/Authors" ], [ "ICLR.cc/2019/Conference/Paper1293/Authors" ], [ "ICLR.cc/2019/Conference/Paper1293/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1293/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1293/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1293/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"Strengths\\n\\nThe paper introduces a promising and novel idea, i.e., regularizing RL via an informationally asymmetric default policy \\nThe paper is well written. It has solid and extensive experimental results.\\n\\nWeaknesses\\n\\n\\nThere is a lack of benefit on dense-reward problems as a limitation, which the authors further\\nacknowledge as a limitation. There also some similarities to HRL approaches. \\nA lack of theoretical results is also suggested. To be fair, the paper makes a number of connections\\nwith various bits of theory, although it perhaps does not directly result in any new theoretical analysis.\\nA concern of one reviewer is the need for extensive compute, and making comparisons to stronger (maxent) baselines.\\nThe authors provide a convincing reply on these issues.\\n\\nPoints of Contention\\n\\nWhile the scores are non-uniform (7,7,5), the most critical review, R1(5), is in fact quite positive on many\\naspects of the paper, i.e., \\\"this paper would have good impact in coming up with new \\nlearning algorithms which are inspired from cognitive science literature as well as mathematically grounded.\\\"\\nThe specific critiques of R1 were covered in detail by the authors.\\n\\nOverall\\n\\nThe paper presents a novel and fairly intuitive idea, with very solid experimental results. \\nWhile the methods has theoretical results, the results themselves are more experimental than theoretic.\\nThe reviewers are largely enthused about the paper. The AC recommends acceptance as a poster.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"intuitive idea & theoretical connections; solid experimental results\"}", "{\"title\": \"Thank you for the insightful comments. 1. Many modern algorithms use actor-critic architecture though we will investigate other algorithms in the future work. 2. Theory in the follow-up. 3. Improved clarity\", \"comment\": \"The experiment results of this paper are interesting [...] I recommend the authors to try to provide intuitive explanation for all such interesting observations in the paper.\", \"answer\": \"We did attempt to provide an intuitive explanation for such observations, e.g. on page 7, where we discuss \\u201cdense-reward\\u201d case. (There is also appendix E, with additional experimental results and explanations). But we agree with the reviewer that the clarity of these points can be improved. Please see the updated version of the paper with improved clarity of the interesting observations in the paper. 
For the dense reward case in particular please also see our reply to AnonReviewer2 above.\", \"references\": \"[1] Espeholt L., Soyer. H., Munos R., Simonyan K., Mnih V., Ward T., Doron Y., Firoiu V., Harley T., Dunning I., Legg S., Kavukcuoglu K., \\\"IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures\\\", 2018, https://arxiv.org/abs/1802.01561\\n\\n[2] Heess N., Wayne G., Silver D., Lillicrap T., Tassa Y., Erez T., \\\"Learning Continuous Control Policies by Stochastic Value Gradients\\\", 2015, https://arxiv.org/abs/1510.09142\\n\\n[3] Munos R., Stepleton T., Harutyunyan A., Bellemare M.G., \\\"Safe and Efficient Off-Policy Reinforcement Learning\\\", 2016, https://arxiv.org/abs/1606.02647\\n\\n[4] Haarnoja T., Zhou A., Abbeel P., Levine S., \\\"Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor\\\", 2018, https://arxiv.org/abs/1801.01290\"}", "{\"title\": \"Thank you for the comment. 1. Scale of our experiments is comparable to the existing DeepRL approaches. 2. We eventually compare to similar to SAC baseline\", \"comment\": \"It becomes more important to compare to stronger baselines like maximum entropy RL (Soft Actor Critic)[...]\", \"answer\": \"We effectively provide this baseline already [Figure 5, left.]. We compare to two entropy regularized baselines: SVG(0) [3] with entropy regularization, and SVG(0) with entropy bonus. The former optimizes the entropy regularized expected reward objective using a Q-function and policy updates via reparametrization in the same way as SAC. The latter also implements SVG(0) policy updates (Q function + reparametrization and back-propagation) but includes entropy only in the policy update as e.g. is general practice in many DRL papers (e.g. [4]). We optimize hyperparameters for each algorithm separately. Compared to the SAC algorithm there are some minor differences in the use of target networks, which are, however, orthogonal to the ideas of our paper (which in unreported experiments made no qualitative difference to the results).\", \"references\": \"[1] Espeholt L., Soyer. H., Munos R., Simonyan K., Mnih V., Ward T., Doron Y., Firoiu V., Harley T., Dunning I., Legg S., Kavukcuoglu K., \\\"IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures\\\", 2018, https://arxiv.org/abs/1802.01561\\n[2] Haarnoja T., Zhou A., Abbeel P., Levine S., \\\"Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor\\\", 2018, https://arxiv.org/abs/1801.01290\\n[3] Heess N., Wayne G., Silver D., Lillicrap T., Tassa Y., Erez T., \\\"Learning Continuous Control Policies by Stochastic Value Gradients\\\", 2015, https://arxiv.org/abs/1510.09142\\n[4] Asynchronous Methods for Deep Reinforcement Learning, Mnih V., Badia A.P., Mirza M., Graves A., Lillicrap T.P., Harley T., Silver D., Kavukcuoglu K., 2016, https://arxiv.org/pdf/1602.01783.pdf\"}", "{\"title\": \"Thank you. Summary: 1. HRL differs from our setup to be directly comparable. The connections are left for future work. 2. Dense-reward setting is too simple for the regularizer to give a significant improvement\", \"comment\": \"As mentioned, the proposed method does not offer significant speed-up in dense-reward settings. [...] 
it'd be nice to have experiments to show that for some environments the proposed method can out-perform baseline methods even in dense-reward settings.\", \"answer\": \"As we mention in the paper, in the dense-reward setup the problem of learning the policy with KL-regularization to the default one, is not simpler than regular policy learning. We explain it in the appendix E.1 that it is probably due to already strong reward signal. If everywhere in the state space we get a sufficient learning signal to learn the relevant behavior then the point of the default policy (which should help to provide a structured exploration strategy, asking the agent to act consistently to other regions it has seen and learned) is somewhat reduced. Nevertheless, in the Appendix E, we provide additional results for the dense-reward tasks and show that the current method performance doesn\\u2019t become worse comparing to the baseline. Our intuition is that an example where our method would also help in the dense-reward scenario would be the one with weak shaping reward combined with complex action space (e.g. humanoid). Finding such scenarios is left for the follow-up work.\"}", "{\"title\": \"thanks for the reviews; authors response?\", \"comment\": \"Thanks for the detailed review comments thus far.\\nDo the authors wish to add anything or respond in any way?\\n-- area chair\"}", "{\"title\": \"Novel approach\", \"review\": \"This paper shows that significant speed-up gains can be achieved by using KL-regularization with information asymmetry in sparse-reward settings. Different from previous works, the policy and default policy are learned simultaneously. Furthermore, it demonstrates that the default policy can be used to perform transfer learning.\", \"pros\": [\"Overall the paper is well-written and the organization is easy to follow. The approach is novel and most relevant works are compared and contrasted. The intuitions provided nicely complements the concepts and experiments are thorough.\"], \"cons\": [\"The idea of separating policy and default policy seems similar to having high and low level controller (HLC and LLC) in hierarchical control -- where LLC takes proprioceptive observations as input, and HLC handles task specific goals. In contrast, one advantage of the proposed method in this work is that the training is end-to-end. Would have liked to see comparison between the proposed method and hierarchical control.\", \"As mentioned, the proposed method does not offer significant speed-up in dense-reward settings. Considering that most of the tasks experimented in the paper can leverage dense shaping to achieve speed-up over sparse rewards, it'd be nice to have experiments to show that for some environments the proposed method can out-perform baseline methods even in dense-reward settings.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"This is a very interesting piece of work. We know from cognitive science literature, that there are 2 distinct modes of decision making - habit based and top-down control (goal directed) decision making. The paper proposes to use this intuition by using information theoretic objective such that the agent follows \\\"default\\\" policy on average and agent gets penalized for changing its \\\"default\\\" behaviour, and the idea is to minimize this cost on average across states.\\n\\nThe paper is very well written. 
I think this paper would have a good impact in coming up with new learning algorithms which are inspired by the cognitive science literature as well as mathematically grounded. But I don't think the paper in its current form is suitable for publication. \\n\\nThere are several reasons, but the most important are:\\n\\n1) Most of the experiments in this paper use on the order of 10^9 or even 10^10 steps. It's practically not possible for anyone in academia to have such compute. Now, that said, I do think this paper is pretty interesting. Hence, is it possible to construct a toy problem which has similar characteristics, and then show similar results using around 10^6 or 10^7 steps? I think it would be easy to construct a 2D POMDP maze navigation env and test similar results. This would improve the paper, as well as provide a baseline which people in the future can compare to.\\n\\n2) It becomes more important to compare to stronger baselines like maximum entropy RL (for ex. Soft Actor Critic), and to spend a good amount of time getting these baselines right on these new environments.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Good work in general\", \"review\": \"-- Originality --\\n\\nThis paper studies how to use KL-regularization with information asymmetry to speed up and improve reinforcement learning (RL). Compared with existing work, the major novelty in the proposed algorithm is that it uses a default policy learned from data, rather than a fixed default policy. Moreover, the proposed algorithm also limits the amount of information the default policy receives, i.e., there is an \\\"information asymmetry\\\" between the agent policy and the default policy. In many applications, the default policy is purposely chosen to be \\\"goal agnostic\\\" and hence conducts the \\\"transfer learning\\\". To the best of my knowledge, this \\\"informationally asymmetric\\\" KL-regularization approach is novel.\\n\\n-- Clarity --\\n\\nThe paper is well written in general and is easy to follow.\\n\\n-- Significance --\\n\\nI think the idea of regularizing RL via an informationally asymmetric default policy is interesting. It might be an efficient way to do transfer learning (generalization) in some RL applications. This paper has also done extensive and rigorous experiments. Some experiment results are thought-provoking.\\n\\n-- Pros and Cons\", \"pros\": \"1) The idea of regularizing RL via an informationally asymmetric default policy is interesting. It might be an efficient way to do transfer learning (generalization) in some RL applications. To the best of my knowledge, this \\\"informationally asymmetric\\\" KL-regularization approach is novel.\\n\\n2) The experiment results are extensive, rigorous, and thought-provoking.\", \"cons\": \"1) My understanding is that this \\\"informationally asymmetric\\\" KL-regularization approach is a general approach and can be combined with many policy learning algorithms. It is not completely clear to me why the authors choose to combine it with an actor-critic approach (see Algorithm 1)? Why not combine it with other policy learning algorithms? Please explain.\\n\\n2) This paper does not have any theoretical results. I fully understand that it is highly non-trivial or even impossible to analyze the proposed algorithm in the general case. However, I recommend the authors to analyze (possibly a variant of) the proposed algorithm in a simplified setting (e.g. 
the network has only one layer, or even is linear) to further strengthen the results.\\n\\n3) The experiment results of this paper are interesting, but I think the authors can do a better job of intuitively explaining the experiment results. For instance, the experiment results show that when the reward is \\\"dense shaping\\\", the proposed method and the baseline perform similarly. Might the authors provide an intuitive explanation for this observation? I recommend the authors to try to provide intuitive explanation for all such interesting observations in the paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
rkg5fh0ctQ
Transferring SLU Models in Novel Domains
[ "Yaohua Tang", "Kaixiang Mo", "Qian Xu", "Chao Zhang", "Qiang Yang" ]
Spoken language understanding (SLU) is a critical component in building dialogue systems. When building models for novel natural language domains, a major challenge is the lack of data in the new domains, no matter whether the data is annotated or not. Recognizing and annotating ``intent'' and ``slot'' of natural languages is a time-consuming process. Therefore, spoken language understanding in low resource domains remains a crucial problem to address. In this paper, we address this problem by proposing a transfer-learning method, whereby a SLU model is transferred to a novel but data-poor domain via a deep neural network framework. We also introduce meta-learning in our work to bridge the semantic relations between seen and unseen data, allowing new intents to be recognized and new slots to be filled with much lower new training effort. We show the performance improvement with extensive experimental results for spoken language understanding in low resource domains. We show that our method can also handle novel intent recognition and slot-filling tasks. Our methodology provides a feasible solution for alleviating data shortages in spoken language understanding.
[ "transfer learning", "semantic representation", "spoken language understanding" ]
https://openreview.net/pdf?id=rkg5fh0ctQ
https://openreview.net/forum?id=rkg5fh0ctQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SyxTD03x1N", "B1luOHqK07", "H1g8yS5YCQ", "SylXI4cKRX", "HJg3RTKFC7", "ryxYCndyTm", "HJghT0Dqnm", "BJxtUzpFhQ", "Skx-_1juhQ" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1543716452725, 1543247216500, 1543247070512, 1543246923163, 1543245268367, 1541536976689, 1541205700344, 1541161553203, 1541087081252 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1290/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1290/Authors" ], [ "ICLR.cc/2019/Conference/Paper1290/Authors" ], [ "ICLR.cc/2019/Conference/Paper1290/Authors" ], [ "ICLR.cc/2019/Conference/Paper1290/Authors" ], [ "ICLR.cc/2019/Conference/Paper1290/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1290/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1290/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1290/AnonReviewer1" ] ], "structured_content_str": [ "{\"title\": \"Thanks for your responses\", \"comment\": \"Thanks a lot for elaborating the difference and adding the discussions. I think that the score 6 (above the threshold) is reasonable for this paper.\"}", "{\"title\": \"explanation on contributions\", \"comment\": \"We want to thank you for your kind and helpful feedback. We've followed your comments and addressed the related concerns in our revision as follows:\\n1) \\\"Hard to distinguish the authors' contributions from previous works\\\". \\nOur model differs from the baseline model by Goyal et al. in three aspects. First, the most important difference is that Goyal et al. treated intents and slots as discrete symbols and train classifiers to make predictions on these symbols. Such an approach limits the knowledge transfer between two domains as the classifier layers (affine transform leading to the softmax) following the upper two bi-LSTMs layers need be re-initialized when we transfer the model via fine-tuning in new domains, where the output labels are different. In our model, we encode intents and slots as continuous representations via their constituent words. The classification problems are transformed to semantic similarity problems and the whole model parameters could be transferred to new domains. \\n\\n The second difference is the usage of gazetteers (lists of slot values for a slot). Goyal et al. used gazetteer features as an additional input. Such features are binary indicators of the presence of an n-gram in a gazetteer. In our model, we used an attention network (Section 3.3) to encode external slot values into a semantic vector that represents the slot from the value perspective, which suits our semantic framework naturally.\\n \\n Finally, there are no connections between the upper two layers in the baseline by Goyal et al.. However, we believe that the output of intent specific bi-LSTMs layer could benefit the slot detection task. As one of our contributions, in our network, we concatenate the output of common bi-LSTMs layer and intent bi-LSTMs layer to feed to the slot bi-LSTMs layer. \\n \\n2) \\\"Authors do not discuss the objective\\\" The overall objective function for the multitask network completes two tasks: an intent classification task and a slot-filling task. We observe that this multitask architecture achieves better results than separately training intent and slot models. In addition, we have the added advantage of having a single model to do these tasks with a smaller total parameter size. 
A discussion of this is added in Section 3.4 of the paper.\\n \\n3) \\\"Authors do not detail exactly how the model is fine-tuned\\\". We tried fine-tuning different components of the model: we fixed some parts and fine-tuned the others. After experimenting with several combinations of components, we found that fine-tuning the whole model gives the best performance. We have added this discovery to the manuscript.\\n \\n4) \\\"This fine-tuning runs the risk of overfitting\\\". In the target domain, we also have a validation set to control the fine-tuning so that there is low risk of overfitting on the target data. We plan to consider different regularization methods in future work.\\n \\n5) \\\"Some curious dips in the plot\\\". A discussion of the dips has been added in Section 4.1. Basically, we observed that almost all dips in the plots are related to the intent classification task. This might be related to the distribution of intent labels, which changed when we removed some data from the original training data set in the target domain. The change in data distribution increased the difficulty of learning.\\n\\n6) All figures have been modified so they are readable in grey-scale.\"}", "{\"title\": \"explanation on novelty\", \"comment\": \"We want to thank you for your kind and helpful feedback. We've followed your comments and addressed the related concerns in our revision as follows:\\n\\n1) \\\"Pretrain and finetune is straight-forward\\\" With the success of ImageNet, fine-tuning in image processing is very common practice. But in the NLP area, especially for SLU tasks, the process whereby a model is pre-trained on a large dataset and then fine-tuned on a separate small target dataset has not been well studied previously. In addition, fine-tuning the whole model is not straight-forward. We found that fine-tuning the whole model gives us the best performance. We have added this discovery to the manuscript. In the revision, we proposed to use semantic similarity as a bridge to transfer from one domain to other domains. \\n\\n2) Our work differs from previous works such as Goyal et al. in three ways. First, we encode intents and slots as continuous representations by their constituent words. This allows us to better handle cold-start intents that are new and slots that cannot be handled by the baselines. Secondly, we use an attention network instead of treating gazetteers (slot values) as binary indicators in order to better calculate the similarity between unseen slot values and intents. Third, we discover that the intent prediction can directly help the slot prediction task, and we model it by adding direct links from the intent prediction to the slot prediction output. \\n\\n3) \\\"Improvements ... to Hotel is not obvious\\\". The improvement of the \\\"intent2slot\\\" mechanism is not obvious when transferring knowledge to the \\\"Hotel\\\" domain, because the correlation between intents and slots is relatively low in the \\\"Hotel\\\" domain. In other words, most slots can co-occur with any intent. As a result, identifying a slot cannot help us to identify the intent of the sentence.\\n\\n4) We have proofread and corrected the document.\"}", "{\"title\": \"explanation on baselines and performance issues\", \"comment\": \"We want to thank you for your kind and helpful feedback. 
We've followed your comments and addressed the related concerns in our revision as follows:\\n1) \\\"should be more prior work\\\".\", \"reply\": \"In some cases, the proposed TSSM is not better than the baseline. This is because our model has more parameters than the baseline DNN model and in some cases a simpler model performs better. However, in many practical domains such as our newly added real-world experiments, TSSM outperforms the DNN consistently. Discussion on these points is added in Section 4.3. Later, in our real-world production environment, in order to reduce model size, we removed some parameters, such as the affine transformation matrix in the semantic networks, and the refined model's performance became even better.\"}", "{\"title\": \"Summary of paper revisions\", \"comment\": \"We want to thank the reviewers for their kind and helpful feedback. We've followed all reviewers' comments and addressed the related concerns in our revision as follows:\\n\\n1) Our work differs from previous works, such as Goyal et al., in three ways. First, we encode users\\u2019 intents and slots as continuous representations via constituent words in order to handle novel and cold-start intents and slots that cannot be handled by the baseline methods. \\nSecond, we use an attention network instead of treating gazetteers (slot values) as binary indicators in order to better calculate the similarity between unseen slot values and intents. Results indicate that our methods are better than the baselines. Third, we discover that the predicted intents and the predicted slots might have high correlation values, and we model this correlation by adding direct links from the intent prediction to the slot prediction to improve effectiveness.\\n\\n2) We report two experimental results on three additional Chinese real-world datasets in the revised version. These datasets are collected from three products in our work as a result of our online autonomous customer services. The data and observations were all from real industrial production. The results show the effectiveness and generality of the proposed model on large-scale real-world problem settings. \\n\\n3) We revise *Section 3.1* to better explain the structure of the proposed model, and we proofread the whole paper multiple times.\\n\\nWe hope the reviewers can kindly reconsider our paper for publication after revision.\\nBelow, we answer questions from each reviewer separately.\"}", "{\"metareview\": \"This paper proposes a transfer learning approach, based on previous works in this area, to build language understanding models for new domains. Experimental results show improved performance in comparison to previous studies in terms of slot and intent accuracies in multiple setups.\\nThe work is interesting and useful, but is not novel given the previous work.\\nThe paper organization is also not great; for example, the intro should introduce the approach beyond just mentioning transfer learning and meta-learning.\\nThe improvements over the baselines look good, but the baselines themselves are quite simple. It'd be better to include comparisons with other state-of-the-art methods. 
Also, the improvements over DNN are not consistent; it would be good to analyze and come up with suggestions on when to use which approach.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Running short on novelty, but richer on the experimental side\"}", "{\"title\": \"The basic idea is not fully original, but the task is important and experiments are clear and complete.\", \"review\": \"This paper focuses on dealing with a scenario where there are \\\"unseen\\\" intents or slots, which is very important from the application perspective.\\n\\nThe proposed approach, TSSM, tries to form the embeddings for such unseen intents or slots with little training data in order to detect a new intent or slot in the current input.\\nThe basic idea in the model is to learn the representations of utterances and intents/slots such that utterances with the same intents/slots are close to each other in the learned semantic space.\\nThe experiments demonstrate the effectiveness of TSSM in the few-shot learning scenarios.\\nThe idea about intent embeddings for zero-shot learning is not fully original (Chen, et al., 2016), but this paper extends it to both intent classification and slot filling. \\n\\nThe paper tests the performance in different experimental settings, but the baselines used in the experiments are a concern.\\nThis paper only compares with simple baselines (MaxEntropy, CRF, and basic DNN), but there should be more prior work or similar work that can be used for comparison in order to better justify the contributions of the model.\\nIn addition, this paper only shows the curves and numbers in the experiments, but it is better to discuss some cases in the qualitative analysis, which may highlight the contributions of the paper.\\nAlso, in some figures of Fig. 2, the proposed TSSM is not better than DNN, so adding explanation and discussion may be better.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The topic is interesting but the novelty is incremental\", \"review\": \"In this paper, an efficient SLU model, called TSSM, is proposed to tackle the problem of insufficient training data for the task of spoken language understanding. TSSM considers the intent and slot detection as a unified multi-objective optimization problem which is addressed by a meta-learning scheme. The model is pre-trained on a large dataset and then fine-tuned on a small target dataset. Thus, the proposed TSSM can improve the model performance on a small dataset in new domains.\", \"pros\": \"1)\\tThe transfer learning of spoken language understanding is very interesting.\\n2)\\tThe proposed TSSM can integrate the task of intents and slots and take the relationship between intents and slots into consideration.\\n3)\\tFive datasets are used to evaluate the performance of the method.\", \"cons\": \"Overall, the novelty of this paper is incremental and some points are not clear. My main concerns are listed as follows.\\n1)\\tThe authors state that the knowledge transfer is the main contribution of this paper. However, as introduced in 3.5, the transfer scheme in which the model is first pre-trained on a large dataset and then fine-tuned on a small target dataset is very straightforward. 
For example, currently, almost all methods in the area of object recognition are pre-trained on ImageNet and then fine-tuned on a small dataset for particular tasks.\\n2)\\tThe authors also state that improvements for transferring from Restaurant, Laptop, TV, Atis to Hotel are not obvious. I think the results also need to be reported and the reasons why the improvement is not obvious should be provided and discussed.\\n3)\\tThe paper needs more proofreading and is not ready to be published, such as \\u201cA survey fnor transfer\\u201d and \\u201ca structured multi-objective optimization problems\\u201d.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Transferring SLU Models in Novel Domains\", \"review\": \"Summary: The authors present a network which facilitates cross-domain\\nlearning for SLU tasks where the goal is to resolve intents and\\nslots given input utterances. At a high level, the authors argue that\\nby fine-tuning a pre-trained version of the network on a small set of\\nexamples from a target-domain they can more effectively learn the\\ntarget domain than without transfer learning.\", \"feedback\": \"* An overall difficulty with the paper is that it is hard to\\ndistinguish the authors' contributions from previous works. For\\nexample, in Section 3.1, the authors take the model of Goyal et al. as\\na starting point but explain only briefly one difference\\n(concatenating hidden layers). In Section 3.2 the contributions\\nbecome even harder to disentangle. For example, how does this section\\nrelate to other word-embeddings papers cited in this section? Is the\\nproposed method a combination of previous works, and if not, what are\\nthe core new ideas?\\n\\n* Some sections are ad-hoc and should be justified/explained\\nbetter. For example, the objective, which ultimately determines the\\ntrained model behaviour, uses a product of experts formulation, yet the\\nauthors do not discuss this. Similarly, the overarching message, that\\nby fine-tuning a suitable model initialisation using small amounts of\\ndata from the target domain is fairly weak as the authors do not\\ndetail exactly how the model is fine-tuned. Presumably, given only a\\nsmall number of examples, this fine-tuning runs the risk of\\noverfitting, unless some form of regularisation is applied, but this\\nis not discussed.\\n\\n* Lastly, there are some curious dips in the plots (e.g., Figure 2 bottom left, Figure 3 top left, bottom left), which deserve more explanation. Additionally, the evaluation section could be improved if the scores were to show error-bars.\", \"minor\": \"All plots should be modified so they are readable in grey-scale.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
HJxqMhC5YQ
End-to-End Multi-Lingual Multi-Speaker Speech Recognition
[ "Hiroshi Seki", "Takaaki Hori", "Shinji Watanabe", "Jonathan Le Roux", "John R. Hershey" ]
The expressive power of end-to-end automatic speech recognition (ASR) systems enables direct estimation of the character or word label sequence from a sequence of acoustic features. Direct optimization of the whole system is advantageous because it not only eliminates the internal linkage necessary for hybrid systems, but also extends the scope of potential application use cases by training the model for multiple objectives. Several multi-lingual ASR systems were recently proposed based on a monolithic neural network architecture without language-dependent modules, showing that modeling of multiple languages is well within the capabilities of an end-to-end framework. There has also been growing interest in multi-speaker speech recognition, which enables generation of multiple label sequences from single-channel mixed speech. In particular, a multi-speaker end-to-end ASR system that can directly model one-to-many mappings without additional auxiliary clues was recently proposed. In this paper, we propose an all-in-one end-to-end multi-lingual multi-speaker ASR system that integrates the capabilities of these two systems. The proposed model is evaluated using mixtures of two speakers generated by using 10 languages, including mixed-language utterances.
[ "end-to-end ASR", "multi-lingual ASR", "multi-speaker ASR", "code-switching", "encoder-decoder", "connectionist temporal classification" ]
https://openreview.net/pdf?id=HJxqMhC5YQ
https://openreview.net/forum?id=HJxqMhC5YQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "SygVniYWgV", "HkeUS1ulpm", "H1gjdIl93m", "rklQVnJK3X" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544817579999, 1541599038116, 1541174898786, 1541106730567 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1289/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1289/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1289/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1289/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The authors present a system for end-to-end multi-lingual and multi-speaker speech recognition. The presented method is based on multiple prior works that propose end-to-end models for multi-lingual ASR and multi-speaker ASR; the work combines these techniques and shows that a single system can do both with minimal changes.\\n\\nThe main critique from the reviewers is that the paper lacks novelty. It builds heavily on existing work, and does not make any enough contributions to be accepted at ICLR. Furthermore, training and evaluations are all on simulated test sets that are not very realistic. So it is unclear how well the techniques would generalize to real use-cases. For these reasons, the recommendation is to reject the paper.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Limited novelty\"}", "{\"title\": \"not enough novelty\", \"review\": \"This paper presents an end-to-end system that can recognize single-channel multiple-speaker speech with multiple languages.\", \"pros\": [\"The paper is well written.\", \"It shows the existing end-to-end multi-lingual ASR (Seki et al., 2018b) and end-to-end multi-speaker ASR (Seki et al., 2018a) techniques can be combined without any change to achieve reasonable performance.\", \"It demonstrates the challenge of single-channel multi-lingual multiple-speaker speech recognition, and compares the performance of the multiple-speaker system on the mixed speech and the single-speaker system on the isolated speech.\"], \"cons\": [\"It lacks novelty: the proposed framework just simply combines the two existing techniques as mentioned above.\", \"The training and evaluation data are both artificially created by randomly concatenating utterances with different languages from different speakers with different context. I am not sure of how useful the evaluation is, since this situation is not realistic. Also, currently it cannot test the real code-switching since the utterances are not related and not from the same speaker.\", \"There are not enough analyses. E.g. it would be good to analyze what contributes to the gap between the single-speaker ASR system performance on the isolated speech and the multi-lingual multi-speaker ASR system on the mixed speech. 
How well does the proposed end-to-end framework perform compared to a two-step framework with speaker separation followed by multi-lingual single-speaker ASR?\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Applying known techniques to a non-problem\", \"review\": \"The authors propose to build a speech recognition system that has been trained to recognize a recording that has been produced by mixing multiple recordings from different languages together, and allowing for some code switching (also done artificially by concatenating different recordings).\\n\\nWhile this sounds fancy and like a hard problem, it is in fact easier than recognizing two speakers that have been mixed together speaking the same language, which has already been solved in (Seki, 2018a), from what I can tell. I don't see any contribution in this paper, other than explaining how to create an artificial (unrealistic) database of mixed speech in multiple languages, and then training a multi-speaker end-to-end speech recognition system on that database.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting but not good enough!\", \"review\": [\"This paper presents a framework to train an end-to-end multi-lingual multi-speaker speech recognition system. Overall, the paper is quite clearly written.\", \"Strengths:\", \"Experimental results show consistent improvements in speech recognition performance and language identification performance.\", \"Weaknesses:\", \"I'm not sure whether the framework is novel. The authors have just mixed training data from several languages to train an end-to-end multi-speaker speech recognition system.\", \"I don't see the real motivation why the authors want to make the task harder than needed. The example provided in figure 1 is very rare in reality.\", \"The authors claimed that their system can recognise code-switching, but actually randomly mixing data from different languages is not code-switching.\", \"In general, it would be better to have some more analyses showing what the system can do and why.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
BJx9f305t7
W2GAN: RECOVERING AN OPTIMAL TRANSPORT MAP WITH A GAN
[ "Leygonie Jacob*", "Jennifer She*", "Amjad Almahairi", "Sai Rajeswar", "Aaron Courville" ]
Understanding and improving Generative Adversarial Networks (GAN) using notions from Optimal Transport (OT) theory has been a successful area of study, originally established by the introduction of the Wasserstein GAN (WGAN). An increasing number of GANs incorporate OT for improving their discriminators, but that is so far the sole way for the two domains to cross-fertilize. In this work we address the converse question: is it possible to recover an optimal map in a GAN fashion? To achieve this, we build a new model relying on the second Wasserstein distance. This choice enables the use of many results from OT community. In particular, we may completely describe the dynamics of the generator during training. In addition, experiments show that practical uses of our model abide by the rule of evolution we describe. As an application, our generator may be considered as a new way of computing an optimal transport map. It is competitive in low-dimension with standard and deterministic ways to approach the same problem. In high dimension, the fact it is a GAN-style method makes it more powerful than other methods.
[ "Optimal Transportation", "Deep Learning", "Generative Adversarial Networks", "Wasserstein Distance" ]
https://openreview.net/pdf?id=BJx9f305t7
https://openreview.net/forum?id=BJx9f305t7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HkgdxdeZl4", "HygCeZ17J4", "Bk7EJWJXkE", "HkgYTx1myE", "Syluu219R7", "SJgFuC0NRX", "BJgkfCRVRQ", "H1gpnpREAX", "rkxEW30V0X", "BJxm4oA4R7", "HJla6JS52m", "SylozPPPnQ", "BkefZGUDnm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544779760155, 1543856373806, 1543856348432, 1543856320842, 1543269487569, 1542938224567, 1542938119466, 1542938036835, 1542937596473, 1542937386629, 1541193669014, 1541007122564, 1541001722392 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1288/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1288/Authors" ], [ "ICLR.cc/2019/Conference/Paper1288/Authors" ], [ "ICLR.cc/2019/Conference/Paper1288/Authors" ], [ "ICLR.cc/2019/Conference/Paper1288/Authors" ], [ "ICLR.cc/2019/Conference/Paper1288/Authors" ], [ "ICLR.cc/2019/Conference/Paper1288/Authors" ], [ "ICLR.cc/2019/Conference/Paper1288/Authors" ], [ "ICLR.cc/2019/Conference/Paper1288/Authors" ], [ "ICLR.cc/2019/Conference/Paper1288/Authors" ], [ "ICLR.cc/2019/Conference/Paper1288/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1288/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1288/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper introduces a W2GAN method for training GAN by minimizing 2-Wasserstein distance using\\nby computing an optimal transport (OT) map between distributions. However, the difference of previous works is not significant or clearly clarified as pointed out some of the reviewers. The advantage of W2GAN over standard WGAN is also superficially explained, and did not supported by strong empirical evidence.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"no clear practice advantage\"}", "{\"title\": \"Reconsidering Decision\", \"comment\": \"Dear reviewer,\\n\\nWe would appreciate that you reconsider your decision given the updated version of the paper. \\n\\nThanks a lot.\"}", "{\"title\": \"Reconsidering Decision\", \"comment\": \"Dear reviewer,\\n\\nWe would appreciate that you reconsider your decision given the updated version of the paper. \\n\\nThanks a lot.\"}", "{\"title\": \"Reconsidering decision\", \"comment\": \"Dear reviewer,\\n\\nWe would appreciate that you reconsider your decision given the updated version of the paper. \\n\\nThanks a lot.\"}", "{\"title\": \"Updated version of the paper\", \"comment\": \"Dear reviewers, we posted a new version of our paper, which includes the following modifications:\\n\\n1- We clarified the message of our paper, our contributions, and improved the writing of the paper in general.\\n\\n2- We provided a clearer analysis of the theoretical result we claim, i.e that the generator recovers an optimal map at the end of training. \\n\\n3- We improved our experimental section, by clarifying our contributions in low dimensional data. We also included a new set of experiments in high dimensional data, where we applied our model to unsupervised domain adaptation and show that it can obtain competitive results. We removed CIFAR-10 experiment, as we realized it does not highlight the main contribution of this paper. 
Finally, we added low dimensional experiments confirming the theoretical analysis about the evolution of the generated distribution during training.\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you for your thorough and insightful review. Below we try to answer the questions you addressed.\\n\\n* authors state that the model has \\\"a strong theoretical advantages\\\": can you provide more details about those advantages?\", \"theoretical_advantages_of_w2gan_are_the_following\": \"We can characterize the path the generator is following during training, namely the W2 geodesics. This is not the case for other GANs.\\nA practical consequence of the above is that at the end of training, the generator is recovering an OT map.\\n\\n\\n* The experiments do not show any clear advantages of the method regarding competitors\\n\\nIn high dimensional data, we show that W2GAN outperforms the Barycentric-OT approach by (Seguy et al. 2018) in the MV Gaussian to MNIST experiment. In 2D data, our method performs as well as other methods, but is simpler than the two-step approach of Barycentric-OT and has a stronger theoretical basis for recovering OT maps than WGAN and its extensions. As for the discrete methods, which achieve perfect results in low dimension, they do not scale to continuous and high-dimensional distributions. \\n\\n\\n* Table 1: why are there some points with no arrows? \\n\\nWe only visualize a fixed number (150) of mappings for the sake of clarity. We will make this clearer in our updated version.\\n\\n\\n* Table 1: W2-OT seems not to perform better: are there some other advantages (computational?) to use the method?\\n\\nPlease check the general comment which clarifies this point. \\n\\n\\n* In Figure 1, it is quite difficult to evaluate the results on a single image with no comparisons. Again, providing a strong evaluation of the method would help to strengthen the paper.\\n \\nWe will provide a direct comparison with Barycentric-OT of (Seguy et al. 2018) for the MNIST experiment in the updated version. As for CIFAR-10, please check our response to the same point raised by Reviewer-1.\"}", "{\"title\": \"Response to review - part 1\", \"comment\": \"Thank you for your thorough and insightful review. Below we try to answer the questions you addressed.\\n\\n* It is difficult to identify the original contributions of the paper\\n\\nWe took this consideration seriously. Please check the general comment in which we clarify our contributions. We will also make sure that the updated version of our paper states them clearly.\\n\\n\\n* Most results are known from the OT community\\n\\nWe agree that all the theoretical results from OT theory employed in this paper are well known in the OT community. We borrow these results to propose a new model and analyse its behaviour during training. \\n\\n\\n* The differences with the work of Seguy, 2018 is also not obvious\\n\\nThe work of (Seguy et al., 2018) proposes a two-step approach for large-scale OT maps. First, they solve the regularized Kantorovitch problem in its dual form to get an optimal *plan*, and then learn a parametric function (a neural network) to approximate the barycentric projection of this optimal plan. This is quite different from our method, where we use an adversarial approach in which the discriminator locally approximates the optimal *map* toward the target distribution to provide signal for the generator. 
What should be compared when approaching the OT map is our generator at the end of training and their model after barycentric projection. \\n\\n\\n* Most of the theoretical considerations of Section 3 is either based on unrealistic assumptions (case 1) or make vague assumptions 'if we ignore the possibly significant effect ...' that seem unjustified so far\\n\\nWe acknowledge that the current analysis is based on idealistic assumptions, but these assumptions are not exclusive to our work and, in fact, most GAN papers make similar assumptions. This is a result of the difficulty of making concrete statements about parametric functions. However, in the updated version, we try to bridge the gap between the practical case of parametric update of the generator, and the idealistic case of continuous optimization in the space of non-parametric generator distribution. For this, we use functional gradient analysis, which helps in interpreting the parameter update as an approximation of an ideal discrete update in the space of probability measures. \\n\\n\\n* why the theoretical analysis on convergence following a geodesic path in a Wasserstein space is valuable from a practical view\\n\\nFrom a practical view, the main advantage is that this allows us to predict that the generator will recover an OT map at the end of training. This is interesting because: \\n1) In domain transfer applications (e.g. unsupervised image or language translation) it would be really valuable to characterize and possibly control the mapping obtained by a generative model\\n2) Large-scale Monge maps have also many practical applications (e.g. domain adaptation). Hence, having a powerful generative model approximating a Monge map can be powerful tool in these applications.\\nAnother possible practical application of following W2 geodesics between two distribution is that it allows us to observe intermediate probability measures, i.e. computing barycenters.\\nThat being said, we believe that characterizing dynamics of generator distribution to be theoretically very interesting and could possibly lead to advances in understanding GANs in general.\"}", "{\"title\": \"Response to review - part 2\", \"comment\": \"* did not understand the final claim of the empirical evidence that other GANs also approximately following the Optimal Transport\\n\\nIn our 2D experiments, we find that WGAN_LP can find a mapping which is very close to the perfect OT generated by the discrete method. While we cannot make theoretical statements about the training dynamics in W1 GANs (because there are infinite W1 geodesics), it seems like in practice -- at least in our 2D experiments -- WGAN-LP seems to recover an OT map. Note that we observe that WGAN-GP does not work well in practice, which is in large part a result of forcing the discriminator\\u2019s gradient to be exactly 1, and hence distorting the local OT direction.\\n\\n\\n* penalization in eq. (5), the expectation is not for all x and y \\\\in R^2, but for x drawn from \\\\mu and y from \\\\nu. Same for L_2 regularization and Eq (7).\\n\\nAbout eq. (5): this is true that the expectation should be taken over the marginals from the mere definition of the entropic (L2) regularized Kantorovitch problem. We corrected that. \\nIn the context of the unregularized Kantorovitch dual, the hard inequality constraint (eq (4)) should actually happen pointwise everywhere on the euclidean space. This is the objective we really want to approach in our context. 
In order to do that in a tractable manner, we remove the hard constraint and add a corresponding penalty in the objective of the discriminator. The perfect penalty should thus involve an expectation everywhere on the space. Thus one could try enforcing this penalty by sampling between the distributions, or around the distributions, etc. This applies in particular to the penalty term in eq (7).\\n\\n\\n* Proposition 1 is mainly due to Brenier\\n\\nThat is true, and it was not our intention to claim this result as ours. We will make this clearer in the updated version.\\n\\n\\n* Eq (10) : how do you inverse sup and inf ?\\n\\nThanks for noticing this. We will make this clearer in the next version. \\n\\n\\n* when comparing to Seguy 2018, are you using an entropic or a L_2 regularization ? How do you set the regularization strength ?\\n\\nWe use the L2 penalty. We find that in practice the entropic one in the dual results in an exponential signal, and eventually leads to divergence of the discriminator. We set the regularization strength with a hyper-parameter scalar.\"}", "{\"title\": \"Response to review\", \"comment\": \"Thank you for your thorough and insightful review. Below we try to answer the questions you addressed.\\n\\n* Regarding Theorem 1 and Corollary 1: Is it is possible to reason about an imperfect generator class and undertrained discriminator, and get sufficient conditions for convergence (not necessarily exponential) ?\\n\\nThis is a very interesting point. Indeed, assuming a first bound on the difference between the gradient of our discriminator and the one of the perfect Kantorovitch potential, and a second bound on the generator update, one can compose those bounds to obtain, locally at one update, a bound on the deviation from the OT trajectory. It might be interesting then to add those bounds together (with some stochasticity on the direction of the error term) to see at the end of training how far the generated distribution is from the Monge map. In addition, an interesting as well as a hard problem is to ensure that the gradient of the regularized Kantorovitch potential converges toward the gradient of the Kantorovitch potential when the regularization term goes to zero, and how fast it does so. \\n\\n\\n* In Proposition 1, I suspect that p > 2 (see below), which makes the p=2 choice a limit case of the proposition\\n* In proposition 1, (6), use the Holder conjugate of p: ||\\\\nabla||^{1(p-1)-1} =1/||\\\\nabla||^{2-q}. Also better to understand as $q\\\\leq 2$. \\n* looking at the proof of proposition 1, I do not know how you derive the inverse gradient, but I suspect you need in fact $p>2$, which also implies $q<2$ above\\n\\nThose three remarks are linked. We can indeed use the Holder conjugate, but we do not need p>2. When p=2, the term in the denominator is equal to 1 and we actually recover the famous version of this proposition : T(x)=x-\\\\nabla \\\\phi(x). \\nYour question about the way of inverting the gradient is legitimate; we omitted the fact that we use the L2 norm to define the cost function. Using L2 allows both computing and inverting the gradient easily. The case of p=2 is actually the easiest one, because the gradient and its inverse are (a scalar times) the identity. Please note that this proposition is a well known result by (Brenier, 1991) and not our own contribution. 
We will make sure this is made more explicit in the next version of the paper.\\n\\n\\n* In the interpretation of the equation after (16), isn\\u2019t is possible to interpret the Jacobian terms as a geometric tweak for the update of G ?\\n\\nIn fact, it is really hard to understand the effect of the Jacobian term. We are not sure what you mean by geometric tweak, but this term should be thought of as a projection of the ideal update in the space of probability measures onto the space of parametrized probability measures. According to the functional gradient analysis (which will be available in the next version of the paper), this update is actually meant to be the best update in the parameter space so that the updated generated distribution is as close as possible to the ideal updated distribution. \\n\\n\\n* Cite and compare with https://arxiv.org/pdf/1710.05488.pdf\\n\\nThanks for pointing out this work. We will make sure to cite it in the updated version of our paper. That being said, we believe it is quite different from our work. Their approach devises a generative model which uses an encoder-decoder process to reduce data to a latent space, and then performs the OT with a discriminator-only method (similar to our discriminator) in this latent space. This relies on the fact that T(x)=x-\\\\nabla\\\\phi might provide a good approximation of the OT map in low dimensional spaces. Crucially, their generative model does not recover an optimal map between the two distributions in the data space, which is what our approach aims to achieve.\\n\\n\\n* W2-OT is not better than Barycentric-OT (e.g. spirals)\\n\\nAs stated in the general comment to all reviewers, the OT map approximated by the discriminator is not meant to be a competitive approach on its own. We use it to provide the correct direction for the generator to be updated during training. The generator, on the other hand, is competitive with other large-scale OT approaches (such as Barycentric-OT).\\n\\n\\n* W2GAN is not better than WGAN-LP\\n\\nWe agree that WGAN-LP seems to perform very well in practice. The main advantage of W2GAN is that it relies on the Wasserstein-2 distance, which allows for a concrete theoretical analysis of its behaviour. This is not possible with the Wasserstein-1 distance -- as explained in the comment to all reviewers.\\n \\n\\n* CIFAR-10 experiments \\n\\nThe main objective of our approach is to show that it is a competitive method for large-scale OT. The main point of this experiment is to show that our method can also be a reasonable GAN-based generative model, but it is not meant to show that we can achieve state-of-the-art results with it. We agree that this was not clear from our submission, and we will try to emphasize that in our updated one.\"}", "{\"title\": \"General response to all reviewers\", \"comment\": \"First, we would like to thank all reviewers for their feedback and thoughtful comments. We acknowledge that the submitted version of the paper had many limitations in terms of clarity of the message and writing. 
We will fix this in the next version which will plan to submit by Monday.\", \"However, we take this opportunity to emphasize the main contributions of our paper:\", \"We introduce W2GAN, a GAN-based model which can compute optimal transport map in large-scale settings.\", \"We provide a theoretical analysis of training dynamics of the generator of W2GAN, which results in showing that the generator recovers the optimal transport map between the initial generator distribution and the real target distribution. Our analysis relies on two facts: the discriminator locally approximates the optimal transport map, and that it provides signal for the generator which allows it to follow the W2 geodesics during training.\", \"We provide empirical evidence that the generator recovers the optimal transport map between two distributions in both low-dimensional settings and high dimensional settings (MV Gaussians to MNIST).\"], \"we_also_want_to_address_general_concerns_raised_by_the_reviewers\": [\"The discriminator (or W2-OT) is not meant to be a competitive way to compute a Monge map. That was not clear in the first version of the paper. The discriminator is a real function and is not explicitly trained to reproduce a Monge map. What is important is that its gradient orients the generator toward the optimal map. Experiments in Table 1 should be seen as a confirmation of this theoretical fact.\", \"We do not aim to introduce a state-of-the-art GAN model. We regard our model as a principled method for computing OT map, which can scale for high-dimensional data. This is why we do not focus on achieving state-of-the-art results on CIFAR10. The main objective of this result was to show that our model, as opposed to other large-scale OT approaches, can achieve performance comparable to GAN in high-dimensional datasets.\", \"Why use W2 instead of W1?\", \"Our theoretical argument that the generator recovers an optimal map at the end of training relies on analysing the path that it is following during the training. In the W2 case, this path is the unique Wasserstein 2 geodesic between two distributions. Hence, at the end of training, the generator recovers a Monge map. In the W1 case, it might be --although not in an obvious way-- that the generator follows a Wasserstein 1 geodesic, but those geodesics can be infinite. So checking that they correspond to a certain Monge map is not guaranteed, neither is the uniqueness of such map.\"]}", "{\"title\": \"Interesting approach to OT GAN for Wasserstein distances with regularised Kantorovitch duals\", \"review\": [\"pros\", \"formal approach to the problem and a clear understanding of what is missing (Section 6.6); I appreciated Section 3 at large in particular.\", \"I like Theorem 1 and Corollary 1. Is it is possible to reason about an imperfect generator class and undertrained discriminator, and get sufficient conditions for convergence (not necessarily exponential) ?\", \"cons\", \"In Proposition 1, I suspect that p > 2 (see below), which makes the p=2 choice a limit case of the proposition.\", \"The paper should have cited the paper https://arxiv.org/pdf/1710.05488.pdf which goes along similar lines in its Section 3 and make proper comparisons.\", \"experimental results do not do a great favour to the technique proposed: in Table 1, W2-OT is not better than Barycentric-OT (see spiral); in Table 2, W2GAN is not better than WGAN-LP; Figure 1-a is maybe the only Figure with a clearcut advantage. 
However, the CIFAR examples in Figure 1b look quite bad after zooming. Do the authors have more experiments and comparisons on images ?\"], \"detail\": [\"In proposition 1, (6), use the Holder conjugate of p: ||\\\\nabla||^{1(p-1)-1} =1/||\\\\nabla||^{2-q}. Also better to understand as $q\\\\leq 2$.\", \"looking at the proof of proposition 1, I do not know how you derive the inverse gradient, but I suspect you need in fact $p>2$, which also implies $q<2$ above.\", \"Sentence after (10) grammatically incorrect\", \"In the interpretation of the equation after (16), isn\\u2019t is possible to interpret the Jacobian terms as a geometric tweak for the update of G ?\", \"Lots of mistakes in references: Mistake in the first ref in references, many @JOURNAL/CONF titles do not appear.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Original contribution unclear\", \"review\": \"The paper W2GAN describes a method for training GAN and computing an optimal transport (OT) map\\nbetween distributions. As far as I can tell, it is difficult to identify the original contributions\\nof the paper. Most results are known from the OT community. The differences with the work of Seguy, 2018\\nis also not obvious. I encourage the authors to establish more clearly the differences of their work\\nwith this last reference. Most of the theoretical considerations of Section 3 is either based on \\nunrealistic assumptions (case 1) or make vague assumptions 'if we ignore the possibly significant effect ...'\\nthat seem unjustified so far. Experimental results do not show evidences of superiority wrt. existing works. \\nAll in all I would recommend the authors to better focus on the original contribution of their works wrt. \\nstate-of-the-art and explain why the theoretical analysis on convergence following a geodesic path in a \\nWasserstein space is valuable from a practical view. Finally, I did not understand the final claim of the\", \"abstract\": \"'Perhaps surprisingly, we also provide empirical evidence that other GANs also approximately following\\nthe Optimal Transport.'. What are those empirical evidences ? It seems that this claim is not supported somewhere \\nelse in the paper.\", \"minor_remarks\": [\"regarding the penalization in eq. (5), the expectation is not for all x and y \\\\in R^2, but for x drawn from \\\\mu and y from \\\\nu.\", \"Same for L_2 regularization\", \"Proposition 1 is mainly due to Brenier\", \"Brenier, Y. (1991). Polar factorization and monotone rearrangement of vector\\u2010valued functions. Communications on pure and applied mathematics, 44(4), 375-417.\", \"from Eq (7), you should give precisely over what the expectations are taken.\", \"Eq (10) : how do you inverse sup and inf ?\", \"when comparing to Seguy 2018, are you using an entropic or a L_2 regularization ? How do you set the regularization strength ?\", \"where is Figure 2.a described in section 4.2 ?\"], \"related_works\": \"- what is reference (Alexandre, 2018) ? \\n - regarding applications of OT to domain adaptation, there are several references on the subject. \\n See for instance \\nCourty, N., Flamary, R., Tuia, D., & Rakotomamonjy, A. (2017). Optimal transport for domain adaptation. IEEE transactions on pattern analysis and machine intelligence, 39(9), 1853-1865.\\nor \\nDamodaran, B. B., Kellenberger, B., Flamary, R., Tuia, D., & Courty, N. (2018). 
DeepJDOT: Deep Joint distribution optimal transport for unsupervised domain adaptation. ECCV \\nfor a deep variant.\\n - Reference Seguy 2017 and 2018 are the same and should be fused. The corresponding paper\\n was published at ICLR 2018\\n Regarding this last reference, the claim 'As far as we know, it is the first demonstration of a GAN achieving reasonable generative modeling results and an approximation of the optimal transport map between two continuous distributions.' should maybe be lowered ?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"GANs for OT and OT for GANs\", \"review\": \"The paper proposes W2GAN, a GAN where the objective function relies on a W2 distance. Authors state that the discriminator approximate the W2 distance, and that the generator follows an OT map.\\nWhile I did not see any flaws in the development, the paper is quite bushy and hard to follow. Some questions are still open, for instance in the end of the experiments, authors state that the model has \\\"a strong theoretical advantages\\\": can you provide more details about those advantages?\\nThe experiments do not show any clear advantages of the method regarding competitors. Regarding Table 1, why are there some points with no arrows? W2-OT seems not to perform better: are there some other advantages (computational?) to use the method? In Figure 1, it is quite difficult to evaluate the results on a single image with no comparisons. Again, providing a strong evaluation of the method would help to strengthen the paper. \\n\\nThere are some weird statements and typos mistakes that should be corrected. For example in the first 2 pages: (abstract) \\\"other GANs also approximately following the Optimal Transport\\\", (Introduction) \\\"An optimal map has many important implications such as computing barycenters\\\", \\\"high-dimenisonal\\\", \\\"generator designed\\\", \\\"consideral\\\", \\\"although the theoretical arguments do not scale immediately\\\".\\nThe layout of the bibliography should be deeply reviewed.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
ByftGnR9KX
FlowQA: Grasping Flow in History for Conversational Machine Comprehension
[ "Hsin-Yuan Huang", "Eunsol Choi", "Wen-tau Yih" ]
Conversational machine comprehension requires a deep understanding of the conversation history. To enable traditional, single-turn models to encode the history comprehensively, we introduce Flow, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure. Compared to shallow approaches that concatenate previous questions/answers as input, Flow integrates the latent semantics of the conversation history more deeply. Our model, FlowQA, shows superior performance on two recently proposed conversational challenges (+7.2% F1 on CoQA and +4.0% on QuAC). The effectiveness of Flow also shows in other tasks. By reducing sequential instruction understanding to conversational machine comprehension, FlowQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy.
[ "Machine Comprehension", "Conversational Agent", "Natural Language Processing", "Deep Learning" ]
https://openreview.net/pdf?id=ByftGnR9KX
https://openreview.net/forum?id=ByftGnR9KX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJgw8oT8QV", "S1gsA8yrmV", "SkglhHAEAm", "rJx8PrR4CX", "SJeOXH0VAm", "SkepZq6VR7", "SJx9ptTE07", "HJxVApCX6m", "rJxYmHj02Q", "HJeLfagi3Q", "SyxpAA4c2Q", "SylB4hN5hQ" ], "note_type": [ "official_comment", "comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1548307278881, 1548183250971, 1542935976063, 1542935901521, 1542935839733, 1542932997216, 1542932930144, 1541823948435, 1541481760720, 1541242125749, 1541193428930, 1541192748592 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1287/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1287/Authors" ], [ "ICLR.cc/2019/Conference/Paper1287/Authors" ], [ "ICLR.cc/2019/Conference/Paper1287/Authors" ], [ "ICLR.cc/2019/Conference/Paper1287/Authors" ], [ "ICLR.cc/2019/Conference/Paper1287/Authors" ], [ "ICLR.cc/2019/Conference/Paper1287/Authors" ], [ "ICLR.cc/2019/Conference/Paper1287/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1287/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1287/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1287/AnonReviewer2" ] ], "structured_content_str": [ "{\"title\": \"Re: Clarification on SCONE\", \"comment\": \"Hi!\\n\\nYes, we are following their settings in both training and testing.\\n\\nFrom the paper description, they are testing on final world state without access to intermediate answers. But during training, they have access to the intermediate answers. For example, in (Suhr and Artzi), they wrote \\\"During training, we create an example for each instruction.\\\" in Section 2, Learning.\"}", "{\"comment\": \"This is a nice work on conversational QA. The authors compare with previous works on SCONE in Table 5. However, some of the previous models (Long et al., Guu et al., Suhr and Artzi) cited in Table 5 assumes a harder problem setting of learning with only the final word state, without access to intermediate answers (i.e., the gold answer_i for i = 1, 2, ..., N-1). It would be great if the authors could clarify this :)\", \"title\": \"Clarification on SCONE\"}", "{\"title\": \"Response to \\\"Some questions on the experiments\\\"\", \"comment\": \"Re: Some questions on the experiments\\n\\n1) Computational efficiency compared to single-turn MC: Without our alternating parallel processing structure, training time will be multiplied by the number of QA pairs in a dialog. After implementing this mechanism, training FlowQA takes roughly 1.5x to 2x of the time training a single-turn model in each epoch.\\n\\n2) Ablation on question-specific context representation: The features mentioned (em, g) are attention vectors obtained from the question. This is the first attention on the question (there are two attentions on the question, see Figure 4). If c is ablated, we are expecting the model to select an answer span from the context without seeing the context. In this case, the model would not work at all. The F1 scores for CoQA/QuAC without exact match feature (em), and attended question embedding (g) are reported below.\", \"flowqa\": \"76.0 / 64.6\\nFlowQA (-em): 75.4 / 62.3\\nFlowQA (-g): 75.5 / 64.5\\n\\n3) Improvements from encoding N answer spans: We are using the same setting for marking the previous N-answers as Choi et al. [1] and Yatskar et al. [2]. We provide a comparison below. 
The improvement was the biggest (7.2 F1) when marking no previous answer (0-Ans), as FlowQA incorporates history through using the intermediate representation while BiDAF++ had no access. The improvement is less pronounced but still significant (4.0 F1) when marking many previous answers. \\n (FlowQA vs. BiDAF++)\", \"0_ans\": \"59.0 vs. 51.8\", \"1_ans\": \"64.2 vs. 59.9\", \"2_ans\": \"64.6 vs. 60.6\", \"all_ans\": \"64.6 vs. N/A (3-Ans: 59.5)\\n\\n4) Applying FLOW to other tasks: The Flow mechanism is essentially performing a large RNN update on a big memory state, which contains O(Nd) hidden units, N is the length of the passage/context and d is the hidden size per words. Due to the enormous hidden unit size, the big memory state can store all the details of the full passage/context and to operate on this large memory state. Because of the design of the Flow mechanism, we can operate on this enormous memory state efficiently. We believe the Flow mechanism can be useful for problems that require a large amount of memory, beyond the conversational MC and sequential semantic parsing. However, further investigation is needed to verify this claim.\\n\\n[1] Choi et al. QuAC: Question Answering in Context.\\n[2] Yatskar et al. A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC.\"}", "{\"title\": \"Response to \\\"Some questions on the experiments\\\"\", \"comment\": \"This comment is moved to be below the main response.\"}", "{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for detailed suggestions and feedback.\", \"re\": \"Clarity in section 3\\nThank you for your suggestions for improving the clarity of the paper. Originally, we put all the detail in Section 3 for completeness. We have now moved the parts from existing approaches to Appendix C. Computational efficiency is one of our main practical concerns since the naive implementation is really slow. Below are the experimental results on this issue.\\n\\nSpeedup over the naive implementation (in terms of time per epoch)\", \"coqa\": \"8.1x\", \"quac\": \"4.2x\\nThe prediction performance after each epoch is the same, so the time to complete the training is proportional to this speedup. Since this result is quite succinct, originally we only mentioned in the main text. We have now added this result to the experiment section.\\n\\nFigure 2 and 3 visualizes how the speedup shown above is achieved and how the Flow component is integrated into an existing single-turn model. \\n\\n[1] Choi et al. QuAC: Question Answering in Context.\\n[2] Yatskar et al. A Qualitative Comparison of CoQA, SQuAD 2.0 and QuAC.\"}", "{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for the helpful comments and clarification questions. We have added visualization for the behavior of the Flow mechanism (Appendix A) and analyzed questions where FlowQA answered correctly while previous approaches failed (Appendix B).\", \"re\": \"Question about using partial history\", \"our_model_incorporates_the_conversation_history_in_two_ways\": \"(1) marking the previous answer locations in the evidence document as in prior baselines. (2) incorporating implicit representations generated to answer the most recent question. For the marking in the document (1), our ablation study in Table 3 shows the result for feeding in 0, 1, 2, and the full history. 
For incorporating implicit representations (2), our model only takes the intermediate representation generated for the most recent question (although the representation of the most recent question is based on its previous representation). The ablation study for explicit marking suggests questions often do not have a long-range dialogue dependency (most questions are related to only the preceding one or two questions).\\n\\n[1] Choi et al. QuAC: Question Answering in Context.\\n[2] Reddy et al. CoQA: A conversational question answering challenge.\"}", "{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you so much for your review.\\n\\nThe ablation study on the reasoning layers can be found below (we count the number of context integration layers). The numbers below are the F1 scores for CoQA / QuAC, respectively. We found our original hyperparameter (4 layers) was the most effective one. \\n\\n# Integration layers = 3: 75.5 / 64.2\\n# Integration layers = 4: 76.0 / 64.6 (original result)\\n# Integration layers = 5: 75.3 / 64.1\"}", "{\"title\": \"Re\", \"comment\": \"Hi!\\n\\nEven though I have already replied to you in the email from you, I decided to repost it on OpenReview in case other people have similar questions about the details.\\n\\nFor the unknown/yes/no answers, you can find the approach we took in Page 5 of the FlowQA paper.\\n\\nWe did not append these three words to the end of the context. We used Eq. (17) with different W to compute a score for unknown/yes/no/span. We train this using the ground truth of whether the answer is unknown/yes/no/span. Then for questions with the answer being a span, we train the span prediction. So you can view this as a two-layer prediction. During the inference, we predict if the answer is unknown/yes/no/span. If it is a span, we predict the span.\\n\\nHyperparameters can be found in Appendix A.1. We did not observe severe overfitting. After the maximum validation accuracy is reached, the validation accuracy did not drop severely afterward but just fluctuate a bit. We did notice that some details are not mentioned in the paper, which will be added during the current revision process, and are given below.\\n\\n================\\n\\nWe used a hidden size of 125 for all RNNs, so the hidden vector size output from Bidirectional RNN is 250. The dropout rate is high (0.4) and we use dropout after embedding layers in addition to another dropout before feeding into LSTM layers.\\n\\nThere is a rule of thumb that we employed in fully-aware attention, which follows from the FusionNet paper. In the FusionNet paper, we found that using the stated S(x, y) form is much better than other choices. But it would restrict the vectors x and y to have the same length and similar semantics (since they used the same U to map to a smaller dimension). The rule of thumb we did is to always take the intersecting dimensions of the two vectors.\\n\\nFor example, in equation (12), for the vector y, there are no such dimensions as the output of Flow operation. Therefore in computing S(x, y), we remove the dimensions corresponding to the output of Flow operation in x. Now, both x and y has the same dimensions and these dimensions correspond to a similar semantics.\\n\\nAnother potential issue to remind you about is that U actually maps the concatenation of hidden vectors to a smaller dimension (we set this to be the same as hidden vector size, i.e., 250) as described in the FusionNet paper. 
This is crucial to prevent fully-aware attention from overfitting.\\n\\nHope you will do well in the challenge! We are also planning to release our FlowQA code after these busy weeks of paper revision.\"}", "{\"metareview\": \"Interesting and novel approach of modeling context (mainly external documents with information about the conversation content) for the conversational question answering task, demonstrating significant improvements on the newly released conversational QA datasets.\\nThe first version of the paper was weaker on motivation and lacked a clearer presentation of the approach as mentioned by the reviewers, but the paper was updated as explained in the responses to the reviewers.\\nThe ablation studies are useful in demonstration of the proposed FLOW approach.\\nA question still remains after the reviews (this was not raised by the reviewers): How does the approach perform in comparison to the state of the art for the single question and answer tasks? If each question was asked in isolation, would it still be the best?\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"novel modeling of context for conversational QA\"}", "{\"title\": \"First model achieving nontrivial improvement on CoQA and QuAC datasets.\", \"review\": \"The paper presents a new model FlowQA for conversation reading comprehension. Compared with the previous work on single-turn reading comprehension, the idea in this paper differs primarily in that it alternates between the context integration and the question flow in parallel. The parallelism enables the model to be trained 5 to 10 times faster. Then this process is formulated as layers of a neural network that are further stacked multiple times. Besides, the unanswerable question is predicted with additional trainable parameters. Empirical studies confirm FlowQA works well on a bunch of datasets. For example, it achieves new state-of-the-art results on two QA datasets, i.e., CoQA and QuAC, and outperforms the best models on all domains in SCONE. Ablation studies also indicates the importance of the concept Flow.\\n\\nAlthough the idea in the paper is straightforward (it is not difficult to derive the model based on the previous works), this work is by far the first that achieves nontrivial improvement over CoQA and QuAC. Hence I think it should be accepted.\\n\\nCan you conduct ablation studies on the number of Reasoning layers (Figure 3) in FlowQA? I am quite curious if a deeper/shallower model would help.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Impressive experimental results but lack of clarity\", \"review\": \"The paper proposes a method to model the flow of context in multi-turn machine comprehension (MC) tasks. The proposed model achieves amazing improvements in the two recent conversational MC tasks as well as an instruction understanding task. I am very impressed by the improvements and the ablation test that actually shows the effectiveness of the FLOW mechanism they proposed.\\n\\nHowever, this paper has a lack of clarity (especially, Section 3) which makes it difficult to follow and easy to lose the major contribution points of the work. 
I summarized the weaknesses as follows:\\n\\n# lack of motivation and its validation\\nThe paper should have more motivational questions at the beginning of why such flow information is necessary for the task. Authors already mentioned about some of it in Figure 1 and here: \\u201csuch as phrases and facts in the context, for answering the previous questions, and hence provide additional clues on what the current conversation is revolving around\\u201d. However, the improvement of absolute scores in the Experiment section didn\\u2019t provide anything related to the motivation they mentioned. Have you actually found the real examples in the testing set that are correctly predicted by the FLOW model but not by the baseline? Are they actually referring to the \\u201cphrases and facts in the context\\u201d, \\u201cadditional clues on what the current conversation is revolving around\\u201d? Another simple test authors can try is to show the attention between the context in a flow and question and see whether appropriate clues are actually activated given the question. \\n\\n# unclear definition of \\u201cflow\\u201d\\nThe term \\u201cflow\\u201d is actually little over-toned in my opinion. Initially, I thought that flow is a sequence of latent information in a dialog (i.e., question-answer) but it turns to be a sequence of the context of the passage. The term \\u201cflow\\u201d is more likely a sequence of latent and hierarchical movement of the information in my opinion. What is your exact definition of \\u201cflow\\u201d here? Do you believe that the proposed architecture (i.e., RNN sequence of context) appropriately take into account that? RNN sequence of the passage context actually means your attention over the passage given the question in turn, right? If yes, it shouldn\\u2019t be called a flow. \\n\\n# Lack of clarity in Section 3\\nDifferent points of contributions are mixed together in Section 3 by themselves or with other techniques proposed by others. For example, the authors mention the computational efficiency of their alternating structure in Figure 2 compared to sequential implementation. However, none of the experiment validates its efficiency. If the computational efficiency is not your major point, Figure 2 and 3 are actually unnecessary but rather they should be briefly mentioned in the implementation details in the later section. Also, are Figure 2 and 3 really necessary? \\n\\nSection 3.1 and 3.3.1 are indeed very difficult to parse: This is mainly because authors like to introduce a new concept of \\u201cflow\\u201d but actually, it\\u2019s nothing more than a thread of a context in dialog turns. This makes the whole points very hyped up and over-toned like proposing a new \\u201cconcept\\u201d. Also, the section introduces so many new terms (\\u201ccontext integration\\u201d. \\u201cFlow\\u201d, \\u201cintegration layers\\u201d, \\u201cconversational flow\\u201d, \\u201cintegration-flow\\u201d) without clear definition and example. The name itself looks not intuitive to me, too. I highly recommend authors provide a high-level description of the \\u201cflow\\u201d mechanism at first and then describe why/how it works without any technical terms. If you can provide a single example where \\u201cflow\\u201d can help with, it would be nicer to follow it.\\n\\n# Some questions on the experiment\\nThe FLOW method seems to have much more computation than single-turn baselines (i.e., BiDAF). 
Any comparison on computational cost?\\n\\nIn Table 3, most of the improvements for QuAC come from the encoding N answer spans to the context embeddings (N-ans). Did you also compare with (Yatskar, 2018) with the same setting as N-ans? \\n\\nI would be curious to see for each context representation (c), which of the feature(e.g., c, em, g) affect the improvement the most? Any ablation on this?\\n\\nThe major and the most contribution of the model is probably the RNN of the context representations and concatenation of the context and question at turn in Equation (4). For example, have you tested whether simple entity matching or coreference links over the question thread can help the task in some sense? \\n\\nLastly for the model design, which part of the proposed method could be general enough to other tasks? Is the proposed method task-specific so only applicable to conversational MC tasks or restricted sequential semantic parsing tasks?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Strong empirical results and well written\", \"review\": \"In this paper, authors proposed a so-called FLOWQA for conversational question answering (CoQA). Comparing with machine reading comprehension (MRC), CoQA includes a conversation history. Thus, FLOWQA makes use of this property of CoQA and adds an additional encoder to handle this. It also includes one classifier to handle with no-answerable questions.\", \"pros\": \"The idea is pretty straightforward which makes use of the unique property of CoQA.\\n\\nResults are strong, e.g., +7.2 improvement over current state-of-the-art on the CoQA dataset. \\n\\nThe paper is well written.\", \"cons\": \"It is lack of detailed analysis how the conversation history affects results and what types of questions the proposed model are handled well.\\n\\nLimited novelty. The model is very similar to FusionNet (Huang et al, 2018) with an extra history encoder and a no-answerable classifier.\", \"questions\": \"One of simple baseline is to treat this as a MRC task by combining the conversation history with documents. Do you have this result?\\n\\nThe model uses the full history. Have you tried partial history? What's the performance?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
HJlYzhR9tm
Language Modeling with Graph Temporal Convolutional Networks
[ "Hongyin Luo", "Yichen Li", "Jie Fu", "James Glass" ]
Recently, there have been some attempts to use non-recurrent neural models for language modeling. However, a noticeable performance gap still remains. We propose a non-recurrent neural language model, dubbed graph temporal convolutional network (GTCN), that relies on graph neural network blocks and convolution operations. While the standard recurrent neural network language models encode sentences sequentially without modeling higher-level structural information, our model regards sentences as graphs and processes input words within a message propagation framework, aiming to learn better syntactic information by inferring skip-word connections. Specifically, the graph network blocks operate in parallel and learn the underlying graph structures in sentences without any additional annotation pertaining to structure knowledge. Experiments demonstrate that the model without recurrence can achieve comparable perplexity results in language modeling tasks and successfully learn syntactic information.
[ "Graph Neural Network", "Language Modeling", "Convolution" ]
https://openreview.net/pdf?id=HJlYzhR9tm
https://openreview.net/forum?id=HJlYzhR9tm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "rylaJFB4xV", "HJlY7hKORX", "S1lPc9F_AQ", "BkgE6dYO0Q", "Syl2RaaYnm", "SyemNLcO2X", "BJxgClFO27" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544997092550, 1543179297289, 1543178894949, 1543178427974, 1541164499785, 1541084714885, 1541079240201 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1286/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1286/Authors" ], [ "ICLR.cc/2019/Conference/Paper1286/Authors" ], [ "ICLR.cc/2019/Conference/Paper1286/Authors" ], [ "ICLR.cc/2019/Conference/Paper1286/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1286/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1286/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"Though the overall direction is interesting, the reviewers are in consensus that the work is not ready for publication (better / larger scale evaluation is needed, comparison with other non-autoregressive architectures should be provided, esp Transformer as there is a close relation between the methods).\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"interesting direction but not ready for publication\"}", "{\"title\": \"CNNs and datasets\", \"comment\": \"Thanks for the insightful comments. CNNs process sentences by gathering larger and larger contexts in each layer, but do not explicitly model the relations among different words. That is why we said they are not easily interpretable and do not explicitly learn the structures of sentences. Our model explicitly learns the relations among words and we can interpret the relations with attention weights.\\n\\nAbout datasets - we did not try to claim that our model is the state-of-the-art LM. Testing the model on the PTB dataset, we are trying to make the points,\\n\\n1. The GTCN model, which is not recurrent, is as good as many recurrent neural models\\n2. The relations among words can be explicitly modeled without supervision\\n\\nWe believe your suggestions on using larger datasets is valuable. We will test our model on larger corpora in our future works.\"}", "{\"title\": \"About experiments\", \"comment\": \"Thanks for the review and comments.\\n\\nFirstly, we did not compare the time complexity since the LSTM model in pytorch was not implemented in python, which made it difficult to compare. In the revised version, we will implement a python LSTM and compare the time it takes to train all the different models.\\n\\nSecondly, thank you for your suggestions on ablation studies. We chose the window size by testing the model on the dev set.\\n\\nAbout the parsing structure - our model is not able to capture the exact parsing trees. With the example in the paper, we attempted to qualitatively indicate that the attentions revealed some syntactic knowledge. We the learned attention, it still needs careful design of the algorithm to find a best parsing, which will be included in future works.\"}", "{\"title\": \"Summarization of related previous works\", \"comment\": \"Thanks for the careful review and detailed comments. In this comment I majorly explain the relations between our model and the transformer model (Vaswani et al. 2017).\\n\\nWe are aware of the transformer model and related works in machine translation, but I did not find any previous work that directly applied transformers on language modeling on the PTB dataset. 
In terms of the model itself, our model applied a gated self-attention to \\\"simulate\\\" LSTMs. The architecture was purely motivated by LSTMs. On the other hand, we believe the most significant difference between our model and transformer is that transformers apply multi-head attentions, and the GTCN only uses one attention. The multi-head attention mechanism brings a huge number of parameters, compared with the GTCN model.\\n\\nAbout Eq. 13-14 - we used tied output embeddings (Ofir and Wolf, 2016). The output hidden state of the Eq 14 is used to be compared with the word embeddings of the decoders. That's why we said the model predicted the embedding of the next word.\\n\\nEqs. 6 and 7 included typos. Thank you for reminding!\\n\\nIn Eq. 9, \\\\mathbf{W}^p is part of the parameters, and i-j means a line of it. I will make more explanations in the revised version.\\n\\nEq. 5 stands for a normalization method proposed in a related work. We did not use this in GTCN, but it could be. In practice we use softmax attention to normalize the context information.\\n\\nIn Eq. 2, f_{t+1} does rely on f_t. We put the equation here to indicate in LSTM language models, how a context word influences a target word by calculating a weight between them. This motivates the proposal of the GTCN model.\\n\\nIn terms of time complexity, we believe that processing sentences in parallel is more efficient than recurrent models. We did not compare the exact time of the GTCN and LSTM because the LSTM in pytorch is not implemented in Python. We will try to implement python LSTM and compare the time complexities of both models.\\n\\nGTCN is not really the state-of-the-art LM, but in our comparisons, it performs best among the non-recurrent models.\\n\\nAbout the minor questions,\\n\\nWe had a problem in templates because we used the geometry package and messed everything. We will correct this in the revised version.\\n\\nIn Eq. 5, X stands for a matrix that includes all nodes, while x means the embedding of one node.\\nAnd we are using a 10K vocabulary corpus, instead of 10K words.\"}", "{\"title\": \"A well-motivated work, but relations to prior works need to be addressed\", \"review\": \"This paper draws inspiration from recent works on graph convolutional networks and proposes GTCN, a convolutional architecture for language modeling. The key intuition is to treat sentences as (potentially densely-connected) graphs over tokens, instead of sequences as in many RNN-based language models. The model then, when predicting a token, summarizes previous tokens using attention mechanism as context. Empirical evaluation on word-level language modeling on Penn Treebank shows competitive performance.\\n\\nThe idea of this work appears reasonable and well-motivated to me. But the connections to previous works, especially those based on self-attention, should be clearly addressed. Further, writing can be improved, and I would encourage a thorough revision since there are typos making the paper a bit hard to follow.\\nLast but not least, I find several of the claims not very-well supported. Please see details below.\", \"pros\": [\"Well-motivated intuition treating language as structured.\"], \"cons\": [\"Writing can be improved.\", \"Missing discussion of existing works.\"], \"details\": [\"Based on my understanding of Eqs. 6--11, the proposed GTCN seems to be a gated version (also equipped with window-2 convolutions) of the self-attention mechanism. Could the authors comment on how GTCN relates to Vaswani et al. 
(2017), Salton et al. (2017), among others? Also, empirical comparisons to self-attention based language models might be necessary.\", \"I was confused by Eqs. 13--14 and the text around it. Doesn't one need some kind of classifier (e.g., an MLP) to predict x_{t+1}? Why are these two equations predicting word embedding?\", \"The start of Section 4.1. There seems to be a typo here. I'm assuming the two vectors are `$\\\\mathbf{v}$ and $\\\\mathbf{q}$` here, as in Eqs. 6 and 7.\", \"More clarification on Eq. 9 might be necessary. Is \\\\mathbf{W}^p part of the parameters? I'm guessing \\\\mathbf{W}_{i-j}^p selects a row from the matrix, since there is a dot product outside.\", \"Can the authors clarify Eq. 5? I'm not sure how to interpret it, and it seems not used anywhere else.\", \"Eq. 2 is a bit misleading: it might give the impression that f_{t+1} does not depend on f_t (and so forth), which is not the case for LSTM.\", \"It would be interesting to be how GTCN compare to other models in efficiency, since the paper mentions parallel computation many times.\", \"Contribution.2: GTCN is not really the state-of-the-art model on LM.\", \"Comparison to RNNG: RNNG treats each sentence as a separate sequence, in contrast to most cited works in Table 1, where the whole training (eval) set is treated as a single sequence, and truncate the length when applying BPTT. And according to the second paragraph of Section 5.1, this work follows the latter. To the best of my knowledge, such a difference does have an effect on the perplexity metric. In this sense, RNNG is not comparable to the rest in Table 1. It is perhaps fine to still put it in the table, but please clarify it in the text.\"], \"minors\": [\"Why is the margins above equations seem larger. Can the authors make sure the template is right?\", \"Around Eq.5: why is \\\\mathbf{X} is capitalized in the eq, but not in the text? Are they the same thing?\", \"Section 4.3: the dependence of attention weights $a$ is not reflected in the notation.\", \"Section 5.1: I think what it means here is a `10K` vocabulary, instead of a 10K word tiny corpora.\", \"References\", \"Vaswani et al.. 2017. Attention is All You Need. In Proc. of NIPS.\", \"Salton et al.. 2017. Attentive Language Models.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Interesting but requires more experiments\", \"review\": \"This work proposes a CNN based language model based on graph neural networks. Basic idea is to compute adjacency matrix for an entire sentence in parallel for faster computation. Empirical results show probably the best performance among CNN approaches but still lags behind the best RNNs.\", \"pros\": [\"A new network based on graph neural networks.\"], \"cons\": [\"The proposed model needs to recompute attention probabilities for each step and it might incur latencies. I'd like to know how slow it is when compared with other CNN approaches and how fast it is when compared with other RNNs.\", \"Lacking experiments. This paper shows only a single table comparing other approaches, and does not present any ablation studies. Note that section 4.4 mentions some details, but does not show any numbers to justify the claim, e.g., why choosing the window size of 10, 20, 30, 40.\", \"This paper claims that the learned model captures the ground truth parse tree in section 4.3. 
However, this work simply picks a single example in section 5.3 to justify the claim. I'd recommend the author to run a parser to see if the proposed attention mechanism actually capture the ground truth parse trees or not.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"official review\", \"review\": \"The paper applies graph convolutional networks to Penn Treebank language modeling and provides analysis on the attention weight patterns it uses.\", \"clarity\": \"the paper is very clearly written!\\n\\nThe introduction states that existing CNN language models are \\\"not easily interpretable in that they do not explicitly learn the structures of sentences\\\". Why is this? The model in this paper computes attention values which is interpreted by the authors as corresponding to the structure of the sentence but there are equivalent means to trace back feature computation in other network topologies as well.\\n\\nMy biggest criticism is that the evaluation is done on a very small language modeling benchmark which is clearly out of date. Penn Treebank is the CIFAR10 of language modeling and any claims on this dataset about language modeling are highly doubtful. Models today have tens and hundreds of millions of parameters and training them on 1M words is simply a regularization exercise that does not enable a meaningful comparison of architectures.\\n\\nThe claims in the paper could be significantly strengthened by reporting results on at least a mid-size dataset such as WikiText-103, or better even, the One Billion Word benchmark.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
ByetGn0cYX
Probabilistic Planning with Sequential Monte Carlo methods
[ "Alexandre Piche", "Valentin Thomas", "Cyril Ibrahim", "Yoshua Bengio", "Chris Pal" ]
In this work, we propose a novel formulation of planning which views it as a probabilistic inference problem over future optimal trajectories. This enables us to use sampling methods, and thus, tackle planning in continuous domains using a fixed computational budget. We design a new algorithm, Sequential Monte Carlo Planning, by leveraging classical methods in Sequential Monte Carlo and Bayesian smoothing in the context of control as inference. Furthermore, we show that Sequential Monte Carlo Planning can capture multimodal policies and can quickly learn continuous control tasks.
[ "control as inference", "probabilistic planning", "sequential monte carlo", "model based reinforcement learning" ]
https://openreview.net/pdf?id=ByetGn0cYX
https://openreview.net/forum?id=ByetGn0cYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1eDdcG4I4", "r1x1llc5bV", "rJgIdijLl4", "B1xd7Ks8gE", "HJe9yYuVlN", "BJeOZeCll4", "SJgSEm1yg4", "BJebbLfA14", "Sklapa-0kN", "S1xzTol01E", "rJl3_Zc2y4", "HkeszuKYkE", "BJlqzpD_14", "Bkgqc0IOk4", "rylLxlrqR7", "ryeUz24c0m", "H1laAFN9CQ", "B1x5aFNqCX", "ByxFxKEqRQ", "rJerZg4c0m", "SJgI4p6O2m", "HJejH5YDnm", "SylcRv_DhQ" ], "note_type": [ "comment", "official_comment", "official_comment", "official_comment", "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1551276654883, 1546457063216, 1545153389633, 1545152799837, 1545009378284, 1544769535817, 1544643373019, 1544590840763, 1544588741108, 1544584121588, 1544491380258, 1544292370936, 1544219921814, 1544216210389, 1543290862413, 1543289869956, 1543289301496, 1543289281718, 1543289072599, 1543286781083, 1541098797750, 1541016131401, 1541011409673 ], "note_signatures": [ [ "~Nando_de_Freitas1" ], [ "ICLR.cc/2019/Conference/Paper1285/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "(anonymous)" ], [ "ICLR.cc/2019/Conference/Paper1285/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1285/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1285/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/Authors" ], [ "ICLR.cc/2019/Conference/Paper1285/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1285/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1285/AnonReviewer1" ] ], "structured_content_str": [ "{\"comment\": \"The following are related references on Sequential Monte Carlo methods for planning, and other Monte Carlo and EM methods for planning, which the authors and readers might find useful to know about:\\n\\nHoffman, M. W., Doucet, A., de Freitas, N., & Jasra, A. (2007). On solving general state-space sequential decision problems using inference algorithms (No. TR-2007-04). University of British Columbia, Computer Science.\", \"http\": \"//mlg.eng.cam.ac.uk/hoffmanm/papers/hoffman:2007a.ps\\n\\nHoffman, M. W., Kueck, H., de Freitas, N., & Doucet, A. (2009). New inference strategies for solving Markov decision processes using reversible jump MCMC. In Uncertainty in Artificial Intelligence (pp. 223\\u2013231). \\n\\nHoffman, M. W., de Freitas, N., Doucet, A., & Peters, J. (2009). An Expectation Maximization algorithm for continuous Markov Decision Processes with arbitrary reward. In the International Conference on Artificial Intelligence and Statistics (pp. 232\\u2013239). \\n\\nHoffman, M. W., Doucet, A., de Freitas, N., & Jasra, A. (2007). Bayesian policy learning with trans-dimensional MCMC. In Neural Information Processing Systems (pp. 665\\u2013672). \\n\\nHoffman, M. 
W., & de Freitas, N. (2012). Inference strategies for solving semi-Markov decision processes. In L. E. Sucar, E. F. Morales, & J. Hoey (Eds.), Decision Theory Models for Applications in Artificial Intelligence: Concepts and Solutions. IGI Global. \\n\\nKueck, H., Hoffman, M. W., Doucet, A., & de Freitas, N. (2009). Inference and Learning for Active Sensing, Experimental Design and Control. In Proceedings of the Iberian Conference on Pattern Recognition and Image Analysis (pp. 1\\u201310). \\n\\nFor code and pdf links, please go to http://mlg.eng.cam.ac.uk/hoffmanm/papers/ \\nI hope you find this useful. Best.\", \"title\": \"Related references on sequential Monte Carlo and other inference methods for planning\"}", "{\"title\": \"RE: Answer\", \"comment\": \"Great.\\n\\n1. Yes, this is the optimism issue raised in Levine 2018. Most control as inference methods use a structured variational objective, which fundamentally gets at this optimism bias.\\n\\n2. I'm not familiar with RL algorithms that suffer from this issue. If you have references, it would be great to add them to the discussion of related works.\"}", "{\"title\": \"Final results on CEM+value function\", \"comment\": \"Our final results with CEM+value function show no improved performance overall over vanilla CEM. This seems mainly due to the fact that the CEM policy and the SAC value function do not match and our value/Q losses diverge.\"}", "{\"title\": \"Answer\", \"comment\": \"Thank you for the more detailed answer, we think we finally understood the source of our disagreement. We believe we do not have the same definition of optimism bias, and while we do not suffer from any agent's delusion about the world, we do suffer from an overestimation bias of the mean return.\\n\\n1. In brief, the issue we believe you are talking about is the objective itself and thus intrinsic to the posterior.\\nIndeed, we do maximize log-Expectation-exp(Return) which is an upper bound on the expected return. Thus maximizing our objective might not mean that we have a good expected return. This is common in many control as inference methods.\\nWe are not certain what terminology is used in RL, but we would rather call that an overestimation bias.\\n\\n2. The optimism bias, even in psychology, is a delusion of the agent about the world, ie the agent believes the world will lead to unrealistically more desirable outcomes ( ie q(s'|s, a) = p(s'|s,a,O) instead of p(s'|s,a) ).\\nThis is actually the issue we were mentioning from the beginning and we explained why we do not suffer from it.\\n\\nWhile point 2 is not an issue for us, point 1 as you raised is indeed one. We will add a paragraph in the final version of the paper to explain the distinction.\\nWhy don't we suffer heavily from 1 then?\\nOur guess is that Mujoco is very close to deterministic and our model of the world learns very rapidly to predict the next state with a very low variance, thus we believe our transitions are close to deterministic, making this less of an issue.\"}", "{\"comment\": \"Hi, I think your work is interesting and have some questions as a reader of your work.\\n\\n1. I cannot figure out how the i.i.d. prior for the action sequences, i.e., \\\\prod_{t=1}^T p(a_t), can be used. I also checked Sergey Levine's tutorial and review on \\\"RL as Inference\\\", but i.i.d. action sequences are not shown in that tutorial. Would you please clarify this part? Personally, I think this part is quite weird.\\n\\n2. Any plan to open your source code?\\n\\n3. 
I wonder whether you've done a wall-clock-time comparison between model-free RL, e.g., SAC, and your work.\", \"title\": \"Some questions on your work.\"}", "{\"metareview\": \"This paper presents a new approach for posing control as inference that leverages Sequential Monte Carlo and Bayesian smoothing. There is significant interest from the reviewers into this method, and also an active discussion about this paper, particularly with respect to the optimism bias issue. The paper is borderline and the authors are encouraged to address the desired clarifications and changes from the reviewers.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Promising approach and text should be clarified on some points of active discussion\"}", "{\"title\": \"RE:\", \"comment\": \"Yes, your understanding of the optimism bias is incorrect.\\n\\nThe problem does not stem from inaccuracies in q, in fact, when q = p_env, the optimism bias is present. As you defined in your paper, p(x | O) \\\\propto p_env(x) \\\\exp(\\\\sum r_t). The problem is that p(x | O) incorporates exp(\\\\sum r_t) which biases samples of x towards states and actions with high reward. This is fine for actions, but causes an optimism bias for state transitions. Note that p(x | O) uses the real environment model and does not even depend on q, yet there is still optimism bias. As a result, under the posterior p(x | O), p(s' | s, a, O) != p_env(s' | s, a) which is the optimism bias meant by Levine.\\n\\nFor LQR systems, we compute p(x | y) w/o the reward (ie., no exp(\\\\sum r_t)) term. As a result, there is no optimism bias.\"}", "{\"title\": \"About CEM+value\", \"comment\": \"CEM+value baseline\\n===\\n\\nYes this is an option!\\nSo the proposed algorithm would be to use regular CEM, but for the optimization to maximize \\\\sum_i=1^h r_i + V_{t+h+1} where V is the value function from SAC instead of just \\\\sum_i=1^h r_i.\\n\\nWe are currently running it, and we do have some preliminary results. It does seem to do better on Hopper (some seeds seem to be around 1000 of return which vanilla CEM never did, while others are still very low), we don't see improvements over CEM for HalfCheetah and Walker2d for now.\", \"however_just_a_few_points\": [\"It seems to have some instabilities eg the value and Q-loss seem to always diverge (>>> billions while for (SIR)-SAC it is around 1 to 10). We believe it may be because it is using the value function from SAC while using the policy from CEM which could be really different. In our work, we could expect our planning policy to be closer to SAC policy as SAC policy was actually used as a proposal.\", \"It is probably possible to augment CEM with a value function in a principled and more stable way, but we think it is a contribution in itself and should be explored in a full paper.\"]}", "{\"title\": \"Additional feedback\", \"comment\": \"Hello,\\n\\nWe believe that we have addressed the points raised in your review (notably ESS plots + complex experiments).\\nDid you also have time to look at the updated version of the paper?\\n\\nLooking forward to hearing from you soon,\\nThank you.\"}", "{\"title\": \"About the optimism bias\", \"comment\": \"Optimism bias\\n===\\n\\nFirst, thank you for the fast answer, this is really appreciated. We furthermore had the opportunity recently to discuss with various researchers about our work and this question was raised several times. 
Therefore we are now convinced that it should be discussed on the final version of paper no matter what conclusion our discussion reaches.\\n\\nFor clarity p(s'|s, a) is what we called p_env(s'|s, a) in our work while q(s'|s, a) would be p_model(s'|s, a).\\n(we were actually wondering if that would be clearer to use in the paper as well.)\\n\\nHow we understand the optimism bias\\n---\\nWhen optimizing the the posterior with a variational proposal q e.g KL(q(x)||p(x|O)), we obtain the objective described in Levine 2018, sec 2.4, eq 10. It contains the expectation under q of the reward, the expectations under q of log p(s'|s, a) and the entropy of q.\\nThe important point is that we have some divergences of the transitions given by q and by p.\\n\\nHowever, if we maximize q with this objective, then we will learn a wrong transition model that assumes overly optimistic transitions.\\nIndeed, this is because the reward signal/optimality has been used implicitly to train the transition model q(s'|s,a). The transition model is actually trying to match p(s' | s, a, O) instead of p(s' | s, a) [Levine 2018, sec 2.4, eq 9]. This is due to the fact that the factorizations of p and q are different. This learned transition model is wrong and we believe this is what is called the optimism bias.\\n\\nThis is corroborated by [Levine 2018 sec 3]:\\n> The problematic nature of the maximum entropy framework in the case of stochastic dynamics, discussed in Section 2.3 and Section 2.4, in essence amounts to an assumption that the agent is allowed to control both its actions and the dynamics of the system in order to produce optimal trajectories, but its authority over the dynamics is penalized based on deviation from the true dynamics.\\n\\n\\nWhy we believe we don't suffer from it and why we think Levine 2018 corroborates our view\\n---\\n\\nThe solution proposed by Levine is to fix q(s'|s,a) to p(s'|s,a) [Levine 2018 sec 3.1], that way, q(s'|s,a) does not \\\"see\\\" the reward/optimality and thus can't be over-optimistic. This can be seen as a type of variational inference as well where some structure (here q(s'|s,a) == p(s'|s,a)) is forced into the variational distribution [Levine 2018 sec 3.2].\\n\\nMore generally, the issue we have to avoid is that q(s'|s,a) should NOT be trained to match p(s'|s, a, O) as jointly optimizing everything would do.\\nIn our case, we specifically force q(s'|s, a) to match p(s'|s, a) by doing it explicitly and training q(s'|s, a) to match p(s'| s, a) by MLE.\\n\\nFiltering, control and the posterior\\n---\\nMore generally, targeting a posterior like we do seems to be very widespread and established in the filtering and control communities. For instance a Kalman smoother estimates perfectly the posterior p(x_{1:T} | y_{1:T}) for linear-gaussian systems. \\nDo you believe that these methods also suffer from the optimism bias?\\nOur understanding is that they don't as the transition model (even when imperfect) is not trained to optimize the posterior but either known, modeled by hand or estimated by MLE from transition data (as we do).\\n\\n\\nI'd like to re-emphasize that we are open to the discussion. 
\\n- If you can convince us that we do suffer from the optimism bias, we'll gladly add a subsection discussing it and why we think our method still works or how we could improve on it maybe.\\n- If we can convince you that this is not an issue in this work, we believe we should still state in our paper why this is the case.\\n\\nPlease feel free to detail your thoughts and tell us exactly where you disagree with us.\"}", "{\"title\": \"RE:\", \"comment\": \"1. As done in your proposal, can the value function from SAC be used?\\n\\n2. The optimism bias does not stem from model error. The exact posterior with a perfect model suffers from optimism bias in stochastic environments. This is what is meant by Levine 2018.\"}", "{\"title\": \"Thank you for these clarifications.\", \"comment\": \"Glad to hear that the experiments are now corrected. My rating remain the same as before.\"}", "{\"title\": \"Clarifications\", \"comment\": \"1) \\u201cHowever, baselines that establish the claim that SMC improves planning which leads to improved control are missing (such as CEM + value function). \\u201c\\n\\nWe understand that CEM with a value function would be an interesting baseline, but we are not aware of any work that introduced what you are mentioning. Could you point us toward relevant work using CEM with a value function? For example, even the most recent work we could find using CEM (Hafner 2018) does not use a value function.\\n\\nFor instance, it is unclear to us how the value function should be learned. The most natural way to learn a value function, would be to do it online (ie learn the value function induced by the non-parametric CEM policy). Another alternative would be to learn a value function offline, but this would would be expensive since it would require to do a full planning step i.e. querying the generative model for multiple steps and then correcting the action chosen. Then we could either correct the expectation with importance sampling or use a Q-function similar to SAC.\\n\\nWe think there are many ways this could be designed, leading to various performances and behaviors: this is a very interesting direction, but we believe this would require a full paper rather than being introduced as a baseline.\\n\\nIn any case, we believe we have indeed very strong evidence to support our claims that SMC improves the sample efficiency of the model free proposal (section 5.2 experiments were done with 20 seeds following best practices from Henderson 2017 and Colas 2018, which is greatly superior to what is usually done in the field).\\n\\n2) \\u201cThe optimism bias stems from targeting the posterior, and is not due to errors in modeling the transitions\\u201d\\n\\nAre your referring to \\u201cexact inference in the graphical model produces an \\u201coptimistic\\u201d policy that assumes some degree of control over the system dynamics.\\u201d - Levine 2018, section 5.4?\\nIn our case, the model is NOT trained jointly with the policy (only from buffer data), so the policy does not assume any control on the system\\u2019s dynamics, thus our posterior is not overly optimistic.\"}", "{\"title\": \"RE:\", \"comment\": \"The optimism bias stems from targeting the posterior, and is not due to errors in modeling the transitions.\"}", "{\"title\": \"Answer to the reviewer\", \"comment\": \"We would like to thank the reviewer for the encouraging comments and important references.\\n\\n\\u201cWhen it comes to your SMC algorithm you will suffer from path degeneracy. [...] 
However, this can easily be fixed via backward simulation [...]\\u201d\\n\\nYes, we thank you for the suggestion. Particle Gibbs with Ancestral Sampling had been also brought to our attention to tackle this issue, but we choose to keep it simple in this work to focus more on introducing the idea rather than on getting the best results.\\n\\n1) \\u201cThere are no theoretical results on the properties of the proposed approach. However, given the large body of literature when it comes to the analysis of SMC methods I would expect that you can provide some results via the nice bridge that you have identified.\\u201d\\n\\nWe believe our method is grounded when p_model = p_env and we have access to the optimal value function.\\nHowever in most RL settings, both these assumptions are violated and it lessens the impact of the analysis.\\nA very interesting theoretical analysis we wish to make is to look if we can still provide some guarantees when the model and value function are approximately optimal, but a full theoretical study is still upcoming and out of scope of this paper.\\n\\n2) \\u201cWould this be possible to implement in a real-world setting with real-time requirements?\\u201d\\n\\nWe think it is possible if we replan every few steps instead of every step and keep a reasonable number of particles. Several methods bringing SMC methods to real-time systems exist. For instance, for embedded systems with real-time constraints, a FPGA implementation of SMC has been proposed (Ndeved et al, 2014, https://www.sciencedirect.com/science/article/pii/S1474667016429812).\\nWe also believe that additionally to a good search algorithm, we need to learn good representations (eg if the input is an image) and plan in the latent space.\\n\\n3) \\u201cA very detailed question when it comes to Figure 5.2 (right-most plot), why is the performance of your method significantly degraded towards the end? It does recover indeed, but I still find this huge dip quite surprising.\\u201d\\n\\nIndeed. We had more time to investigate this during the review period and we realized that some of our jobs were killed around step 40k. We have since rerun all our experiments and closely monitored that no such thing happened again. We are now confident our updated results are much stronger and show with high confidence the real performance of our method.\"}", "{\"title\": \"Answer to the reviewer\", \"comment\": \"We would like to thank the reviewer for the correction and added additional experimental details to better understand the behaviour of the method.\\n\\n1) \\u201c[...] the SMC algorithm adopted [...] is the simplest and earliest SMC algorithm adopted.[...]. I do not see the modern parts of these algorithms.\\u201d\\n\\nThis is fair point, we corrected the sentence.\\n\\n2) \\u201cThe experiment section reports the return, but it is unclear to me how the SMC algorithm in this case. For example, what is the effective sample size (ESS) in these settings?\\u201d\\n\\nIn this case, the SMC algorithm now clearly outperforms the SAC baseline as you can see in the updated version of the plot. Furthermore, we have added a new section in the Appendix A.8 describing the evolution of the ESS during training. While it is not very high, is is usually around 15% of the sample size, which we believe is reasonable so that we do not suffer heavily from weight degeneracy. 
\\n\\n3) \\u201cBut it is unclear to me how the algorithm proposed is applicable in complex continuous tasks as claimed.\\u201d\\n And \\u201cThe experiment described seems to be a 2-dimensional set up. How does the algorithm perform with a high-dimensional planning problem?\\u201d\\n\\nYes, the 2d experiment is merely illustrative, the complex continuous tasks mentioned are illustrated with the experiments on Mujoco in subsection 2 of the experiments. In section 5.2, we have updated our performance results on the 3 classic Mujoco environments. Their respective state/action dimensions are: \\nWalker2d-v2, state (17,), action (6,)\\nHopper-v2, state (11,), action (3,)\\nHalfCheetah-v2, state (17,), action (6,)\\n\\nStill, we have removed the mention of \\u201chigh dimensional\\u201d as control tasks (ie Mujoco), while complex, are maybe not what the statistical community would call \\u201chigh dimensional\\u201d. Also, vanilla particle filters are known to suffer from the curse of dimensionality, especially if the proposal is poor.\\nA solution we leave to future work would be to do the planning in latent space, in that case our method could scale even with very high dimensional inputs.\"}", "{\"title\": \"Answer 3/3\", \"comment\": \"13) \\u201cSec 3.2, mentions an action prior for the first time. Where does this come from?\\u201d\\n\\nThis action prior comes from the factorization of the HMM model in section 2.1 (it is typically considered constant or already included in the reward). We follow the notation of Levine 2018 that omits it for conciseness. We decided to add a footnote on eq 2.1 for clarity as well as a section in the Appendix A.2.\\n\\n14) \\u201cSec 3.3 derives updates assuming a perfect model, but we learn a model. What are the implications of this?\\u201d\\n\\nThis is a necessary assumption that most planning algorithm (CEM, LQR\\u2026) make. Implications of this assumption are model compounding errors on the plan. To be more robust to model errors, it is typical to replan a each time step (Model Predictive Control) as we do. We added some clarification in this subsection.\\n\\n15) \\u201cPlease ensure the line #'s and the algorithm line #'s match.\\u201d\\n\\nWe have updated the algorithm section. Now the lines should match and the algorithm in written in a more comprehensive way.\\n\\n16) \\u201cDoes CEM use a value function? If not, it seems like a reasonable baseline to consider CEM w/ a value function to summarize the values beyond the planning horizon. This will evaluate whether SMC or including the value function is important.\\u201c\\n\\nWe think it is fair to compare to it as it is. Indeed, CEM is a method that has been used successfully in multiple settings e.g. Tetris and it is the default algorithm for planning in the deep RL community (e.g. Chua 2018) and is a baseline algorithm for us.\\nMoreover, as we do not do any learning in the toy example, SMCP does not use a value function. Even then, we see that our algorithm can handle multimodality while CEM cannot.\\n\\n17) \\u201cModel learning is not described in the main text though it is a key component of the algorithm. The appendix lacks details and contradicts itself.\\u201d + \\u201cComparing to state-of-the-art model-based RL.\\u201d\\n\\nWe corrected inconsistencies and added details. 
Note that we used a fairly standard probabilistic model (gaussian likelihood) and focus most of the space to describe our contribution: the planning algorithm, since any good model would work well.\\nThese work are indeed relevant, but also complementary to ours. For example, Model Ensemble could potentially improve our results and those of the planning baselines. We added references to these papers in the text.\\n\\n18) \\u201cIn Sec 5.1, the authors provide an example of SMCP learning a multimodal policy. This is interesting, but can the authors explain when this will be helpful?\\u201d\\n\\nRL algorithms are known to suffer from a mode seeking behavior and often only discover suboptimal solutions. We believe the ability to handle multimodality could help discovering new solutions to a task.\"}", "{\"title\": \"Answer 2/3\", \"comment\": \"6) \\u201cHow were the task # of steps chosen? They seem arbitrary. What is the performance at 1million and 5million steps?\\u201d\\n\\nAs stated in the conclusion, our algorithm is expensive. Given that we train the model, the SAC networks (policy, value and Q functions), and we perform a full planning a each time step (MPC), training for 250k steps already takes a few days.\\nWe decided to allocate our computing resources on producing more seeds rather than longer runs. It should be noted however that we do not expect our algorithm to keep outperforming SAC in the long run. We believe this is a behavior to be expected when planning with imperfect models, in the long run, the model-free method will find a good policy while the planning part will still suffer from model errors. We think this is also the case for humans; when confronted with a new situation we tend to plan, but as we become more familiar with it, our reflexes/habitus are more accurate.\\nAs a solution, we could also learn when and how long to plan, but we believe this is out of scope for this work.\\n\\n\\n7) \\u201cWas SAC retuned for this small number of samples/steps?\\u201d\\n\\nNo, it was not, we took the default values from the SAC paper. However we think it is fair since we use the exact same version of SAC for our proposal distribution and thus the only difference is from the planning algorithm.\\n\\n8) \\u201cClarify where the error bars come from in Fig 5.2 in the caption.\\u201d\\nYes we have added clarification. The error bars are 1 standard deviation from the mean with 20 seeds for each algorithm. This is the default setting for the confidence interval computation with the seaborn package.\\n\\n9) \\u201cIn the abstract, the authors claim that the major challenges in planning are: 1) model compounding errors in roll-outs and 2) the exponential search space. Their method only attempts to address 2), is that correct? If so, can the authors state that explicitly.\\u201d\\n\\nYou are correct, we reformulate the introduction to clearly state the problem we are tackling: search algorithm. We do acknowledge that this is a very important issue -but that is not part of our contribution- in the related work and conclusion sections.\\n\\n10) \\u201cI found the discussion of SAC at the end of Sec 2.1 confusing. As I understand SAC, it does try to approximate the gradient of the variational bound directly. Can the authors clarify what they mean?\\u201d \\n\\nWe clarified the discussion. 
We think the distinction is mostly that a policy gradient algorithm would use a Monte Carlo return while SAC uses soft value functions and the policy is taken to be the Boltzmann distribution over the soft-Q values. This discussion was inspired by Section 4.2 of Levine (2018).\\n\\n11) \\u201cThe connection between Gu et al.'s work on SMC and SAC was unclear in the intro, can the authors clarify?\\u201d\\n\\nWe think this discussion is actually more adapted for the related work section. There, we have now clarified the connection between Gu\\u2019s work on SMC and SAC. \\n\\n12) \\u201cIn Sec 4.1, a major difference between MCTS and SMC is that MCTS runs serially, whereas SMC runs in parallel. This should be noted and then it's unclear whether SMC-Planning should really be thought of as the maximum entropy tree search equivalent of MCTS.\\u201d + \\u201cIn Sec 4.1, the authors claim that Alpha-Go and SMCP learn proposals in similar ways. However, SMCP minimizes the KL in the reverse direction (from stated in the text). This is an important distinction.\\u201d + \\u201cIn Sec 4.3, the authors note that Gu et al. learn the proposal with the reverse KL from SMCP. VSMC (Le et al. 2018, Naesseth et al. 2017, Maddison et al. 2017) is the analogous work to Gu et al. that learn the proposal using the same KL direction as SMCP. The authors should consider citing this work as it directly relates to their algorithm.\\u201d + \\u201cIn Sec 4.3, the authors claim that their direction of minimizing KL is more appropriate for exploration. Gu et al. suggest the opposite in their work \\n\\nThe reviewer correctly pointed out some inconsistencies and vagueness in the related work section. We decided to rewrite it concisely and only focus on pointing toward relevant work to ours.\"}", "{\"title\": \"Answer to reviewer 1/3\", \"comment\": \"We would like to thank the reviewer for this very thorough review. We believe that these comments are making the paper clearer and stronger.\\n\\n1) \\u201c[...] the experimental results do not provide compelling support for the algorithm.\\u201c\\n\\nWe agree the initial results were not compelling in that regard. We have updated the results and we now believe the performance of our planning method appears clearly. We used 20 seeds and also added a significance test following guidelines by Colas et al 2018 in Appendix A.7. We furthermore added more experimental details in the Appendix A.5 and A.8\\n\\n2) \\u201cLevine 2018 explains that with stochastic transitions, computing the posterior leads to overly optimistic behavior because the transition dynamics are not enforced, whereas the variational bound explicitly enforces that. Is that an issue here?\\u201d\\n\\nOur model is trained by maximum likelihood as in Chua et al. 2018 only from data, separately from the policy and planning. Thus, the policy has no control over the system dynamics, hence the model is not encouraged to yield over-optimistic transitions. We have added details about the model training procedure in the experiments section and have update our pseudo-code for clarity.\\n\\n3) \\u201cThe value function estimated in SAC is V^\\\\pi the value function of the current policy. The value function needed in Sec 3.2 is a different value function. Can the authors clarify on this discrepancy?\\u201d\\n\\nIndeed. However as we do not have access to the optimal value function, we use the current value function of SAC as a proxy. 
As the SAC-policy will converge to a policy closer to optimality, so will its value function. Therefore, we think this is a sensible practical choice, and this is similar to what is done in actor-critic methods for instance.\\n\\n4) \\u201cThe SMC procedure in Alg. 1 appears to be incorrect. It multiplies the weights by exp(V_{t+1}) before resampling. This needs to be accounted for by setting the weights to exp(-V_{t+1}) instead of uniform. See for example auxiliary particle filters.\\u201d\\n\\nYes, there was indeed an issue with the weight update that we have now fixed and it does indeed align with your intuition. \\nTo be precise, we believe the weight update should be done by multiplying the previous weight by exp(r - log pi + V\\u2019 -log E_{s_t | a_t-1 s_t-1} exp V(s_t)).\\nWe thought (wrongly) that the log-expectation-exp was equal to the normalization constant when normalizing the weights, thus redundant. However, this normalization constant takes its expectation under the states the particles are in at time t rather than the transition dynamics as it should be done.\\nBy fixing the update, we now believe we have the right formula, and this allows us to have an unweighted empirical estimate of the posterior.\\nThis is indeed similar in spirit to the auxiliary particle filter; we thank you for the reference and for pointing out the issue, as it helped us derive the right update formula.\\n\\n5) \\u201cHow was the planning horizon h chosen? Is the method sensitive to this choice? What is the model accuracy?\\u201d + \\u201cAt the end of Sec 2.2, the authors claim that the tackle the particle degeneracy issue (a potentially serious issue) by \\\"selecting the temperature of the resampling distribution to not be too low.\\\" I could not find further discussion of this anywhere in the paper or appendix.\\u201d\\n\\nWe did not do any extensive hyperparameter search in the beginning. We tried mostly temperatures in the range [1-10]. We checked the ESS while training to make sure we did not have any weight degeneracy issue. See A.8 for a plot of the ESS during training.\\nWe have tried horizons from 5 to 50, and while the performance is pretty stable across this range of horizons, h~20 seems a good value to work with for Walker2d and HalfCheetah. Hopper was more challenging and we found out that typically shorter horizons worked marginally better. \\nThe path degeneracy is indeed a very serious issue, and we definitely suffer from it even when tuning the temperature. While some modern smoothing algorithms like Particle Gibbs with Ancestor Sampling can alleviate it, our goal in this work is to introduce a new simple and motivated way of doing planning rather than obtaining the best performance possible.\"}", "{\"title\": \"Overview of the changes\", \"comment\": \"We would like first to thank all reviewers for their work. We did a major revision of the paper based on the issues pointed out.
We believe this current form is now much clearer and stronger and addresses the points raised by the reviewers.\", \"outline_of_the_revisions\": [\"Simplified the abstract and clarified the introduction.\", \"Fixed small typos and inaccuracies in section 2 (Background).\", \"We reworked section 3.3 and 3.4 (SMCP) and fixed an issue in the weight update.\", \"We added new strong and significant experimental results on Mujoco.\", \"We reworked and wrote a more comprehensive section 5 (Related work) and discussed relevant papers, such as the ones pointed out by the reviewers.\", \"Appendices: Included additional details and experimental figures.\"]}", "{\"title\": \"Sequential Monte Carlo (SMC) has since its inception some 25 years ago proved to be a powerful and generally applicable tool. The authors of this paper continue this development in a very interesting and natural way by showing how SMC can be used to solve challenging planning problems. This is a enabled by reformulating the planning problem as an inference problem via the recent trend referred to as \\\"control as inference\\\".\", \"review\": \"Sequential Monte Carlo (SMC) has since its inception some 25 years ago proved to be a powerful and generally applicable tool. The authors of this paper continue this development in a very interesting and natural way by showing how SMC can be used to solve challenging planning problems. This is a enabled by reformulating the planning problem as an inference problem via the recent trend referred to as \\\"control as inference\\\". While there is unfortunately no real world experiments, the simulations clearly illustrate the potential of the approach.\\nWhile the idea of viewing control as inference is far from new the idea of using SMC in this context is clearly novel as far as I can see. Well, there has been some work along the same general topic before, see e.g.\\nAndrieu, C., Doucet, A., Singh, S.S., and Tadic, V.B. (2004). Particle methods for change detection, system identification, and contol. Proceedings of the IEEE, 92(3), 423\\u2013438.\\nHowever, the particular construction proposed in this paper is refreshingly novel and interesting. Hence, I view the specific idea put fourth in this paper as highly novel. The general idea of viewing control as inference goes far back and there are very nice dual relationships between LQG and the Kalman filter established and exploited long time ago.\\n\\nThe authors interprets \\\"control as inference\\\" as viewing the planning problem as a simulation exercise where we aim to approximate the distribution of optimal future trajectories. A bit more specifically, the SMC-based planning proposed in the paper stochastically explores the most promising trajectories in the tree and randomly removes (via the resampling operation) the less promising branches. Importantly there are convergence guarantees via the use of SMC. The idea is significant in that it opens up for the use of the by now strong SMC body of methods and analysis when it comes to challenging and intractable planning problems. I foresee many interesting developments to follow in the direction layed out by this paper. \\n\\nWhen it comes to your SMC algorithm you will suffer from path degeneracy (as all SMC algorithms does, see e.g. Figure 1 in https://arxiv.org/pdf/1307.3180.pdf) and if h is large I think this can be a problem for you. However, this can easily be fixed via backward simulation. For an overview of backward simulation see \\nLindsten, F. and Schon, T. 
\\\"Backward simulation methods for Monte Carlo statistical inference\\\". Foundations and Trends in Machine Learning, 6(1):1-143, 2013.\\n\\nI am positive to this paper (clearly reveled by my score as well), but there are of course a few issues as well:\\n1. There are no theoretical results on the properties of the proposed approach. However, given the large body of literature when it comes to the analysis of SMC methods I would expect that you can provide some results via the nice bridge that you have identified.\\n2. Would this be possible to implement in a real-world setting with real-time requirements?\\n3. A very detailed question when it comes to Figure 5.2 (right-most plot), why is the performance of your method significantly degraded towards the end? It does recover indeed, but I still find this huge dip quite surprising.\", \"minor_details\": \"* The initial references when it comes to SMC are wrong. The first papers are:\\nN.J. Gordon, D. Salmond and A.F.M. Smith, Novel approach to nonlinear/non-Gaussian Bayesian state estimation, IEE Proc. F, 1993\\nL. Stewart, P. McCarty, The use of Bayesian Belief Networks to fuse continuous and discrete information for target recognition and discrete information for target recognition, tracking, and situation assessment, in Proc. SPIE Signal Processing, Sensor Fusion and Target Recognition,, vol. 1699, pp. 177-185, 1992.\\n G. Kitagawa, Monte Carlo filter and smoother for non-Gaussian nonlinear state-space models, JCGS, 1996 \\n* When it comes to the topic of learning a good proposal for SMC with the use of variational inference the authors provide a reference to Gu et al. (2015) which is indeed interesting and relevant in this respect. However, on this hot and interesting topic there has recently been several related papers published and I would like to mention:\\nC. A. Naesseth, S. W. Linderman, R. Ranganath, D. M. Blei, Variational Sequential Monte Carlo. Proceedings of the 21st International Conference on Artificial Intelligence and Statistics, Lanzarote, Spain, April 2018.\\nC. J. Maddison, D. Lawson, G. Tucker, N. Heess, M. Norouzi, A. Mnih, A. Doucet, and Y. Whye Teh. Filtering variational objectives. In Advances in Neural Information Processing Systems, 2017.\\nT. A. Le, M. Igl, T. Jin, T. Rainforth, and F. Wood. AutoEncoding Sequential Monte Carlo. arXiv:1705.10306, May 2017.\\n\\nI would like to end by saying that I really like your idea and the way in which you have developed it. I have a feeling that this will inspire quite a lot of work in this direction.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"More work/evaluation on the SMC part needed\", \"review\": \"This paper proposes a sequential Monte Carlo Planning algorithm that depicts planning as an inference problem solved by SMC. The problem is interesting and the paper has a nice description of the related work. In terms of the connection between the the problem and Bayesian filtering as well as smoothing, the paper has novelty there. But it is unclear to me how the algorithm proposed is applicable in complex continuous tasks as claimed.\\n\\nIn the introduction, the authors wrote that \\\"We design a new algorithm, Sequential Monte Carlo Planning (SMCP), by leveraging modern methods in Sequential Monte Carlo (SMC), Bayesian smoothing, and control as inference\\\". 
From my understanding, the SMC algorithm adopted is the bootstrap particle which is the simplest and earliest SMC algorithm adopted. The Bayesian smoothing algorithm described is also standard. I do not see the modern parts of these algorithms.\\n\\nThe experiment section reports the return, but it is unclear to me how the SMC algorithm in this case. For example, what is the effective sample size (ESS) in these settings?\\n\\nThe experiment described seems to be a 2-dimensional set up. How does the algorithm perform with a high-dimensional planning problem?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting preliminary work, but requires major revisions\", \"review\": \"The authors formulate planning as sampling from an intractable distribution motivated by control-as-inference, propose to approximately sample from the distribution using a learned model of the environment and SMC, then evaluate their approach on 3 Mujoco tasks. They claim that their method compares favorably to model-free SAC and to CEM and random shooting (RS) planning with model-based RL.\\n\\nThis is an interesting idea and an important problem, but there appear to be several inconsistencies in the proposed algorithm and the experimental results do not provide compelling support for the algorithm. In particular,\\n\\nLevine 2018 explains that with stochastic transitions, computing the posterior leads to overly optimistic behavior because the transition dynamics are not enforced, whereas the variational bound explicitly enforces that. Is that an issue here?\\n\\nThe value function estimated in SAC is V^\\\\pi the value function of the current policy. The value function needed in Sec 3.2 is a different value function. Can the authors clarify on this discrepancy?\\n\\nThe SMC procedure in Alg 1 appears to be incorrect. It multiplies the weights by exp(V_{t+1}) before resampling. This needs to be accounted for by setting the weights to exp(-V_{t+1}) instead of uniform. See for example auxiliary particle filters.\", \"the_experimental_section_could_be_significantly_improved_by_addressing_the_following_points\": [\"How was the planning horizon h chosen? Is the method sensitive to this choice? What is the model accuracy?\", \"Does CEM use a value function? If not, it seems like a reasonable baseline to consider CEM w/ a value function to summarize the values beyond the planning horizon. This will evaluate whether SMC or including the value function is important.\", \"Comparing to state-of-the-art model-based RL (e.g., one of Chua et al. 2018, Kurutach et al. 2018, Buckman et al. 2018).\", \"How were the task # of steps chosen? They seem arbitrary. What is the performance at 1million and 5million steps?\", \"Was SAC retuned for this small number of samples/steps?\", \"Clarify where the error bars come from in Fig 5.2 in the caption.\", \"At the moment, SMCP is within the error bars of a baseline method.\"], \"comments\": \"In the abstract, the authors claim that the major challenges in planning are: 1) model compounding errors in roll-outs and 2) the exponential search space. Their method only attempts to address 2), is that correct? If so, can the authors state that explicitly.\\n\\nRecent papers (Chua et al. 2018, Kurutach et al. 2018, Buckman et al. 2018, Ha and Schmidhuber 2018) all show promising model-based results on continuous state/action tasks. 
These should be mentioned in the intro.\\n\\nThe connection between Gu et al.'s work on SMC and SAC was unclear in the intro, can the authors clarify?\\n\\nFor consistency, ensure that sums go to T instead of \\\\infty.\\n\\nI found the discussion of SAC at the end of Sec 2.1 confusing. As I understand SAC, it does try to approximate the gradient of the variational bound directly. Can the authors clarify what they mean?\\n\\nAt the end of Sec 2.2, the authors claim that the tackle the particle degeneracy issue (a potentially serious issue) by \\\"selecting the temperature of the resampling distribution to not be too low.\\\" I could not find further discussion of this anywhere in the paper or appendix.\\n\\nSec 3.2, mentions an action prior for the first time. Where does this come from?\\n\\nSec 3.3 derives updates assuming a perfect model, but we learn a model. What are the implications of this?\\n\\nPlease ensure the line #'s and the algorithm line #'s match.\\n\\nModel learning is not described in the main text though it is a key component of the algorithm. The appendix lacks details (e.g., what is the distribution used to model the next state?) and contradicts itself (e.g., one place says 3 layers and another says 2 layers).\\n\\nIn Sec 4.1, a major difference between MCTS and SMC is that MCTS runs serially, whereas SMC runs in parallel. This should be noted and then it's unclear whether SMC-Planning should really be thought of as the maximum entropy tree search equivalent of MCTS.\\n\\nIn Sec 4.1, the authors claim that Alpha-Go and SMCP learn proposals in similar ways. However, SMCP minimizes the KL in the reverse direction (from stated in the text). This is an important distinction.\\n\\nIn Sec 4.3, the authors note that Gu et al. learn the proposal with the reverse KL from SMCP. VSMC (Le et al. 2018, Naesseth et al. 2017, Maddison et al. 2017) is the analogous work to Gu et al. that learn the proposal using the same KL direction as SMCP. The authors should consider citing this work as it directly relates to their algorithm.\\n\\nIn Sec 4.3, the authors claim that their direction of minimizing KL is more appropriate for exploration. Gu et al. suggest the opposite in their work. Can the author's justify their claim?\\n\\nIn Sec 5.1, the authors provide an example of SMCP learning a multimodal policy. This is interesting, but can the authors explain when this will be helpful?\\n\\n====\\n\\n11/26\\nAt this time, the authors have not responded to reviews. I have read the other reviews. Given the outstanding issues, I do not recommend acceptance.\\n\\n12/7\\nAfter reading the author's response, I have increased my score. However, baselines that establish the claim that SMC improves planning which leads to improved control are missing (such as CEM + value function). Also, targeting the posterior introduces an optimism bias that is not dealt with or discussed.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
SyeKf30cFQ
A theoretical framework for deep and locally connected ReLU network
[ "Yuandong Tian" ]
Understanding theoretical properties of deep and locally connected nonlinear network, such as deep convolutional neural network (DCNN), is still a hard problem despite its empirical success. In this paper, we propose a novel theoretical framework for such networks with ReLU nonlinearity. The framework bridges data distribution with gradient descent rules, favors disentangled representations and is compatible with common regularization techniques such as Batch Norm, after a novel discovery of its projection nature. The framework is built upon teacher-student setting, by projecting the student's forward/backward pass onto the teacher's computational graph. We do not impose unrealistic assumptions (e.g., Gaussian inputs, independence of activation, etc). Our framework could help facilitate theoretical analysis of many practical issues, e.g. disentangled representations in deep networks.
[ "theoretical analysis", "deep network", "optimization", "disentangled representation" ]
https://openreview.net/pdf?id=SyeKf30cFQ
https://openreview.net/forum?id=SyeKf30cFQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1xW__YVxE", "SyeIhpXAR7", "HJeAzKeA0m", "rkgcPMmZAm", "ByxFsnnbp7", "r1gmaqNbam", "HJeS_wyh2m", "HklFc7kchX", "HkeLn38Kn7", "Syl7jfgxq7", "S1edfAN19m" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1545013352744, 1543548333708, 1543534869960, 1542693474068, 1541684385463, 1541651130612, 1541302124850, 1541170065320, 1541135533831, 1538421402748, 1538375183656 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1284/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1284/Authors" ], [ "ICLR.cc/2019/Conference/Paper1284/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1284/Authors" ], [ "ICLR.cc/2019/Conference/Paper1284/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1284/Authors" ], [ "ICLR.cc/2019/Conference/Paper1284/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1284/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1284/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1284/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"This paper studies the behavior of gradient descent on deep neural network architectures with spatial locality, under generic input data distributions, using a planted or \\\"teacher-student\\\" model.\\n\\nWhereas R1 was supportive of this work, R2 and R3 could not verify the main statements and the proofs due to a severe lack of clarity and mathematical rigor. The AC strongly aligns with the latter, and therefore recommends rejection at this time, encouraging the authors to address clarity and rigor issues and resubmit their work again.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Further iteration needed\"}", "{\"title\": \"Clear some confusions\", \"comment\": \"Thanks R3 for taking time to read the revised paper! We really appreciated it.\\n\\nThe assumption of \\\"locally connected neural network\\\" is indeed important. However we regard this assumption as an important contribution of this paper rather than a restriction. First of all, network with such structures (e.g., CNN) indeed works in practice but how they work remain an open problem. Therefore, a theoretical analysis is important. Furthermore, without such structural assumptions, a general analysis of neural networks won't lead to meaningful conclusion that connects to what we see in practice. Our paper is an attempt to make such connections. \\n\\nThe assumptions in Theorem 2 are intuitive and are explained right after Theorem 2. For the first assumption, the intuition is: the summarization variable z_\\\\alpha at the receptive field \\\\alpha mainly captures the input content x_\\\\alpha at that receptive field. Given that summarization variable z_\\\\alpha, others have minor effects. Therefore, we assume that P(x_\\\\alpha | z_\\\\alpha, z_{others}) = P(x_\\\\alpha | z_\\\\alpha). The second assumption is technical and is true when z_\\\\alpha splits x_\\\\alpha well. In reality, we can always turn the two assumptions into bounds and Theorem 2 becomes bounds as well. \\n\\nWe argue that these assumptions are way more natural than parametric forms (e.g., Gaussian inputs), and serve as one step closer to the real situations.\", \"marginalized_gradient\": \"At iteration t, we can always treat the current weights as constants and take expectation over the data distribution. 
In fact, it is a standard practice in many previous theoretical papers [1-4], no matter whether population gradient or stochastic gradient are used.\\n\\nWe didn't take expectation with respect to the output of the neuron. Do you mean the label $y$? In this case, it is not an issue since y is assumed to be the deterministic function of the input x. Therefore, taking expectation with respect to (x, y) is the same as with respect to x (the input) alone. We already explained it in the paper (Page 4, just before \\\"Marginalized Gradient\\\"). \\n\\nFinally, the notations $g_j(x_k)$, $g_j(x_j)$ and $g_j(x)$ were intentionally overloaded to make the notation cleaner. Otherwise one would need to use different functions for different version of the gradient signal. Alternatively we could use $g_{j\\\\rightarrow k}(x_k)$, which is a bit heavy in notation (e.g., too many \\\"k\\\"). Therefore, this is more or less a subjective thing. \\n\\nThroughout the paper, it is a convention that node j is the parent of node k. We will mention clearly the role of j and k in Theorem 2 in the next revision.\", \"references\": \"[1] R. Ge et al. Learning One-hidden-layer Neural Networks with Landscape Design. https://arxiv.org/abs/1711.00501\\n[2] S. Du et al. Gradient Descent Learns One-hidden-layer CNN: Don't be Afraid of Spurious Local Minima\", \"https\": \"//arxiv.org/abs/1702.07966\\n[4] Z. Pan and J. Feng, Empirical Risk Landscape Analysis for Understanding Deep Neural Networks. https://openreview.net/pdf?id=B1QgVti6Z\"}", "{\"title\": \"The notation is still not clearly explained.\", \"comment\": \"I appreciate the effort of the authors to make the paper more accessible. However, in the updated version, it seems that some notation is still not clearly defined or explained. Some examples are as follows. In equation (2), the notation $g_j(x_k)$ seems to coincide with $g_j(x)$, which might be misleading. In Theorem 1, what is the relationship between j and k? It would be better to elaborate more on how to use the recursive relations to compute the marginal gradient.\\nMoreover, on page 4, Equ.20 should be Equ. 4.\\n\\nMy main concern about Theorem 2 is that the assumption might not hold. Still, in the revised version, the authors are unable to come up with an example that shows these abstract assumptions are true.\\n\\nMy another concern is with the assumption of the locally connected neural network. It seems that the whole derivation hinges on this assumption. \\n\\nFurthermore, the pivotal part of the framework is the Marginalized Gradient, which is defined by taking conditional expectations of the input of each layer. However, the weights of the neural network depend on the input data, which makes it unable to take expectation as if the weights are deterministic. Treating these weights as deterministic numbers are only possible if you consider the SGD setting where you use a fresh sample to evaluate the expectation. Even in this case, the expectation should be taken only with respect to the input data. However, it seems that, in the paper, expectations are taken with respect to the input and outputs of each neuron.\"}", "{\"title\": \"Revision (v2)\", \"comment\": \"We have updated the main text of our paper to make the writing more clear. Please take a look.\\n\\n1. Our teacher-student setting is introduced with more explanation and examples (e.g., a conceptual comparison between our modeling and top-down generative model). \\n\\n2. We have added more related works. \\n\\n3. 
The assumptions in Theorem 2 are explained, showing they are mild assumptions. \\n\\n4. The relationship between input data distribution P(x) and the conditional distribution of P(z_\\\\beta|z_\\\\alpha) is explained. \\n \\n5. We put a table (Tbl. 2) explaining all notations used in Sec. 5.2. \\n\\n6. A better explanation of how backpropagation of BatchNorm was regarded as a projection and how it is compatible with our framework. S(f) now is defined. \\n\\nDue to time limit, we will add empirical study in the camera ready (if there is one).\"}", "{\"title\": \"thanks for the clarification\", \"comment\": \"After the clarification the model seems to be making more sense (on the other hand I really couldn't find how the teacher is defined in the original version, looking forward to your updated version). I will try to evaluate the paper again once the revision is uploaded.\"}", "{\"title\": \"Rebuttal\", \"comment\": \"We thank the reviewers for their insightful comments.\\n\\nWe acknowledge that there is confusion in terms of paper organization and math notations. We are working on a revision, which will be uploaded by Nov. 23. \\n\\nFor now we first address the main questions raised by the reviewers. \\n\\n1. [R2] There is no main theorem in this paper. \\n\\nIt remains a grand challenge for the whole community to come up with a theorem that relates data distribution to the properties of deep and nonlinear network. As the major contribution and a first step, we propose a reformulation that explicitly relates data distribution to the gradient descent optimization procedure. With this reformulation, we now can explicitly study how data distribution affects the property of the network. Along this direction, we put a few initial discussions in the paper. An application of this reformulation towards a major theorem is left to future work. \\n\\nBesides, we also discover a property of back-propagated gradient of BatchNorm (Sec. 4) and show that this property is preserved in the reformulation. \\n\\n2. [R1][R2] How teacher is defined.\\n\\nThe teacher is specified in a bottom-up manner (rather than top-down, as suggested by the reviewer 2). First the lowest layers of summarization is computed, then the second lowest layer of summarization is computed based on the lowest layers (ref. Sec 3.1: z_\\\\alpha = z_\\\\alpha(z_\\\\beta)), until the top-level summarization is computed, which is the class label y. At each stage, we assume that the upwards function be deterministic and typically drop irrelevant information w.r.t the class label. The reason why we want a deterministic function is for the proof of Theorem 2. Note that while top-down graphical model requires nondeterministic function (since new information needs to be added), assuming deterministic function in a bottom-up setting is natural. \\n\\n3. [R2] How z_\\\\alpha is picked and how to define P(x_\\\\alpha|z_\\\\alpha):\\n\\nz_\\\\alpha is picked by the teacher. Since the teacher provides the classification labels for the student, any choice of z_\\\\alpha would fit to the theory. Intuitively, z_\\\\alpha is a summarization of the content x_\\\\alpha within the receptive field \\\\alpha, which contributes to the final label y. \\n\\nDue to a loss of information, there are multiple x_\\\\alpha that maps to the same z_\\\\alpha. Therefore, we can define P(x_\\\\alpha|z_\\\\alpha). \\n\\n4. [R2][R3] Is the assumption in Theorem 2 (and 3) realistic? \\n\\nTypically, z_\\\\alpha and z_\\\\beta are overlapping (Fig. 1(b)). 
In particular, if \\\\alpha is a parent of \\\\beta, then the receptive field \\\\alpha covers \\\\beta. With this in mind, the two assumptions in Theorem 2: \\n\\n P(x_\\\\alpha|z_\\\\alpha, z_\\\\beta) = P(x_\\\\alpha|z_\\\\alpha) \\n\\n and \\n\\n P(x_\\\\beta|z_\\\\alpha, z_\\\\beta) = P(x_\\\\beta|z_\\\\beta) \\n\\nare natural, since each z is most related to the information of its own receptive field. \\n\\nTheorem 3 is a limiting case of Theorem 2, which gives an example about when the assumptions of theorem 2 hold exactly. If the assumptions of Theorem 2 are relaxed (e.g., ||P(x_\\\\alpha|z_\\\\alpha, z_\\\\beta) - P(x_\\\\alpha|z_\\\\alpha)|| \\\\le \\\\epsilon), still we have bounds (instead of equalities) in Theorem 2. \\n\\n5. [R1] The data distribution is indirectly characterized by the conditional distribution P(z_\\\\alpha | z_\\\\beta), which may not be ideal/questionable. \\n\\nWe think it is more like a merit rather than a shortcoming. An indirect specification (like what we give in the paper) gives much more flexibility of the distribution x. In contrast, a direct/parametric specification (e.g., the input data is Gaussian) might look mathematically clear but is probably not true in practice. On the other hand, given this indirect specification, we agree that the resulting distribution of input deserves further empirical study (e.g., via sampling x given z). \\n\\n 6. [R1] Empirical study\\nWe will add more empirical studies in the next revision. We already observe the convergence of the reformulations (Theorem 2) under random conditional distribution of summarization variable (P(z_\\\\alpha|z_\\\\beta)), and much faster convergence if BatchNorm is used.\"}", "{\"title\": \"This paper proposes a new framework for understanding the Relu networks in theory. However, the assumptions are not justified and the definitions seems not clear.\", \"review\": \"This paper proposes a new approach to understand the theory of RELU neural networks. Using a teacher-student setting, this paper studies the batch normalization and the disentangled representations of neural networks. However, the definitions of some of the concepts and notation are not sufficiently clear. In addition, the assumptions that the main results of this paper depend on do not have clear intuitions.\", \"detailed_comments\": \"1. It seems that this paper over claims its contribution. It is not clear why the \\\"teacher-student setting\\\" can be called a theoretical framework, even the definitions of the teacher and the student are not clear. It seems that the new framework is just a way to compute the relations of the gradients of neurons based on a few assumptions (Theorem 2).\\n\\n2. I found it very hard to follow the notations given in this paper. The main reason is that many of the terms appear without a definition, and the reader has to guess what they stand for. For example, in equation (2), w_{jk} seems to be the weight between nodes j and k, where k is a child of j. But this term is not defined. As another example, all the matrices in Theorem 9 are not defined. They just suddenly appear. In addition, S(f) in (11) is not defined. I would suggest the authors to spend one section to carefully define everything. \\n\\n3. The theorems all depends on some assumptions that are unclear whether will hold in practice or not. For example, in theorem 2, it is hard to see what kind of data distribution satisfy these three conditions. 
Although in Theorem 3 the author gave a sufficient condition, we still don't know what kind of $X$ satisfies this. For example, does the Gaussian distribution satisfy these? This problem also happens to other theorems. It would be much better to make sure that these assumptions are not unrealistic.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"The authors propose a framework that utilizes the teacher-student setting and give some impressive evaluations on deep neural networks. This paper has rigorous theoretical analysis, but lacks necessary experiments.\", \"review\": \"The authors propose a framework that utilizes the teacher-student setting to evaluate deep locally connected ReLU networks. The framework explicitly formulates data distribution, which has not been considered by previous works. The authors also show that their framework is compatible with Batch Normalization and favors disentangled representation when data distributions have factorizable structures. Based on this framework, the authors re-explain some common issues of deep learning, such as overfitting.\\n\\nMy major concerns are as follows.\\n\\n1. The framework is based on the teacher-student setting, and the authors claim that \\\"the teacher generates classification label via a hidden computational graph\\\". However, how the teacher can be designed is not clear in the paper.\\n\\n2. The data distribution included in this paper is $P(z_{\\\\alpha}, z_{\\\\beta})$, where $z_{\\\\alpha}$ and $z_{\\\\beta}$ are all summarization variables. From this perspective, it only has an indirect connection with the original data distribution $P(x)$ or $P(x_{\\\\alpha}, x_{\\\\beta})$, and thus it could be questionable whether $P(z_{\\\\alpha}, z_{\\\\beta})$ is a convincing representation.\\n\\n3. The authors may want to conduct more experiments to better support their claims.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"review\", \"review\": \"This paper gives a model for understanding locally connected neural networks. The main idea seems to be that the network is sparsely connected, so each neuron is not going to have access to the entire input. One can then think about the gradient of this neuron locally while averaging out over all the randomness in the input locations that are not relevant to this neuron. Using this framework the paper tried to explain several phenomena in neural networks, including batch normalization, overfitting, disentangling, etc.\\n\\nI feel the paper is poorly written, which made it very hard to understand. For example, as the paper states, the model gives a generative model for input (x,y) pairs. However, I could not find a self-contained description of how this generative model works. Some things are described in Section 3.1 about the discrete summarization variables, but the short paragraph did not describe: (a) What is the \\\"multi-layer\\\" deterministic function? (b) How are these z_\\\\alpha's chosen? (c) Given z's how do we generate x? (d) What happens if we have z_\\\\alpha and z_\\\\beta and the regions \\\\alpha and \\\\beta are not disjoint? What x do we use in the intersection?\\n\\nIn trying to understand the paper, I was thinking that (a)(b) The multilayer deterministic function is a function which gives a tree structure over the z_\\\\alpha's, where y is the root.
(I have no idea why this should be a deterministic function; intuitively, shouldn't y be chosen randomly, and each z_\\alpha chosen randomly conditioned on its parent?) (c) there is a fixed conditional distribution of P(x_\\alpha|z_\\alpha), and I really could not figure out (d). The paper definitely seems to allow two receptive fields to intersect as in Figure 1(b).\\n\\nWithout understanding the generative model, it is impossible for me to evaluate the later results. My general comment here is that there are no clear Theorems that summarize the results (the Theorems in the paper are all just Lemmas that are trying to work towards the final goal of giving some explanations, but the explanations and assumptions are not formally written down). Looking at things separately (as again I couldn't understand the single paragraph describing the generative model), the Assumption in Theorem 3 seems extremely limiting as it is saying that x_j is a discrete distribution (which is probably never true in practice). I wouldn't say \\\"the model does not impose unrealistic assumptions\\\" in the abstract if you are going to assume this; rather the model just makes a different kind of unrealistic assumption (Assumptions in Theorem 2 might be much weaker, but it's hard to judge that).\\n\\n==== After reading the revision\\n\\nThe revised version is indeed clearer about how the teacher network works, and I have tried to understand the later parts of the paper again. The result of the paper really relies on the two assumptions in Theorem 2. Of the two assumptions, the first one seems to be intuitive (and it is OK although exact conditional independence might be slightly strong). The second assumption is very unclear though, as it is not an assumption that is purely about the model/teacher network (which are the x and z variables); it also has to do with the learning algorithm/student network (f's and g's). It is much harder to reason about the behavior of an algorithm on a particular model, and directly making an assumption about that in some sense hides the problem. The paper mentioned that the condition is true if z is fine-grained, but this is very vague - it is definitely true if z is super fine-grained to satisfy the assumption in Theorem 3, but that is too extreme.\\n\\nOverall I still feel the paper is a bit confusing and it would benefit from having a more concrete example. I like the direction of the work but I can't recommend acceptance at this stage.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Will add more related works in the next revision.\", \"comment\": \"Thanks for your comment!\\n\\nWe appreciate your interest in our paper. \\n\\nWe totally agree that there should be more related works in the submission, in particular the great work on the theoretical foundation of the information bottleneck. We will add them in the next revision.\", \"we_want_to_emphasize_that_there_is_one_substantial_difference_between_our_work_and_the_works_of_information_bottleneck\": \"we model data distribution as explicit terms in our reformulation of deep and locally connected nonlinear network. To our best knowledge, this is novel. By imposing different conditions on the data distribution, there could be many interesting consequences.
In our paper, we only barely scratch the surface.\", \"for_mathematical_rigorousness\": \"Given that the assumptions in the paper are true, to the best of our knowledge and efforts, all the statements named \\\"theorems\\\" in our paper are rigorous. You can check the Appendix for all the detailed proofs. Note that it is totally possible that we might make mistakes. If so, we would happily revise the paper and/or retract.\\n\\nWe are looking forward to your detailed comments.\"}", "{\"comment\": \"This is really an interesting and technical theoretical paper. I've added this submission to my reading list and detailed comments will come later. However, to my knowledge, the references in this paper are very insufficient and some related works are not cited. To my knowledge, the authors should cite papers like, for example, the following:\\n\\nAchille, Alessandro, and Stefano Soatto. \\\"Emergence of invariance and disentangling in deep representations.\\\"\\n\\nAnother thing I'm concerned about is that if this is not a seminal paper for our deep learning theory community, the title should not contain words like \\\"framework\\\". My question is, \\\"Have you really proposed a mathematically rigorous framework for deep ReLU network? \\\"\", \"title\": \"Insufficient Exposition of Previous Works\"}" ] }
SkgKzh0cY7
Unsupervised Video-to-Video Translation
[ "Dina Bashkirova", "Ben Usman", "Kate Saenko" ]
Unsupervised image-to-image translation is a recently proposed task of translating an image to a different style or domain given only unpaired image examples at training time. In this paper, we formulate a new task of unsupervised video-to-video translation, which poses its own unique challenges. Translating video implies learning not only the appearance of objects and scenes but also realistic motion and transitions between consecutive frames. We investigate the performance of per-frame video-to-video translation using existing image-to-image translation networks, and propose a spatio-temporal 3D translator as an alternative solution to this problem. We evaluate our 3D method on multiple synthetic datasets, such as moving colorized digits, as well as the realistic segmentation-to-video GTA dataset and a new CT-to-MRI volumetric images translation dataset. Our results show that frame-wise translation produces realistic results on a single frame level but underperforms significantly on the scale of the whole video compared to our three-dimensional translation approach, which is better able to learn the complex structure of video and motion and continuity of object appearance.
[ "Generative Adversarial Networks", "Computer Vision", "Deep Learning" ]
https://openreview.net/pdf?id=SkgKzh0cY7
https://openreview.net/forum?id=SkgKzh0cY7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BJleml9NyN", "HyekQszq27", "S1bVl93d27", "H1e426gJhX" ], "note_type": [ "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1543966743865, 1541184279034, 1541093868289, 1540455851937 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1283/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1283/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1283/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1283/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"In this work, a central idea introduced by CycleGAN is extended from 2D convolutions to 3D convolutions to ensure better consistency of style transfer across time. Authors demonstrate improvements on a variety of datasets in comparison to frame-by-frame style transfer.\", \"reviewer_pros\": [\"Seems to be effective at enforcing improved consistency over time\", \"Proposed medical dataset may be good contribution to community.\", \"Good quality evaluation\"], \"reviewer_cons\": [\"All reviewers felt the technical novelty was low.\", \"Some questions arose around quantitative results, left unanswered by authors.\", \"Experiments missing some baseline approaches\", \"Architecture limited to fixed length video segments\", \"Reviewer consensus is to reject. Authors are encouraged to continue their work and take into account suggestions made by reviewers, including adding additional comparison baselines\"], \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Reasonable extension of prior work to additional dimensions.\"}", "{\"title\": \"limited novelty\", \"review\": \"This paper present a spatio-temporal (i.e., 3D version) of Cycle-Consistent Adversarial Networks (CycleGAN) for unsupervised video-to-video translation. The evaluations on multiple datasets show the proposed model is better able to work for video translation in terms of image continuity and frame-wise translation quality.\\n\\nThe major contribution of this paper is extending the existing CycleGAN model from image-to-image translation and video-to-video translation using 3D convolutional networks, while it additionally proposes a total penalty term to the loss function. So I mainly concern that such contribution might be not enough for the ICLR quality.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"1) Summary\\nThis paper proposes a 3D convolutional neural network based architecture for video-to-video translation. The method mitigates the inconsistency problem present when image-to-image translation methods are used in the video domain. Additionally, they present a study of ways to better setting up batched for the learning steps during networks optimization for videos, and also, they propose a new MRI-to-CT dataset for medical volumetric image translation. The proposed method outperforms the image-to-image translation methods in most measures.\\n\\n\\n\\n2) Pros:\\n+ Proposed network architecture mitigates the pixel color discontinuity issues present in image-to-image translation methods.\\n+ Proposed a new MRI-to-CT dataset that could be useful for the community to have a benchmark on medical related research papers.\\n\\n3) Cons:\", \"limited_network_architecture\": [\"The proposed neural network architecture is limited to only generate the number of frames it was trained to generate. 
Usually, in video generation / translation / prediction we want to be able to produce any length of video. I acknowledge that the network can be re-used to continue generating number of frames that are multiples of what the network was trained to generate, but the authors have not shown this in the provided videos. I would be good if they can provide evidence that this can be done with the proposed network.\"], \"short_videos\": [\"Another limitation that is related to the previously mentioned issue is that the videos are short, which in video-to-video translation, it should not be difficult to generate longer videos. It is hard to conclude that the proposed method will work for large videos from the provided evidence.\"], \"lack_of_baselines\": [\"A paper from NVIDIA Research on video-to-video synthesis [1] (including the code) came out about a month before the ICLR deadline. It would be good if the authors can include comparison with this method in the paper revision. Other papers such as [2, 3] on image-to-image translation are available for comparison. The authors simply say such methods do not work, but show no evidence in the experimental section. I peeked at some of the results in the papers corresponding websites, and the videos look consistent through time. Can the authors comment on this if I am missing something?\"], \"additional_comments\": \"The authors mention in the conclusion that this paper proposes \\u201ca new computer vision task or video-to-video translation, as well as, datasets, metrics and multiple baselines\\u201d. I am not sure that video-to-video translation is new, as it has been done by the papers I mention above. Maybe I am misunderstanding the statement? If so, please clarify. Additionally, I am not sure how the metrics are new. Human evaluation has been done before, the video colorization evaluation may be somewhat new, but I do not think it will generalize to tasks other than colorization. Again, If I am misunderstanding the statement, please let me know in the rebuttal.\\n\\n\\n\\n4) Conclusion\\nThe problem tackled is a difficult one, but other papers that are not included in experiments have been tested on this task. The proposed dataset can be of great value to the community, and is a clearly important piece of this paper. I am willing to change the current score if they authors are able to address the issues mentioned above.\", \"references\": \"[1] Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. \\\"Video-to-Video Synthesis\\\". In NIPS, 2018.\\n[2] Xun Huang, Ming-Yu Liu, Serge Belongie, Jan Kautz. Multimodal Unsupervised Image-to-Image Translation. In ECCV, 2018.\\n[3] Ming-Yu Liu, Thomas Breuel, Jan Kautz. Unsupervised Image-to-Image Translation Networks. In NIPS, 2017.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Limited technical novelty\", \"review\": \"This paper proposes a spatio-temporal 3D translator for the unsupervised image-to-image translation task and a new CT-to-MRI volumetric images translation dataset for evaluation. Results on different datasets show the proposed 3D translator model outperforms per-frame translation model.\", \"pros\": [\"The proposed 3D translator can utilize the spatio-temporal information to keep the translation results consistent across time. 
Both color and shape information are preserved well.\", \"Extensive evaluation are done on different datasets and the evaluation protocols are designed well. The paper is easy to follow.\"], \"cons\": [\"The unsupervised video-to-video translation task has been tested by previous per-frame translation model, e.g. CycleGAN and UNIT. Results can be found on their Github project page. Therefore, unsupervised video-to-video translation is not a new task as clarified in the paper, although this paper is one of the pioneers in this task.\", \"The proposed 3D translator extend the CycleGAN framework to video-to-video translation task with 3D convolution in a straightforward way. The technical novelty of the paper is limited for ICLR. I think the authors are working on the right direction, but lots of improvement should be done.\", \"As to Table 4, I am confused about the the per-frame pixel accuracy results. Does the 3D method get lower accuracy than 2D method?\", \"As to the GTA segmentation->video experiments, the 3D translator seems cause more artifacts than the 2D method (page 11,12). Also, the title of the figure on page 11 should both be \\u201cGTA segmentation->video\\u201d\", \"Overall, the technical innovation of this paper is limited and the results are not good enough. I vote for rejection.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
SkGuG2R5tm
Spreading vectors for similarity search
[ "Alexandre Sablayrolles", "Matthijs Douze", "Cordelia Schmid", "Hervé Jégou" ]
Discretizing floating-point vectors is a fundamental step of modern indexing methods. State-of-the-art techniques learn parameters of the quantizers on training data for optimal performance, thus adapting quantizers to the data. In this work, we propose to reverse this paradigm and adapt the data to the quantizer: we train a neural net whose last layers form a fixed parameter-free quantizer, such as pre-defined points of a sphere. As a proxy objective, we design and train a neural network that favors uniformity in the spherical latent space, while preserving the neighborhood structure after the mapping. For this purpose, we propose a new regularizer derived from the Kozachenko-Leonenko differential entropy estimator and combine it with a locality-aware triplet loss. Experiments show that our end-to-end approach outperforms most learned quantization methods, and is competitive with the state of the art on widely adopted benchmarks. Furthermore, we show that training without the quantization step results in almost no difference in accuracy, but yields a generic catalyser that can be applied with any subsequent quantization technique.
[ "dimensionality reduction", "similarity search", "indexing", "differential entropy" ]
https://openreview.net/pdf?id=SkGuG2R5tm
https://openreview.net/forum?id=SkGuG2R5tm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "H1gkAJGeg4", "BkgZlICzyN", "rygYNIkTRm", "Ske30l-iAm", "S1g23fYDRX", "BJeQGL2HCQ", "S1epFsSOpm", "SJlQccS_TQ", "BygLXqBd6m", "Bkxs4_VP2m", "B1x0Ka5M2X", "HkeDt3GOsQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544720326528, 1543853544565, 1543464496610, 1543340243693, 1543111347932, 1542993418542, 1542114180851, 1542113930705, 1542113821615, 1540995122940, 1540693382312, 1540004991397 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1282/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1282/Authors" ], [ "ICLR.cc/2019/Conference/Paper1282/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1282/Authors" ], [ "ICLR.cc/2019/Conference/Paper1282/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1282/Authors" ], [ "ICLR.cc/2019/Conference/Paper1282/Authors" ], [ "ICLR.cc/2019/Conference/Paper1282/Authors" ], [ "ICLR.cc/2019/Conference/Paper1282/Authors" ], [ "ICLR.cc/2019/Conference/Paper1282/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1282/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1282/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \". Describe the strengths of the paper. As pointed out by the reviewers and based on your expert opinion.\\n\\n- The proposed method is novel and effective\\n- The paper is clear and the experiments and literature review are sufficient (especially after revision).\\n \\n2. Describe the weaknesses of the paper. As pointed out by the reviewers and based on your expert opinion. Be sure to indicate which weaknesses are seen as salient for the decision (i.e., potential critical flaws), as opposed to weaknesses that the authors can likely fix in a revision.\\n\\nThe original weaknesses (mainly clarity and missing details) were adequately addressed in the revisions.\\n\\n3. Discuss any major points of contention. As raised by the authors or reviewers in the discussion, and how these might have influenced the decision. If the authors provide a rebuttal to a potential reviewer concern, it\\u2019s a good idea to acknowledge this and note whether it influenced the final decision or not. This makes sure that author responses are addressed adequately.\\n\\nNo major points of contention.\\n\\n4. If consensus was reached, say so. Otherwise, explain what the source of reviewer disagreement was and why the decision on the paper aligns with one set of reviewers or another.\\n\\nThe reviewers reached a consensus that the paper should be accepted.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"novel and effective method for more effective quantisation of vectorial representations\"}", "{\"title\": \"End2end uses a quantization layer at training time\", \"comment\": \"Thank you for the feedback. The method \\\"Catalyst + Lattice + end2end\\\" refers to using a quantization layer during training with the straight-through estimator described in Section 4.2. In contrast, the version \\\"Catalyst + Lattice\\\" also optimizes Eqn (4) but without including the quantization layer during training. 
We will clarify this point in the manuscript, and add the precisions regarding the architecture as suggested above.\"}", "{\"title\": \"Thank you for the detailed explanation\", \"comment\": \"The above explanation has been very helpful and would be a great addendum to the manuscript.\", \"minor_question\": \"In table 1, what is 'Catalyst + Lattice + end2end' and how is it different from 'Catalyst + Lattice'? I have been unable to find an explanation anywhere in the manuscript.\"}", "{\"title\": \"The architecture was cross-validated\", \"comment\": \"Thank you for your feedback.\\n\\n1) Regarding the architecture, we discarded many choices that are tailored to specific assumptions (like convolutional layers that assume spatial or temporal shift-invariance), since we do not make these assumptions about our data. Then, we started from a simple multi-layer perceptron, with the standard ReLU non-linearity and pre-activation batch norm. We chose to keep the same width at each layer, and varied depth and width to find optimal parameters (see details below).\", \"we_also_tried_several_variants_of_the_architecture_and_the_training_regime_that_did_not_improve_performance_or_made_it_worse\": \"* other types of architectures (we tested expansion: 128-256-512, 512-1024; reduction: 64-48-32, 1024-512; and both: 128-256-512-256-128-64)\\n* sampling new training points every nth epoch (tried every epoch, every 3, 10, 30 epochs) \\n* varying learning rate decay (factors 1, 1.01, 1.05 or 1.1)\\n* variants of hard negative mining: only using negatives for which there is no positive closer to the query point\\n\\nNone of these variants improved above the architecture/schedule, which had the advantage of being conceptually simpler.\\nThe parameters reported in the paper were the best we found with the amount of training data available at hand and with our standard optimization scheme. To conclude on this point, on one side it is difficult to answer the question \\\"if/how we can improve it\\\" for most works in deep learning. What we suggested for the architecture is the best choice we found to date. On the other side, from our experiments, we generally observed that the choice of the discretization layer has more impact on the overall performance of the system than the choice of a particular structure for the catalyzer (compare binary and lattice for instance), therefore there is likely some margin on improvement in this direction. \\n\\n2) \\\"The performance improves when augmenting the width of the network, but with diminishing returns.', how was the width of the network(s) used for the empirical evaluation selected. Were they fixed to some value (if so, why) or were they cross-validated over?\\\"\\n\\nWe conducted preliminary experiments on the validation set, varying the depth of the network from 0 to 6 latent layers, and the size of the latent layers from 128 to 2048 by doubling it every time. The best results with respect to depth were obtained with 2 latent layers; doubling the width provided increments of 2 - 5 points until 512, then 512 \\u2192 1024 resulted in a 0.5 improvement and 1024\\u2192 2048 resulted in a negligible improvement while having a longer runtime.\"}", "{\"title\": \"Thank you for the response\", \"comment\": \"Thank you for your response regarding the training times. However, the response regarding the network architecture leaves me unsatisfied. The response (mostly) tells me how the proposed architecture works. 
It does not really tell me how the authors landed on this architecture. Without knowing how we got here, it is hard to know if/how we can improve it.\\n\\nAlso, related to 'The performance improves when augmenting the width of the network, but with diminishing returns.', how was the width of the network(s) used for the empirical evaluation selected. Were they fixed to some value (if so, why) or were they cross-validated over?\"}", "{\"title\": \"Updated paper\", \"comment\": \"We updated the paper to correct typos, add some precisions and include the references suggested by the reviewers.\"}", "{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for their review and comments. We provide detailed answers below.\\n\\n\\\"My main concerns come from experimental results.\\\"\\nUpon publication of the paper, we will release the code that replicates the experiments. \\n\\n\\\"(1) Table 1: where are the results of OPQ and LSQ from? [...] It is not consist to the LSQ paper\\\"\\nIn our experiments, we used the reference public implementation from the authors of LSQ [1]. The discrepancy in the reported 64-bit recall1@1 comes from the fact that the datasets are different: we use Bigann1m (28.4 recall) whereas the LSQ paper reports results on Sift1m. We conducted experiments on Bigann1m because the training set associated with Sift1m is too small (100k vectors) for learning the catalyzer. As a sanity check, we re-ran the code of [1] on Sift1m and obtained 28.99, which is consistent with the results reported by [A] (Table 1, LSQ, 29.37) and [Martinez et al, 2018] (Figure 3, LSQ SR-C and SR-D, ~28; Table 4 corresponds to another experimental setting).\\n\\n\\u201c(2) Figure 5: similarly, how did you get the results of PQ and OPQ?\\u201d\\nWe used the open-source Faiss library [2] to obtain the results of PQ and OPQ. This library is used as a reference implementation in recent papers like [D, E]. There is a comparison point with [F] on Deep1M at 64 bit: the Faiss implementation of OPQ obtains recall@1 = 15.6 vs 16.1 in [F] (table 1). \\n\\n\\u201c(3) There are some other advanced algorithms: e.g., additive quantization [B] and composite quantization [C]\\u201d\\nWe did not compare directly to AQ and CQ as it was shown that they underperform LSQ by some margin (Table 1 in [A]). Besides, in general, we insist in the paper that the encoding time for additive quantization methods is at least an order of magnitude slower than product quantization and our catalyzer + lattice (122s for LSQ vs < 10s for PQ/Lattice, cf. Table 1).\\n\\nReferences\\n[1] https://github.com/una-dinosauria/local-search-quantization\\n[2] https://github.com/facebookresearch/faiss\\n\\n[A] Revisiting additive quantization, Martinez et al., ECCV'2016\\n[B] Additive Quantization for Extreme Vector Compression, Babenko & Lempitsky, CVPR'2014\\n[C] Composite Quantization, J Wang et al. ICML'14\\n[D] Link and code: Fast indexing with graphs and compact regression codes, Douze et al, CVPR'18\\n[E] Revisiting the Inverted Indices for Billion-Scale Approximate Nearest Neighbors, Baranchuk et al, ECCV'18\\n[F] AnnArbor: Approximate Nearest Neighbors Using Arborescence Coding, Babenko & Lempitsky, ICCV'17\"}", "{\"title\": \"Response to Review #3\", \"comment\": \"We thank the reviewer for their comments.\\n\\n\\u201cThe training times for the catalyzer is never discussed in this manuscript. [...] 
Moreover, it is not clear if the inference time of the catalyzer is included in the results such as Table 1.\\u201d \\nTraining takes between 2 and 3 hours on a CPU machine using 20 cores, and the reported query timings take into account inference. \\n\\n\\u201cOne important point not discussed in this manuscript is the choice of the structure (architecture) of the catalyzer. Is the catalyzer architecture dependent on the data?\\u201d\\nGenerally, we observe that beyond 3 layers there is no improvement in accuracy. The performance improves when augmenting the width of the network, but with diminishing returns. We use the same architecture across datasets. We successfully used the same architecture on other datasets, but we report results here on the standard datasets of the field.\\n\\n\\u201cWhat is it about the proposed architecture that makes it sufficient for all data sets?\\u201d\\nWe have observed that the dimensions of the hidden layers in our architecture provide enough representation power for the model to be performant across all the datasets we have tested (those of the paper plus some internal datasets).\\n\\n\\u201cIs the parameter r in the rank loss same as the norm r in the lattice quantizer?\\u201d\\nThe parameter r is not the same as the norm r of the lattice quantizer, we thank the reviewer for spotting this, we will update the paper to lift this ambiguity.\"}", "{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for their review. Upon publication of the paper, we will open-source the code that replicates the experiments. In the meantime, we provide more details below:\\n\\n1) \\u201cin the related work overview it would be good to also check possible connections with optimal transport methods using entropy regularization.\\u201c \\nIn the related work, we mention Bojanowski & Joulin (2017), who use optimal transport (without entropy regularization) to match images with random points on the sphere. The entropy regularization in optimal transport is a bit different than our entropy regularization as it is used mainly for speed purposes, whereas our entropy regularization provides a trade-off between the quality of nearest neighbors and how spread-out the output of the neural network is.\\n\\n2) \\u201cat some points in the paper, e.g. section 3.3, the authors mention Voronoi cells. However, in the related work in section 2 vector quantization and self-organizing maps have not been mentioned.\\u201d\\nWe will update the related work with these references. In the context of section 3.3, Voronoi cells correspond to the quantization cells of the lattice.\\n\\n3) \\u201cmore details on the optimization or learning algorithms for eq (3)(4) should be given. The loss function is non-smooth and rather complicated. What are the implications on the learning algorithm when training neural networks? Is it important to have a good initialization or not?\\u201d\\nWe found that standard practice for training neural networks worked quite well in our setting (even though we have no guarantee of getting to the global minimum of the objective function). More specifically, we train our networks with Stochastic Gradient descent with an initial learning rate of 0.5, momentum of 0.9, and decay the learning rate when the validation accuracy does not go up for an epoch. We did not need specific initialization to make the networks converge.\\n\\n4) \\u201cHow reproducible are the results? 
In Table 1 only one number in each column is shown while eqs (3)(4) are non-convex problems. Is it the best result of several runs or an average that is reported in the Table? \\u201c\\nOur preliminary experiments have shown that the difference in performance between different trainings is very small (despite the problems being non-convex). Therefore we train only once per set of hyper-parameters (d_out and lambda), and report the corresponding result. Our open-source code will reproduce these results up to the (very small) variations due to random initialization and mini-batch sampling.\"}", "{\"title\": \"Spreading vectors for similarity search\", \"review\": \"The authors propose a method to adapt the data to the quantizer, instead of having to work with a difficult to optimize discretization function. The contribution is interesting.\", \"additional_comments_and_suggestions\": [\"in the related work overview it would be good to also check possible connections with optimal transport methods using entropy regularization.\", \"at some points in the paper, e.g. section 3.3, the authors mention Voronoi cells. However, in the related work in section 2 vector quantization and self-organizing maps have not been mentioned.\", \"more details on the optimization or learning algorithms for eq (3)(4) should be given. The loss function is non-smooth and rather complicated. What are the implications on the learning algorithm when training neural networks? Is it important to have a good initialization or not?\", \"How reproducible are the results? In Table 1 only one number in each column is shown while eqs (3)(4) are non-convex problems. Is it the best result of several runs or an average that is reported in the Table?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Well motivated novel idea; excellent results\", \"review\": \"Pros\\n----\\n\\n[Originality]\\nThe authors propose a novel idea of learning representations that improves the performance of the subsequent fixed discretization method.\\n\\n[Clarity]\\nThe authors clearly motivate their solution and explain the different ideas and enhancements introduced. The manuscript is fairly easy to follow. The different terms in the optimization problem are clearly explained and their individual behaviour are presented for the better understanding.\\n\\n[Significance]\\nThe empirical results for the proposed scheme are compared against various baselines under various scenarios and the results demonstrate the significant utility of the proposed scheme.\\n\\nLimitations\\n-----------\\n\\n[Clarity]\\nThe training times for the catalyzer is never discussed in this manuscript (even relative to the training times of the considered baselines). Moreover, it is not clear if the inference time of the catalyzer is included in the results such as Table 1. Even if, PQ and the catalyzer+lattice might have comparable search recalls, it would be good to understand the relative search times to get similar accuracy especially since the inference time for the catalyzer (which is part of the search time) can be fairly significant.\\n\\n[Clarity/Significance]\\nOne important point not discussed in this manuscript is the choice of the structure (architechture) of the catalyzer. 
Is the catalyzer architecture dependent on the data?\\n - If yes, how to find an appropriate architecture?\\n - If no, what is it about the proposed architecture that makes it sufficient for all data sets?\\nIn my opinion, this is extremely important since this drives the applicability of the proposed scheme beyond the presented examples.\\n\\n[Minor question]\\n- Is the parameter r in the rank loss same as the norm r in the lattice quantizer? This is a bit confusing.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Problematic experimental results\", \"review\": \"The idea, transforming the input data to an output space in which the data is distributed uniformly and thus indexing is easier, is interesting.\\n\\nMy main concerns come from experimental results.\\n\\n(1) Table 1: where are the results of OPQ and LSQ from? run the codes by the authors of this paper? or from the original paper?\\n\\nIt is not consistent to the LSQ paper (https://www.cs.ubc.ca/~julm/papers/eccv16.pdf). For BigANN1M, from the LSQ paper, the result is >29 recall at 1 for 64 bits. \\n\\n(2) Figure 5: similarly, how did you get the results of PQ and OPQ?\\n\\n(3) There are some other advanced algorithms: e.g., additive quantization (Babenko & Lempitsky, 2014) and composite quantization (https://arxiv.org/abs/1712.00955)\\n\\nThe above points make it hard to judge this paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
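For readers who want a concrete picture of the catalyzer discussed in the record above, the following is a minimal PyTorch sketch of the idea: an MLP maps input vectors onto the unit sphere, after which a fixed, parameter-free quantizer (binary signs, a spherical lattice, ...) can be applied. The class and function names, layer sizes, margin, and the simplified entropy term are illustrative assumptions for this sketch, not the authors' implementation or hyper-parameters.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Catalyzer(nn.Module):
        """Sketch: an MLP that maps d_in-dimensional vectors onto the unit sphere in
        d_out dimensions; a fixed quantizer is applied to its output afterwards."""
        def __init__(self, d_in=128, d_hidden=1024, d_out=24):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(d_in, d_hidden), nn.BatchNorm1d(d_hidden), nn.ReLU(),
                nn.Linear(d_hidden, d_hidden), nn.BatchNorm1d(d_hidden), nn.ReLU(),
                nn.Linear(d_hidden, d_out),
            )

        def forward(self, x):
            return F.normalize(self.net(x), dim=-1)  # project onto the sphere

    def triplet_loss(anchor, positive, negative, margin=0.1):
        # locality-aware term: keep true neighbors closer than non-neighbors after the mapping
        d_pos = (anchor - positive).pow(2).sum(-1)
        d_neg = (anchor - negative).pow(2).sum(-1)
        return F.relu(d_pos - d_neg + margin).mean()

    def uniformity_regularizer(z, eps=1e-8):
        # Kozachenko-Leonenko-style term: penalize small nearest-neighbor distances
        # within the batch so that points spread out over the sphere
        n = z.shape[0]
        dist = torch.cdist(z, z) + torch.eye(n, device=z.device) * 1e6  # mask self-distances
        return -torch.log(dist.min(dim=1).values + eps).mean()

Training would then minimize triplet_loss plus lambda times uniformity_regularizer over sampled triplets, with lambda playing the role of the trade-off hyper-parameter mentioned in the author responses above.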
BkedznAqKQ
LanczosNet: Multi-Scale Deep Graph Convolutional Networks
[ "Renjie Liao", "Zhizhen Zhao", "Raquel Urtasun", "Richard Zemel" ]
We propose the Lanczos network (LanczosNet), which uses the Lanczos algorithm to construct low-rank approximations of the graph Laplacian for graph convolution. Relying on the tridiagonal decomposition of the Lanczos algorithm, we not only efficiently exploit multi-scale information via fast approximate computation of matrix powers but also design learnable spectral filters. Being fully differentiable, LanczosNet facilitates both graph kernel learning and learning node embeddings. We show the connection between our LanczosNet and graph-based manifold learning, especially diffusion maps. We benchmark our model against $8$ recent deep graph networks on citation datasets and the QM8 quantum chemistry dataset. Experimental results show that our model achieves state-of-the-art performance in most tasks.
[ "Lanczos Network", "Graph Neural Networks", "Deep Graph Convolutional Networks", "Deep Learning on Graph Structured Data", "QM8 Quantum Chemistry Benchmark" ]
https://openreview.net/pdf?id=BkedznAqKQ
https://openreview.net/forum?id=BkedznAqKQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "S1gOJ3wWxV", "SklJrmGZA7", "BJeBk7f-0Q", "SJghoMfW0Q", "r1eh_ffWAm", "S1lEn5RRhQ", "ryxJEZ4Rhm", "r1llrOIv2Q" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544809439565, 1542689591127, 1542689500993, 1542689443733, 1542689396302, 1541495467545, 1541452070746, 1541003320316 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1281/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1281/Authors" ], [ "ICLR.cc/2019/Conference/Paper1281/Authors" ], [ "ICLR.cc/2019/Conference/Paper1281/Authors" ], [ "ICLR.cc/2019/Conference/Paper1281/Authors" ], [ "ICLR.cc/2019/Conference/Paper1281/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1281/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1281/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers unanimously agreed that the paper was a significant advance in the field of machine learning on graph-structured inputs. They commented particularly on the quality of the research idea, and its depth of development. The results shared by the researchers are compelling, and they also report optimal hyperparameters, a welcome practice when describing experiments and results.\\n\\nA small drawback the reviewers highlighted is the breadth of the content in the paper, which gave the impression of a slight lack of focus. Overall, the paper is a clear advance, and I recommend it for acceptance.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Good paper, recommend for acceptance.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for the comments! We will improve the writing and make the main contributions more clear.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for the careful reading and the constructive comments! We will improve the writing and make the paper more accessible in terms of main contributions. Additionally, we would like to clarify a few raised questions as below.\", \"q1\": \"What gets fundamentally different from polynomial filters proposed in other graph convnets architectures?\", \"a1\": \"We mainly compare with the Chebyshev polynomial filter since it is the most frequently used and also has the nice orthogonality property.\\n\\nFirst, Chebyshev polynomial filters can be regarded as a special case of our learnable spectral filters. The expansion of the Chebyshev recursion manifests that the filtering lies in a Krylov subspace of which the eigenbasis can be achieved by Lanczos algorithm. Therefore, recovering Chebyshev polynomial filters reduces to recovering the specific coefficients of polynomials which can be achieved by a multi-layer perceptron (MLP) due to its universal approximation power.\\n\\nSecond, we decouple the order of polynomial and the number of eigenbasis which is not the case for Chebyshev polynomial. Recall that computing K-th order Chebyshev polynomial, i.e., finding K basis vectors, requires running the recursion K times. However, we can run the Lanczos algorithm for M steps, e.g., M < K, to get M basis vectors. Then we can easily get the K-th order polynomial by directly raising the K-th power of Ritz values.\\n\\nWe will discuss more on this difference in our later version.\", \"q2\": \"What happens when the graph change? Do the learned features make sense on different graphs? And if yes, why? 
If not, the authors should be more explicit in their presentation.\", \"a2\": \"Like many other graph convolutional networks, learnable parameters of our model do not depend on any graph specific quantities, like the number of nodes or edges, thus permitting generalization over different graphs. Moreover, in our QM8 experiments, different molecules are indeed different graphs. Therefore, the experimental results empirically verify that our learned features can generalize to different graphs. In terms of why they generalize, we currently do not have a satisfying answer as it requires deep understanding of the data distribution, model expressiveness and non-trivial inequality techniques for proving a useful generalization bound. Intuitively, the successful generalization may be due to the fact that our model does capture some patterns of sub-graphs within the molecules. These patterns may frequently appear in different molecules and determine the physical and chemical properties which link to the final predicted energy. We will improve our presentation regarding to this point.\", \"q3\": \"What is the complexity of the proposed methods? that should be minimally discussed (at least), as it is part of the key motivations for the proposed algorithms.\", \"a3\": \"It is hard to describe the overall time complexity in a concise manner as it requires lengthy notation. For the Lanczos algorithm alone, assuming the graph has N nodes, the most computationally expensive operation of our Algorithm 1 is the matrix vector product in line 4 which generally costs O(N^2) per step. If we further assume the algorithm runs for K steps, then the overall time complexity is O(K(N^2)). It is economical since a single graph convolution operation in any graph convnets is also generally O(N^2). In contrast, the eigen decomposition is generally O(N^3). We will discuss this in the later version.\", \"q4\": \"How is the learning done in 3.2? If there is any learning at all? (btw, S below Eq (6) is a poor notation choice, as S is used earlier for something else).\", \"a4\": \"For the spectral filter, the learning is done via learning the MLP which maps Ritz values R to R_hat, i.e., f as described above Eq. (5). S below Eq (6) is actually in different font style. We will change the notation to improve the presentation.\", \"q5\": \"The results are not very impressive - they are good, but not stellar, and could benefit from showing an explicit tradeoff in terms of complexity too?\", \"a5\": \"We have partially updated experimental results by adding spectral filters in a layer-wise manner. Please refer to our common response. We will also show the run-time in the later version to contrast these methods.\"}", "{\"title\": \"Response\", \"comment\": \"Thanks for the comments! We have not tried Arnoldi algorithm since we only deal with undirected graphs in the current applications which have symmetric graph Laplacians. Unlike Lanczos algorithm which has error bounds and monotonic convergence properties, Arnoldi algorithm is not well understood since eigenvalues of non-symmetric matrix may be complex and/or badly conditioned. Nonetheless, efficient implementation of Arnoldi algorithm exists. We will explore it in the future.\"}", "{\"title\": \"Common Response\", \"comment\": \"We thank all the reviewers for the careful reading and the constructive comments. 
During the rebuttal period, we extended our current model by adding spectral filters for multiple layers, whereas only the first layer contains spectral filters in the submitted version. We show the average results over 3 runs with different random initializations on QM8 as below. Note that experiments of our AdaLanczosNet are still ongoing. We will update this in the later version of our paper.\\n\\n----------------------------------------------------------------\\nMethods | Validation MAE | Test MAE |\\n----------------------------------------------------------------\\nGCN-FP | 15.06 +- 0.04 | 14.80 +- 0.09 |\\n----------------------------------------------------------------\\nGGNN | 12.94 +- 0.05 | 12.67 +- 0.22 |\\n----------------------------------------------------------------\\nDCNN | 10.14 +- 0.05 | 9.97 +- 0.09 |\\n----------------------------------------------------------------\\nChebyNet | 10.24 +- 0.06 | 10.07 +- 0.09 |\\n----------------------------------------------------------------\\nGCN | 11.68 +- 0.09 |11.41 +- 0.10 |\\n----------------------------------------------------------------\\nMPNN | 11.16 +- 0.13 | 11.08 +- 0.11 |\\n----------------------------------------------------------------\\nGraphSAGE | 13.19 +- 0.04 | 12.95 +- 0.11 |\\n----------------------------------------------------------------\\nGAT | 11.39 +- 0.09 | 11.02 +- 0.06 |\\n----------------------------------------------------------------\\nLanczosNet | 9.65 +- 0.19 | 9.58 +- 0.14 |\\n----------------------------------------------------------------\"}", "{\"title\": \"Paper brings insights and develops novel techniques for graph convolutional networks based on the Lanczos algorithm.\", \"review\": \"The paper under review builds useful insights and novel methods for graph convolutional networks, based on the Lanczos algorithm for efficient computations involving the graph Laplacian matrices induced by the neighbor edge structure of graph networks.\\n\\nWhile previous work [35] has explored the Lanczos algorithm from numerical linear algebra as a means to accelerate computations in graph convolutional networks, the current paper goes further by:\\n(1) exploring in significant more depth the low rank decomposition underlying the Lanczos algorithm.\\n(2) learning the spectral filter (beyond the Chebychev design) and potentially also the graph kernel and node embedding.\\n(3) drawing interesting connections with graph diffusion methods which naturally arise from the matrix power computation inherent to the Lanczos iteration.\", \"the_paper_includes_a_systematic_evaluation_of_the_proposed_approach_and_comparison_with_existing_methods_on_two_tasks\": \"semi-supervised learning in citation networks and molecule property prediction from interactions in atom networks. The main advantage of the proposed method as illustrated in particular by the experimental results in the citation network domain is its ability to generalize well in the presence of a small amount of training data, which the authors attribute to its efficient capturing of both short- and long-range interactions.\\n\\nIn terms of presentation quality, the paper is clearly written, the proposed methods are well explained, and the notation is consistent.\\n\\nOverall, a good paper.\", \"minor_comment\": \"page 3, footnote: \\\"When faced with a non-symmetric matrix, one can resort to the Arnoldi algorithm.\\\": I was wondering if the authors have tried that? 
I think that the Arnoldi algorithm for non-symmetric matrices are significantly less stable than their Lanczos counterparts for symmetric matrices.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"interesting ideas, but sometimes all over the place\", \"review\": [\"This paper proposes to use a Lanczos alogrithm, to get approximate decompositions of the graph Laplacian, which would facilitate the computation and learning of spectral features in graph convnets. It further proposes an extension with back propagation through the Lanczos algorithm, in order to train end to end models.\", \"Overall, the idea of using Lanczos algorithm to bypass the computation of the eigendecomposition, and thus simplify filtering operations in graph signal processing is not new [e.g., 35]. However, using this algorithm in the framework of graph convents is new, and certainly interesting. The authors seem to claim that their method permits to learn spectral filters, what other methods could not do - this is not completely true and should probably be rephrased more clearly: many graph convnets, actually learn features.\", \"The general construction and presentation of the algorithms are generally clear, and pretty complete. A few things that could be clarified are the following:\", \"in the spectral filters of Eq (4), what gets fundamentally different from polynomial filters proposed in other graph convnets architectures?\", \"what happens when the graph change? Do the learned features make sense on different graphs? And if yes, why? If not, the authors should be more explicit in their presentation\", \"what is the complexity of the proposed methods? that should be minimally discussed (at least), as it is part of the key motivations for the proposed algorithms\", \"how is the learning done in 3.2? If there is any learning at all? (btw, S below Eq (6) is a poor notation choice, as S is used earlier for something else)\", \"the results are not very impressive - they are good, but not stellar, and could benefit from showing an explicit tradeoff in terms of complexity too?\", \"The discussion in the related work, and the analogy with manifold learning are interesting. However, that brings probably to one of the main issues with the papers - the authors are obviously very knowledgeable in graph convnets, graph signal processing, and optimisation. However, there are really too many things in this paper, which leads to numerous shortcuts, and some time confusion. Given the page limits, not everything can be treated with the level of details that it would deserve. It might be good to consider trimming down the paper to its main and core aspects for the next version.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Novel approach to graph neural networks with strong empirical evaluation\", \"review\": \"The authors propose a novel method for learning graph convolutional networks. The core idea is to use the Lanczos algorithm to obtain a low-rank approximation of the graph Laplacian. The authors propose two ways to include the Lanczos algorithm. First, as a preprocessing step where the algorithm is applied once on the input graph and the resulting approximation is fixed during learning. 
Second, by including a differentiable version of the algorithm into an end-to-end trainable model.\\n\\nThe proposed method is novel and achieves good results on a set of experiments. \\n\\nThe authors discuss related work in a thorough and meaningful manner. \\n\\nThere is not much to criticize. This is a very good paper. The almost 10 pages are perhaps a bit excessive considering there was an (informal) 8 page limit. It might make sense to provide a more accessible discussion of the method and Theorem 1, and move some more detailed/technical parts in pages 4, 5, and 6 to an appendix.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
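Since both the abstract and the reviews above revolve around replacing an eigendecomposition with the Lanczos algorithm, a generic NumPy sketch of the K-step Lanczos tridiagonalization may be useful for reference. It is the textbook procedure for a symmetric matrix such as a graph Laplacian, not the authors' implementation; the random start vector and the stopping tolerance are arbitrary choices here.

    import numpy as np

    def lanczos(L, K, v0=None, eps=1e-8):
        """K-step Lanczos tridiagonalization of a symmetric matrix L.
        Returns an orthonormal basis Q (N x K) and a tridiagonal T (K x K) with
        Q^T L Q ~= T; the eigenvalues of T (Ritz values) approximate eigenvalues of L."""
        N = L.shape[0]
        q = np.random.randn(N) if v0 is None else np.asarray(v0, dtype=float)
        q = q / np.linalg.norm(q)
        Q = np.zeros((N, K))
        alpha = np.zeros(K)  # diagonal of T
        beta = np.zeros(K)   # off-diagonal of T
        q_prev = np.zeros(N)
        for j in range(K):
            Q[:, j] = q
            w = L @ q  # one matrix-vector product per step: O(N^2) dense, O(|E|) sparse
            alpha[j] = q @ w
            w = w - alpha[j] * q
            if j > 0:
                w = w - beta[j - 1] * q_prev
            beta[j] = np.linalg.norm(w)
            if beta[j] < eps:  # invariant subspace reached; stop early
                Q, alpha, beta = Q[:, :j + 1], alpha[:j + 1], beta[:j + 1]
                break
            q_prev, q = q, w / beta[j]
        T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
        return Q, T

Low powers of L needed for multi-scale filtering can then be approximated via L^k ~ Q T^k Q^T, which is why the author response above quotes an overall cost of O(K N^2) for K steps in place of the O(N^3) of a full eigendecomposition.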
Bkeuz20cYm
Double Neural Counterfactual Regret Minimization
[ "Hui Li", "Kailiang Hu", "Zhibang Ge", "Tao Jiang", "Yuan Qi", "Le Song" ]
Counterfactual regret minimization (CFR) is a fundamental and effective technique for solving imperfect information games. However, the original CFR algorithm only works for discrete state and action spaces, and the resulting strategy is maintained as a tabular representation. Such a tabular representation limits the method from being directly applied to large games and from continuing to improve from a poor strategy profile. In this paper, we propose a double neural representation for imperfect information games, where one neural network represents the cumulative regret, and the other represents the average strategy. Furthermore, we adopt the counterfactual regret minimization algorithm to optimize this double neural representation. To make neural learning efficient, we also develop several novel techniques including a robust sampling method, mini-batch Monte Carlo counterfactual regret minimization (MCCFR) and Monte Carlo counterfactual regret minimization plus (MCCFR+), which may be of independent interest. Experimentally, we demonstrate that the proposed double neural algorithm converges significantly better than the reinforcement learning counterpart.
[ "Counterfactual Regret Minimization", "Imperfect Information game" ]
https://openreview.net/pdf?id=Bkeuz20cYm
https://openreview.net/forum?id=Bkeuz20cYm
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Bkld_Jukg4", "BkldrwZYAQ", "BJgrV-lPCX", "rJeJ5EIBA7", "H1gYO-8SC7", "rJx3ZW8HRm", "rJghPlLrRQ", "Syg2MlIBAQ", "BJx98CBB0Q", "Hklezu2Z6Q", "BJllmKgjn7", "BkxiXMw927", "B1lHz-hstQ", "B1ginbdjY7", "SJeKcZdsYX", "SkgWej4oYX", "H1g5hL4jK7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "comment", "official_comment", "official_comment", "comment", "comment" ], "note_created": [ 1544679280125, 1543210816218, 1543074092645, 1542968454672, 1542967665452, 1542967555663, 1542967396013, 1542967316425, 1542966866132, 1541683207952, 1541241112203, 1541202466590, 1538142477058, 1538126258790, 1538126225117, 1538112233040, 1538111154051 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1280/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1280/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1280/Authors" ], [ "ICLR.cc/2019/Conference/Paper1280/Authors" ], [ "ICLR.cc/2019/Conference/Paper1280/Authors" ], [ "ICLR.cc/2019/Conference/Paper1280/Authors" ], [ "ICLR.cc/2019/Conference/Paper1280/Authors" ], [ "ICLR.cc/2019/Conference/Paper1280/Authors" ], [ "ICLR.cc/2019/Conference/Paper1280/Authors" ], [ "ICLR.cc/2019/Conference/Paper1280/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1280/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1280/AnonReviewer1" ], [ "~Marc_Lanctot1" ], [ "ICLR.cc/2019/Conference/Paper1280/Authors" ], [ "ICLR.cc/2019/Conference/Paper1280/Authors" ], [ "~Marc_Lanctot1" ], [ "~Marc_Lanctot1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers agreed that there are some promising ideas in this work, and useful empirical analysis to motivate the approach. The main concern is in the soundness of the approach (for example, comments about cumulative learning and negative samples). The authors provided some justification about using previous networks as initialization, but this is an insufficient discussion to understand the soundness of the strategy. The paper should better discuss this more, even if it is not possible to provide theory. The paper could also be improved with the addition of a baseline (though not necessarily something like DeepStack, which is not publicly available and potentially onerous to reimplement).\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Interesting ideas with better motivation needed for soundness\"}", "{\"title\": \"Fig 5 is nice addition, but still missing comparison in large games\", \"comment\": \"The authors have provided a welcome new analysis in Fig. 5, in which performance in larger games was investigated (up to stack of size 15) and the compression/generalization ability of the neural net is displayed.\\n\\nWhile the ablation analyses and empirical investigations of the proposed method itself are quite thorough, there is still no comparison of the proposed method against a baseline neural method (e.g. something along the lines of DeepStack) on a game\\u00a0of realistically large size.\\n\\nI will thus keep my score the same.\"}", "{\"title\": \"Paper revision 2\", \"comment\": \"In addition to our revision 1, which we perform the proposed double neural in a larger game containing more than 2*10^7 states with fewer parameters and sampling smaller subsets of information sets, we made an updated revision 2. 
In this version includes the following modification:\\n\\n(1) We make more clarification about how to optimize the proposed neural network (see remark 1~3 in section 3.1) and continual improvement (see remark in section 3.3). \\n\\n(2) We add a comparison for several different neural architectures (see Figure5(D)), in the experiments, sequential recurrent neural network converges faster than the fully connected neural network. The architecture of LSTM plus attention helps us obtain the fastest converging strategy profile.\"}", "{\"title\": \"new revised version\", \"comment\": \"Dear Marc,\\n\\nour paper is revised accordingly. Some common questions are summarized in the comment \\\"Paper revision 1\\\". Further details please see the revised paper.\"}", "{\"title\": \"Reply to \\\"Isn\\u2019t it hard to learn cumulative quantities in a neural net?\\\"\", \"comment\": \"Thanks for your effort in providing this detailed and useful review!\", \"we_present_our_clarification_in_the_following\": \"\", \"q1\": \"the feasibility of using neural networks to learn cumulative quantities:\", \"a\": \"Generally, Eq.10 is an idea of behavior cloning algorithm. Clone a good initialization, and then continuously update the two neural networks using our method. In the large extensive game, the initial strategy is obtained from an abstracted game which has a manageable number of information sets. The abstracted game is generated by domain knowledge, such as clustering similar hand strength cards into the same buckets. (refer to section 3.3 in the revised paper.)\", \"q2\": \"It does not seem necessary to predict cumulative mixture policies (ASN network)?\", \"q3\": \"It would help to have a discussion about how to implement (7), for example do you use a target network to keep the target value R_t+r_t fixed for several steps?\", \"q4\": \"It is not clear how the initialisation (10) is implemented. Since you assume the number of information nodes is large, you cannot minimize the l2 loss over all states. Do you assume you generate states by following some policy? Which policy?\"}", "{\"title\": \"Reply to \\\"comparing on much larger IIG settings \\\"\", \"comment\": \"Thanks for your effort in providing this detailed and useful review!\", \"we_present_our_clarification_in_the_following\": \"\", \"q\": \"I believe the paper could benefit from more extensive comparisons in Figure 4A against other IIG methods such as Deep Stack, as well as comparing on much larger IIG settings with many more states to see how the neural CFR methods hold up in the regime where they are most needed.\", \"a\": \"To address this problem, we add three different kinds of experiments.\\nUse small batch size, only a small subset of infosets are sampled in each iteration. In this case, we can present the generalization ability of the neural network.\\nUse small embedding size and let the number of parameters is much fewer than the number of infosets of the whole game tree. In this case, we can present the compression ability of the neural network.\\nUse the larger stack to increase the size of the game tree. \\nIn all these three kinds of experiments, we find the neural CFR can still converge to a good strategy. Further details please see Figure 5(A), 5(B) and 5(C) in the revised paper.\\n\\nWe fixed the typos in the revised paper accordingly.\"}", "{\"title\": \"Reply to \\\"requires larger experiments and the questions\\\" (Q7-Q11)\", \"comment\": \"Q7: Is minimizing MSE loss sufficient to approximate the strategy well? 
(since the mapping from regrets -> strategy is non-linear)\", \"a\": \"Honestly, the proposed neural network architecture is not the most important part of our contributions. We reorganize the structure of this section accordingly. In the experiments, network architectures and optimization methods are very important. We find a carefully designed architecture will help us achieve a faster convergence rate. Further details please see Figure 5(D).\\n\\nWe fixed the typos in the revised paper accordingly.\", \"q8\": \"I believe there is also a theoretical problem with this algorithm. In Eq. 8, they minimize the loss of CF value predictions *over the distribution of infosets in the last CFR step (\\\"batch\\\")*. However, this distribution may change between CFR iterations, for example if the strat_t folds 2-3 on the preflop then flop infosets with 2-3 hole cards will never be observed on iteration t+1. As a result, the NN loss will only be minimized over infosets observed in the last iteration - so the network will \\\"forget\\\" the regrets for all other infosets. I think this issue does not arise in these toy games because all infosets are observed at each iteration, but this is certainly not the case in real games. There are a number of ways that this issue could be addressed (e.g. train on historical infosets rather than the current batch, etc.) These would need to be explored\", \"q9\": \"\\\"... the original CFR only works for discrete state and action spaces...\\\": Are the authors implying that DN-CFR addresses this limitation?\", \"q10\": \"\\\"...these methods do not explicitly take into account the hidden information in a game...\\\" Could you clarify? Is your point that these methods operate on the normal form rather than extensive form game?\", \"q11\": \"The paragraph starting \\\"In standard RNN...\\\" should be reduced or moved to appendix. The exact NN architecture is not central to the ideas, and there is no experimental comparison with other architectures so we have no evidence that the architecture is relevant.\"}", "{\"title\": \"Reply to \\\"requires larger experiments and the questions\\\" (Q1-Q6)\", \"comment\": \"Thanks for your effort in providing this detailed and useful review!\\n\\nWe present our clarification in the following and the common questions are answered in the public comment \\u201cPaper revision 1\\u201d:\", \"q1\": \"Can function approximation allow for *generalization across infosets* in order to reduce sample complexity of CFR (i.e. an unsupervised abstraction)? Q2: The games that are used for evaluation are very small, in fact I believe they have fewer states than the number of parameters in their network?\", \"a\": \"In our current experiments, the neural network update in each iteration is already solved in approximation fashion. Yet, the algorithm still converges. In some sense, there is already an accumulation of approximation errors. However, counterfactual regret minimization framework is very robust to such errors and still leads to converge algorithm.\", \"q3\": \"Are specific network architectures required for good generalization?\", \"q4\": \"The magnitude of counterfactual regrets in the support of the equilibrium decays to zero relative to dominated actions. 
Are NN models capable to estimating the regrets accurately enough to converge to a good strategy?\", \"q5\": \"Are optimization methods able to deal with the high variance in large IIGs?\", \"q6\": \"Since each successive approximation is being trained from the previous NN, does this cause errors to accumulate? Q: How do approximation errors accumulate across CFR iterations?\"}", "{\"title\": \"Paper revision 1\", \"comment\": \"To answer the common questions, we provide new experimental results and revise our paper accordingly. Besides the refinement of typos and reorganization the structure of the article, this version includes the following modifications:\\n\\n(1) Does neural network have generalization? \\nIn order to inspect the generalization ability, we perform the neural CFR with small mini-batch sizes (b=50, 100, 500), where only 3.08%, 5.59%, and 13.06% information sets are observed in each iteration. According to the results in Figure 5(A), all of these settings can arrive at exploitability less than 0.1 within only 1000 iterations.\\n\\n(2) Do fewer parameters still work well?\\nIn the proposed neural architecture, the embedding size 8 and 16 will leads to 1048 and 2608 parameters respectively in no-limited leduc hold\\u2019em with size 5, both of which are much less than the tabular memory, which saves about 10^4 values. Both these two embedding sizes can converge to a good strategy profile as shown in Figure 5(B). \\n\\n(3)Does neural method converge in the larger game?\\nFigure 5(C) presents the log-log convergence curve of different stack size 5, 10, 15. The largest game size contains over 2*10^7 states and 3.7*10^6 information sets. Let mini-batch size be 500, there are 13.06%, 2.39% and 0.53% information sets that are observed respectively in each iteration. Even though only a small subset of nodes are sampled, the double neural method can still achieve O(1/\\\\sqrt{T}) convergence rate. \\n\\n(4) Is attention in the neural architecture helpful?\\nFigure 5(D) presents the convergence curves of several different deep neural architectures (LSTM, LSTM plus attention, original RNN plus attention, and GRU plus attention). The recurrent neural network with LSTM cell plus attention helps us obtain a better convergence rate than the counterpart after hundreds of iterations.\"}", "{\"title\": \"Isn\\u2019t it hard to learn cumulative quantities in a neural net?\", \"review\": \"The paper proposes a neural net implementation of counterfactual regret minimization where 2 networks are learnt, one for estimating the cumulative regret (used to derive the immediate policy) and the other one for estimating a cumulative mixture policy. In addition the authors also propose an original MC sampling strategy which generalize outcome and external sampling strategies.\\n\\nThe paper is interesting and easy to read. My main concern is about the feasibility of using a neural networks to learn cumulative quantities.\", \"the_problem_of_learning_cumulative_quantities_in_a_neural_net_is_that_we_need_two_types_of_samples\": [\"the positive examples: samples from which we train our network to predict its own value plus the new quantity,\"], \"but_also\": \"- the negative examples: samples from which we should train the network to predict 0, or any desired initial value.\\n\\nHowever in the approach proposed here, the negative examples are missing. So the network is not trained to predict 0 (or any initial values) for a newly encountered state. And since neural networks generalize (very well...) 
to states that have not been sampled yet, the network would predict an arbitrary values in states that are visited for the first time. For example the network predicting the cumulative regret may generalize to large values at newly visited states, instead of predicting a value close to 0. The resulting policy can be arbitrarily different from an exploratory (close to uniform) policy, which would be required to minimize regret from a newly visited state. Then, even if that state is visited frequently in the future, this error in prediction will never be corrected because the target cumulative regret depends on the previous prediction. So there is no guarantee this algorithm will minimise the overall regret. \\nThis is a well known problem for exploration (regret minimization) in reinforcement learning as well (see e.g. the work on pseudo-counts [Bellemare et al., 2016, Unifying Count-Based Exploration and Intrinsic Motivation] as one possible approach based on learning a density model). \\n\\nHere, maybe a way to alleviate this problem would be to generate negative samples (where the network would be trained to predict low cumulative values) by following a different (possibly more exploratory) policy.\", \"other_comments\": [\"It does not seem necessary to predict cumulative mixture policies (ASN network). One could train a mixture policy network to directly predict the current policy along trajectories generated by MC. Since the samples would be generated according to the current policy \\\\sigma_t, any information nodes I_i would be sampled proportionally to \\\\pi^{\\\\sigma^t}_i(I_i), which is the same probability as in the definition of the mixture policy (4). This would remove the need to learn a cumulative quantity.\", \"It would help to have a discussion about how to implement (7), for example do you use a target network to keep the target value R_t+r_t fixed for several steps?\", \"It is not clear how the initialisation (10) is implemented. Since you assume the number of information nodes is large, you cannot minimize the l2 loss over all states. Do you assume you generate states by following some policy? Which policy?\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Appears to be a solid advance in NN applied to IIG, but I'm not an expert in this area\", \"review\": \"This paper proposes a pair of LSTM networks, one of which estimates the current strategy at iteration t+1 and the other estimates the average strategy after t iterations. By using these networks within a CFR framework, the authors manage to avoid huge memory requirements traditionally needed to save cumulative regret and average strategy values for all information sets across many iterations.\\nThe neural networks are trained via a novel sampling method with lower variance/memory-requirements that outcome/external sampling, and are amendable to continual improvement by warm-starting the networks based on cloned tabular regret values.\\n\\nOverall, the paper is well-written with clear definitions/explanations plus comprehensive ablation-analyses throughout, and thus constitutes a nice addition to the recent literature on leveraging neural networks for IIG. 
\\n\\nI did not find many flaws to point out, except I believe the paper could benefit from more extensive comparisons in Figure 4A against other IIG methods such as Deep Stack, as well as comparing on much larger IIG settings with many more states to see how the neural CFR methods hold up in the regime where they are most needed.\", \"typo\": \"\\\"care algorithm design\\\" -> \\\"careful algorithm design\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}", "{\"title\": \"Review: Interesting application of function approximation to CFR but requires larger experiments\", \"review\": \"========= Summary =========\\n\\nThe authors propose \\\"Double Neural CFR\\\", which uses neural network function approximation in place of the tabular update in CFR. CFR is the leading method for finding equilibria in imperfect information games. However it is typically employed with a tabular policy, limiting its applicability large games. Typically, hand-crafted abstractions are employed for games that are too large for exact tabular solutions. Function approximation could remove the necessity for hand-crafted abstractions and allow CFR to scale to larger problems.\", \"the_dn_cfr_algorithm_roughly_consists_of\": [\"start with an arbitrary vmodel_0, smodel_0\", \"for t = 0,1,2,3:\", \"collect a \\\"batch\\\" of (infoset I, immediate counterfactual value v_I) samples by traversal against vmodel_t (as well as I, strategy samples)\", \"train a new network vmodel_{t+1} on this \\\"batch\\\" with y=(v_I + vmodel_t(I)) and MSE loss\", \"similarly for (I, strategy)\", \"return smodel_t\", \"The authors also propose a novel MC samping strategy this is a mixture between outcome and external sampling.\"], \"dn_cfr_is_evaluated_on_two_games\": \"a variant of Leduc hold-em with stack size 5, and one-card poker with 5 cards. If I understand correctly, these games have <10,000 and <100 infosets, respectively. The authors show that DN-CFR achieves similar convergence rates to tabular CFR, and outperform NFSP variants.\\n\\n========== Comments ========\\n\\nThe authors are exploring an important problem that is of great interest to the IIG community. Their application of NN function approximation is reasonable and mostly theoretically well-grounded (but see below), I think it's on the right track. However, the games that are used for evaluation are very small, in fact I believe they have fewer states than the number of parameters in their network (the number of network parameters is not provided but I assume >1000). As a result, the NN is not providing any compression or generalization, and I would expect that the network can memorize the training set data exactly, i.e. predict the exact mean counterfactual value for each infoset over the data. If that's true, then DN-CFR is essentially exactly replicating tabular CFR (the approximation serves no purpose). \\n\\nAs a result, in my opinion this work fails to address the important challenges for function approximation in CFR, namely:\\n\\n- Can function approximation allow for *generalization across infosets* in order to reduce sample complexity of CFR (i.e. an unsupervised abstraction)? Are specific network architectures required for good generalization?\\n- The magnitude of counterfactual regrets in the support of the equilibrium decays to zero relative to dominated actions. 
Are NN models capable to estimating the regrets accurately enough to converge to a good strategy?\\n- Are optimization methods able to deal with the high variance in large IIGs?\\n- Since each successive approximation is being trained from the previous NN, does this cause errors to accumulate?\\n- How do approximation errors accumulate across CFR iterations?\\n- Is minimizing MSE loss sufficient to approximate the strategy well? (since the mapping from regrets -> strategy is non-linear)\\n\\nI believe there is also a theoretical problem with this algorithm. In Eq. 8, they minimize the loss of CF value predictions *over the distribution of infosets in the last CFR step (\\\"batch\\\")*. However, this distribution may change between CFR iterations, for example if the strat_t folds 2-3 on the preflop then flop infosets with 2-3 hole cards will never be observed on iteration t+1. As a result, the NN loss will only be minimized over infosets observed in the last iteration - so the network will \\\"forget\\\" the regrets for all other infosets. I think this issue does not arise in these toy games because all infosets are observed at each iteration, but this is certainly not the case in real games. \\nThere are a number of ways that this issue could be addressed (e.g. train on historical infosets rather than the current batch, etc.) These would need to be explored.\\n\\nI would recommend that the authors evaluate on more complex games to answer the important questions stated above and resubmit. I think this would also make an excellent workshop submission in its current state as it contains many interesting ideas.\", \"detailed_comments\": \"\\\"... the original CFR only works for discrete stand and action spaces...\\\": Are the authors implying that DN-CFR addresses this limitation? \\n\\\"Moravk\\\" -> \\\"Moravcik\\\"\\n\\\"...these methods do not explicitly take into account the hidden information in a game...\\\" Could you clarify? Is your point that these methods operate on the normal form rather than extensive form game?\\n\\\"care algorithm design\\\" -> \\\"careful algorithm design\\\"\\nThe paragraph starting \\\"In standard RNN...\\\" should be reduced or moved to appendix. The exact NN architecture is not central to the ideas, and there are no experimental comparison with other architectures so we have no evidence that the architecture is relevant.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"Reply to Reply about Lemma 2 citation\", \"comment\": \"Great, thanks. The citation in the background makes sense. However, I recommend you also add \\\"(Lemma 1 of Lanctot et al. 2009)\\\" after \\\"Proof: \\\" in Section E.2. As currently written, the casual reader might misinterpret the proof in E.2 as novel.\"}", "{\"title\": \"Reply to the question: missing reference \\\"Solving Games with Functional Regret Estimation\\\"\", \"comment\": \"Thanks for pointing out the missing reference, we will cite the paper in the revised version.\"}", "{\"title\": \"Reply to the question: Lemma 2 = Lemma 1 of from MCCFR paper?\", \"comment\": \"Thanks for your suggestion. The Lemma 2 is the same with the Lemma 1 from Lanctot et al. 2009, which is cited in the background. This Lemma will be used to prove the unbiased estimation of counterfactual value for mini-batch MCCFR. We will cite Lemma 1 from Lanctot et al. 
2009 in the revised version.\"}", "{\"title\": \"Lemma 2 = Lemma 1 of from MCCFR paper?\", \"comment\": \"One more thing. Is your Lemma 2 any different than Lemma 1 from Lanctot et al. 2009? If not, that fact should be cited somewhere as well (that it's a restatement of the main MCCFR lemma).\"}", "{\"title\": \"Neat paper!\", \"comment\": \"I have not read it fully, but this paper looks super interesting!\\n\\nJust wanted to quickly mention that you should be citing the Regression CFR paper (Waugh et al., 2015, \\\"Solving Games with Functional Regret Estimation\\\", AAAI) as it is clearly related work. It was the first to propose function approximation in CFR. Like one of the parts of your double network, it proposed building a regressor to predict cumulative regrets. Here is the link: https://arxiv.org/abs/1411.7974\"}" ] }
HJldzhA5tQ
Learning powerful policies and better dynamics models by encouraging consistency
[ "Shagun Sodhani", "Anirudh Goyal", "Tristan Deleu", "Yoshua Bengio", "Jian Tang" ]
Model-based reinforcement learning approaches have the promise of being sample efficient. Much of the progress in learning dynamics models in RL has been made by learning models via supervised learning. There is enough evidence that humans build a model of the environment, not only by observing the environment but also by interacting with the environment. Interaction with the environment allows humans to carry out "experiments": taking actions that help uncover true causal relationships which can be used for building better dynamics models. Analogously, we would expect such interaction to be helpful for a learning agent while learning to model the environment dynamics. In this paper, we build upon this intuition, by using an auxiliary cost function to ensure consistency between what the agent observes (by acting in the real world) and what it imagines (by acting in the ``learned'' world). Our empirical analysis shows that the proposed approach helps to train powerful policies as well as better dynamics models.
[ "model-based reinforcement learning", "deep learning", "generative agents", "policy gradient", "imitation learning" ]
https://openreview.net/pdf?id=HJldzhA5tQ
https://openreview.net/forum?id=HJldzhA5tQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "BkejS5O-xN", "rJeeY-HykN", "HJxTwbH1J4", "rJxFPeB114", "BkePylr1yV", "rJxKcTwNCm", "rklaMeJNAm", "rJegedZ-AQ", "r1x0avWWRQ", "S1gajw-ZRm", "HyeCIcqxA7", "ryxVBatgA7", "B1xXVSKxC7", "B1lD3tUspX", "rkeKFYUj6Q", "BJgXUFLjaQ", "B1lRzFUiTm", "B1gtdSIspX", "SkejX7IipQ", "H1lBnfLi67", "Syg8YkUspX", "S1e3zRripX", "SklWCaBipm", "B1xtb3Sjp7", "SJlm8XRxpX", "S1xc36r03Q", "HJgB1w6Ynm" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544813123330, 1543618935796, 1543618917104, 1543618657275, 1543618527484, 1542909328667, 1542873109381, 1542686696208, 1542686661913, 1542686629269, 1542658645972, 1542655292130, 1542653227254, 1542314414873, 1542314368563, 1542314315079, 1542314262405, 1542313329330, 1542312738880, 1542312620748, 1542311805772, 1542311443855, 1542311368913, 1542310912629, 1541624650937, 1541459377889, 1541162717147 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1278/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/Authors" ], [ "ICLR.cc/2019/Conference/Paper1278/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1278/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1278/AnonReviewer2" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes and approach for model-based reinforcement learning that adds a constraint to encourage the predictions from the model to be consistent with the observations from the environment. The reviewers had substantial concerns about the clarify of the initial submission, which has been significantly improved in revisions of the paper. The experiments have also been improved.\", \"strengths\": \"The method is simple, the performance is competitive with state-of-the-art approaches, and the experiments are thorough including comparisons on seven different environments.\", \"weaknesses\": \"The main concern of the reviewers is the lack of concrete discussion about how the method compares to prior work. 
While the paper cites many different prior methods, the paper would be significantly improved by explicitly comparing and contrasting the ideas presented in this paper and those presented in prior work. A secondary weakness is that, while the results appear to be statistically significant, the improvement over prior methods is still relatively small.\\nI do not think that this paper meets the bar for publication without an improved discussion of how this work is placed among the existing literature and without more convincing results.\\n\\nAs a side note, the authors should consider comparing to the below NeurIPS '18 paper, which significantly exceeds the performance of Nagabandi et al '17: https://arxiv.org/abs/1805.12114\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"meta review\"}", "{\"title\": \"More feedback ?\", \"comment\": \"Thank you again for the thoughtful review. We would like to know if our rebuttal adequately addressed your concerns. We would also appreciate any additional feedback on the revised paper. Are there any other aspects of the paper that you think could be improved?\"}", "{\"title\": \"More Feedback ?\", \"comment\": \"Thank you again for the thoughtful review. We would like to know if our rebuttal adequately addressed your concerns. We would also appreciate any additional feedback on the revised paper. Are there any other aspects of the paper that you think could be improved?\"}", "{\"title\": \"A simple baseline works very well\", \"comment\": \"We thank the reviewers and the ACs for taking the time to go through our work. The initial reviews highlighted the need to improve the clarity of the paper (a better description of the proposed model, experiments, etc.). It also led to some confusion about how useful and relevant our baselines were. We acknowledge that the paper was certainly lacking polish and accept that this may have made the paper difficult to read in places. We updated the paper, improved the description of the model and the experiments, and addressed the concerns raised. We also performed additional experiments to highlight the robustness of our model for multi-step prediction. We briefly summarize the key idea of the paper and note how it differs from existing work.\\n\\nWhat is the idea?\\n========\\nUsing the consistency loss, which helps to learn a more powerful policy AND a better dynamics model (as demonstrated on *7* different tasks) while being very easy to integrate with existing model-based RL approaches. \\n\\nIsn't it too simple?\\n===========\\n\\nAt a higher level, the fact that simple model-based approaches work better than somewhat complex model-free approaches is actually the point of the paper. We compare multiple approaches across more than 5 simulated tasks to the state of the art methods, our experiments demonstrate the effectiveness of the approach, and we believe such a simple baseline would be useful for anyone who's working on model-based RL.\\n\\nWhy does it work?\\n========\\nTraining the policy on both the RL loss and the consistency loss provides a mechanism where learning a model can itself change the policy, thus leading to a much closer interplay between the dynamics model and the policy.\\n\\n\\nHow is it different from just learning a model based on k-step prediction?\\n========\\nIn our case, the agent's behavior (i.e., the agent's policy) depends on its internal model too. 
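\\n\\n(To make this concrete: as a rough sketch in our own notation - not the exact form used in the paper - the policy parameters are trained on a combined objective L_total = L_RL + lambda * L_consistency, where lambda is a weighting hyperparameter that we introduce here purely for illustration. Because the consistency term depends on the predictions of the dynamics model, improving the model directly changes the gradients the policy receives.)\\n\\n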
In the baseline case, by contrast, the state transition pairs (collected as the agent acts in the environment) become the supervising dataset for learning the model, and the process of learning the model has no control over what kind of data is produced for its training.\\n\\n\\nHow to implement it?\\n========\\nImpose a consistency loss to ensure consistency between the predictions from the dynamics model and the observations from the environment. Train both the policy and the model simultaneously during the open loop.\\n\\n\\nHow good are the results?\\n========\\n\\nOur evaluation protocol consists of 7 environments (Ant, Half Cheetah, Humanoid, etc.) and both observation space and state space models. Solving the Half Cheetah environment when observations are in pixel space (images) is very challenging, as useful information like velocity is not available. \\n\\nFor the observation space model, we compare against the \\\"Hybrid model-based and model-free (Mb-Mf) algorithm\\\" (Nagabandi et al.), and for the state space models, we compare against \\\"Learning and Querying Fast Generative Models for Reinforcement Learning\\\" (Buesing et al. [2]), the SOTA for state space models. As shown by our experiments (section 5), by having this consistency constraint we outperform both these baselines.\\n\\nWe focus on evaluating the agent for both the dynamics model (in terms of imagination log likelihood, figure 4) and the policy (in terms of average episodic returns and loss, figures 2, 3, 5). We show that adding the consistency constraint to the baseline models results in improvements to both the dynamics models and the policy for all the environments that we consider. All the experiments are averaged over 3 random seeds and are plotted with 1 standard deviation interval.\"}", "{\"title\": \"Rebuttal Summary\", \"comment\": [\"We briefly summarize the reviewers' comments and describe how we address them:\", \"Reviewer 1 pointed out that we are missing some comparisons (which we have highlighted while editing the paper). The clarity issues have been addressed as well. Regarding the idea being \\\"significant\\\", we highlight that at a higher level, the fact that simple model-based approaches work better than somewhat complex model-free approaches is actually the point of the paper. We compare multiple approaches across more than 5 simulated tasks to the state of the art methods, our experiments demonstrate the effectiveness of the approach, and we believe such a simple baseline would be useful for anyone who's working on model-based RL.\", \"Reviewer 2 provided a very thorough review, which we incorporated in our updated version, and the reviewer increased our score to 5. The reviewer had some reservations about the significance of the results. To that end, we conducted extra experiments to evaluate the robustness of the k-step unrolled model as well.\", \"Reviewer 3 highlighted that the baseline methods need to be elaborated; to that end, we highlight that for our method, the policy is updated using two learning signals - the RL loss, and the loss from the consistency constraint. This is the key to why the method works. The dynamics model is not used for action selection, but only as an additional learning signal for the policy, and hence learning a good dynamics model is a nice side product, but this model is not used at test time. 
The other issues related to writing have been addressed.\"]}", "{\"title\": \"More clarifications\", \"comment\": \"We apologize for the confusion.\\n\\nTo clarify, you want how the proposed method is difference from the state of the art baselines ? i.e the state space model ?\", \"short_answer\": \"We propose an auxiliary cost (consistency constraint), and the policy is updated using two learning signals - the RL loss, and the loss from the consistency constraint. This is the ONLY difference from the baseline.\\n\\nPlease let us know if our understanding is correct. And we would be happy to add relevant details.\"}", "{\"title\": \"Thanks!\", \"comment\": \"\\\"Baseline methods are referenced, but need to be explained in sufficient detail\\\"\\n\\n For our method, the policy is updated using two learning signals - the RL loss, and the loss from the consistency constraint. This is the key for why the method works . The dynamics model is not used for action selection, but only as an additional learning signal for the policy, and hence learning a good dynamics model is a nice side product, but this model is not used at test time.\\n\\n\\\"Figure 4 is never referenced in the text. \\\"\\n\\nWe apologize for the confusion caused. We moved the figure in the appendix, but forgot to update the references to the figure in the rebuttal. We have made the updates now.\\n\\nFigure 4 - Imitation learning loss (lower is better). Proposed method gets better result as compared to the baseline. (which is state of the art state space model)\\nFigure 5 - Imagination log likelihood (higher is better) Proposed method gets better result as compared to the baseline. (which is state of the art state space model)\\n\\n\\nFigure 9 - Log likelihood evaluated on longer sequences, then it is trained for. Again, higher is better. Proposed method gets better result as compared to the baseline. (which is state of the art state space model)\\n\\nThanks again for taking your time! :)\"}", "{\"title\": \"Added results to evaluate the robustness of the model\", \"comment\": \"Dear Reviewer\\n\\nWe have added new evaluation results to investigate the robustness of the proposed approach in terms of compounding errors. When we use the recurrent dynamics model for prediction, the ground-truth sequence is not available for conditioning. This leads to problems during sampling as even small prediction errors can compound when sampling for a large number of steps. We evaluate the proposed model for robustness by predicting the future for much longer timesteps (50 timesteps) than it was trained on (10 timesteps). More generally, in figure 9 (section 7.3 in appendix), we demonstrate that this auxiliary cost helps to learn a better model with improved long-term dependencies by using a training objective that is not solely focused on predicting the next observation, one step at a time.\\n\\nThank you for your time! The authors appreciate the time reviewers have taken for providing feedback. which resulted in improving the presentation of our paper. Hence, we would appreciate it if the reviewers could take a look at our changes and additional results, and let us know if they would like to either revise their rating of the paper or request additional changes that would alleviate their concerns.\"}", "{\"title\": \"Added results to evaluate the robustness of the model\", \"comment\": \"Dear Reviewer\\n\\nWe have added new evaluation results to investigate the robustness of the proposed approach in terms of compounding errors. 
When we use the recurrent dynamics model for prediction, the ground-truth sequence is not available for conditioning. This leads to problems during sampling as even small prediction errors can compound when sampling for a large number of steps. We evaluate the proposed model for robustness by predicting the future for much longer timesteps (50 timesteps) than it was trained on (10 timesteps). More generally, in figure 9 (section 7.3 in appendix), we demonstrate that this auxiliary cost helps to learn a better model with improved long-term dependencies by using a training objective that is not solely focused on predicting the next observation, one step at a time.\\n\\nThank you for your time! The authors appreciate the time reviewers have taken for providing feedback. which resulted in improving the presentation of our paper. Hence, we would appreciate it if the reviewers could take a look at our changes and additional results, and let us know if they would like to either revise their rating of the paper or request additional changes that would alleviate their concerns.\"}", "{\"title\": \"Added results to evaluate the robustness of the model\", \"comment\": \"Dear Reviewer\\n\\nWe have added new evaluation results to investigate the robustness of the proposed approach in terms of compounding errors. When we use the recurrent dynamics model for prediction, the ground-truth sequence is not available for conditioning. This leads to problems during sampling as even small prediction errors can compound when sampling for a large number of steps. We evaluate the proposed model for robustness by predicting the future for much longer timesteps (50 timesteps) than it was trained on (10 timesteps). More generally, in figure 9 (section 7.3 in appendix), we demonstrate that this auxiliary cost helps to learn a better model with improved long-term dependencies by using a training objective that is not solely focused on predicting the next observation, one step at a time.\\n\\nThank you for your time! The authors appreciate the time reviewers have taken for providing feedback. which resulted in improving the presentation of our paper. Hence, we would appreciate it if the reviewers could take a look at our changes and additional results, and let us know if they would like to either revise their rating of the paper or request additional changes that would alleviate their concerns.\"}", "{\"title\": \"Motivation\", \"comment\": \"We have updated the paper, and the basic motivation is that now (as the reviewer 2 points out) the policy is updated using two learning signals - the RL loss, and the loss from the consistency constraint. The dynamics model is not used for action selection, but only as an additional learning signal for the policy. Learning a good dynamics model is a nice side product, but this model is not used at test time.\\n\\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer would like to request additional changes that would alleviate reviewers concerns. We hope that our updates to the manuscript address the reviewer's concerns about clarity, and we hope that the discussion above addresses the reviewer's concerns about empirical significance. We once again thank the reviewer for the feedback of our work.\"}", "{\"title\": \"Thanks for reply\", \"comment\": \"We appreciate that the reviewer took time to read our lengthy rebuttal, and increased their score. 
:)\\n\\n\\\"It\\u2019s difficult for me to assess how significant the idea is\\\" \\n\\nIt's a simple addition which improves the quality of the dynamics model, as well as the policy. Now, we acknowledge that the paper was certainly lacking polish and accept that this may have made the paper difficult to read in places (for you as well as for other reviewers). And hence it might have been confusing for the other reviewers as well. Otherwise, we compare (and outperform) the proposed method to state of the art methods like MB-MF, dyna and Learning to query. (As the Reviewer 1 has suggested ONLY these baselines too). \\n\\n[1] MB-MF https://arxiv.org/abs/1708.02596\\n[2] Dyna, https://www.sciencedirect.com/science/article/pii/B9781558601413500304\\n[3] Learning to query https://arxiv.org/abs/1802.03006\\n\\n\\n\\\"I am not sure if introducing an auxiliary loss that boosts performance on a selected group of MuJoCo environments warrants acceptance,\\\" \\n\\nWe believe that comparing to [1] provides a good baseline. Out of all these [1], [2] tested the ideas on Mujoco envs, whereas [3] tested the env on Atari games where they first pretrain the dynamics model, and then use that model. We compared against [3] on a challenging Mujoco task, where we learn the model directly from the *pixels*. We also note that [3] does not have an open-source implementation, and they ([3]) only evaluated on few atari games using millions of samples per game. We believe that comparing to such a strong baseline is very important and hence we compared to this on a challenging image based mujoco env. Since, one cant infer the velocity of the cheetah just using a particular frame, and hence learning from images make this task a partially observable task, and more challenging.\\n\\n\\\" if the authors cannot give more insight and analysis for why this works so well (and in which cases it might actually not work well?)\\\" \\n\\nWe think that having an an auxiliary cost could always improve the performance. As We evaluate it on a large number of very different domains (when learning the model directly from pixels as well as learning the model using the state representation directly) and find that in all cases it improves performance. For some problems, learning a model of the environment is difficult so those problems would be hard as well. This applies to any complex environment and especially partially observed ones. Our method would help the most when the dynamics are relatively simple but the problems are still relatively hard.\\n\\n\\\"if the authors cannot give more insight and analysis for why this works so well\\\" \\n\\nWe would be happy to add comparison to any another baseline which the reviewer has in mind. We want to make sure, that we can do everything possible to make sure that the researchers in the future would try against such a simple baseline.\"}", "{\"title\": \"Reply\", \"comment\": \"Thank you for the detailed answer. I seem to have misunderstood parts of the paper. In particular, the section in the author\\u2019s answer on \\u201cWhy is different from just learning a model based on k-step prediction?\\u201d clarified that the policy is updated using two learning signals - the RL loss, and the loss from the consistency constraint. This is the key for why the method works (and now the results make much more sense to me). The dynamics model is not used for action selection, but only as an additional learning signal for the policy. 
Learning a good dynamics model is a nice side product, but this model is not used at test time.\\n\\nKudos to the authors for being so open for feedback, and updating the paper title and adding more formal explanations of the concepts discussed in the paper.\\n\\nOverall, I think the idea is simple but works well, and the paper is much improved. Hence I am willing to increase my score to 5. \\n\\nIt\\u2019s difficult for me to assess how significant the idea is - the authors say their results are state of the art, but the other reviewers say that it lacks important comparisons, and I am not well versed enough in the literature to make an informed evaluation on this point. While the main intuition is clearer to me now, I feel like the introduction is still a little convoluted (e.g., again my point that in the introduction, the authors motivate that the agent tries to uncover causal relationships in the environment - but this is not followed up on in the rest of the paper). \\n\\nIn general, I am not sure if introducing an auxiliary loss that boosts performance on a selected group of MuJoCo environments warrants acceptance, if the authors cannot give more insight and analysis for why this works so well (and in which cases it might actually not work well?). Such insight (both the intuition of the authors, and in experiments) would be very interesting for other researchers that might want to use this auxiliary loss.\", \"notes\": [\"It might be interesting to discuss connections to other works in RL that use auxiliary losses, e.g. \\u201cReinforcement Learning with Unsupervised Auxiliary Tasks\\u201d (Jaderberg et al. 2016)\", \"Sec 2., write \\u201cdiscounted sum of rewards\\u201d instead of \\u201csum of rewards\\u201d.\", \"Sec 2 uses $/phi$ for the learned model, Sec 3 uses $\\\\theta$ for the model and $\\\\phi$ for the policy - please make this consistent. It might help to already introduce the parameterised policy in Sec 2 as well.\", \"Sec 3.1, last paragraph, should be \\u201cexpected return\\u201d instead of \\u201cexpected reward\\u201d\"]}", "{\"title\": \"Response to Reviewer 2 - Part 5\", \"comment\": \"\\\"16) Could the ... RL loss?\\\"\\n\\n\\nWe have updated the section on consistency constraint (3.1) to include an equation describing the different components of the loss function. Both the dynamics model and the policy pi are trained on the total loss (which is a combination of the RL loss and the consistency loss)\\n\\n\\\"17) In 2.3 ... represent?\\\"\\n\\nWe have updated the relevant section to define z_t. It refers to the latent variable introduced per timestep to introduce stochasticity in state transition function.\\n\\n\\\"18) I find ... hallucinating.\\\"\\n\\nWe have addressed this point by replacing the word hallucination with \\u201cimagination\\u201d and \\u201cprediction\\u201d as per the context.\\n\\n\\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer has request for additional changes that would alleviate the reviewer's concerns.\\n\\n[1]: Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning - https://arxiv.org/pdf/1708.02596.pdf\\n\\n[2]: Learning and Querying Fast Generative Models for Reinforcement Learning - https://arxiv.org/pdf/1802.03006.pdf\"}", "{\"title\": \"Response to Reviewer 2 - Part 4\", \"comment\": \"\\\"11) As the ... 
sample complexity?\\\"\", \"we_address_this_issue_from_two_perspectives\": \"Qualitatively - We propose to train both the policy and the model during the open loop. Hence the k-step predictions are used for training both the model and the policy simultaneously. Training the policy on both the RL loss and the consistency loss provides a mechanism where learning a model, can itself change the policy thus leading to a much closer interplay between the dynamics model and the policy. This approach is different from other works focusing on learning k-step prediction models. In those cases, the policy is learned solely focussing on the reward structure and the state transition trajectories (collected as the agent acts in the environment) become the supervising dataset for learning the model. There is no feedback from the model learning process to the policy learning process. So the process of learning the model has no control over what kind of data is produced (by the policy) for its training.\\n\\n\\nEmpirical Evaluation - We show that this relatively simple approach improves the performance for both the dynamics model and the policy when compared to very strong baselines for both observation space and state space models for all the 7 environments we considered. Our evaluation protocol consists of 7 environments (Ant, Half Cheetah, Humanoid etc) and both observation space and state space models. For the observation space model, we use the \\u201cHybrid model-based and model-free (Mb-Mf) algorithm\\u201d (Nagabandi et al [1]) which is a very strong baseline and for the state space models, we use the \\u201cLearning and Querying Fast Generative Models for Reinforcement Learning\\u201d (Buesing et al [2]) as the baseline. This model is a state-of-the-art model for state space models. \\n\\n\\n\\n\\\"12) In the ... k=20?\\\"\\n\\nThis comment refers to figure 3. Here the proposed agents are trained with 2 different values of k, that is 5 and 20. Since the agent with k=20 is trained for longer sequences, it performs better than the other agent.\\n\\n\\n\\\"13) The authors ...different?\\\"\\n\\nThe key difference between our approach and existing approaches for learning the dynamics model is that in our case, the process of learning the model can change the policy. \\n\\nIn the standard cases, the policy is learned solely focussing on the reward structure and the state transition trajectories (collected as the agent acts in the environment) become the supervising dataset for learning the model. In that setup, the policy is not updated when the model is being updated and there is no feedback from the model learning process to the policy learning process. Hence, the data used for training the model is coming from a policy which is trained independently of how well the model performs on the collected trajectories. So the process of learning the model has no control over what kind of data is produced for its training. This is what we mean by \\u201clearning the dynamics model by just observing the data\\u201d.\\n\\nWe propose to train both the policy and the model during the open loop. Hence the k-step predictions are used for training both the model and the policy simultaneously. Training the policy on both the RL loss and the consistency loss provides a mechanism where learning a model, can itself change the policy. This is what we mean by \\u201clearning the dynamics model via interaction\\u201d. 
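\\n\\nTo make the mechanism concrete, here is a minimal sketch of the joint open-loop update in PyTorch-style Python. The names policy, dynamics, traj_encoder and real_states are ours and purely illustrative (assumed to be differentiable modules and an observed state sequence); details such as the stochastic policy and the exact encoder architecture are simplified away:\\n
\\n
import torch\\n
\\n
def joint_update(policy, dynamics, traj_encoder, real_states, rl_loss, lam=1.0):\\n
    # real_states: (k+1, state_dim) tensor of states observed while acting in the environment\\n
    imagined = [real_states[0]]  # the open loop starts from a real state\\n
    for t in range(real_states.shape[0] - 1):\\n
        a_t = policy(real_states[t])  # actions are always selected from the true states\\n
        imagined.append(dynamics(imagined[-1], a_t))  # but unrolled in the learned model\\n
    z_real = traj_encoder(real_states[1:])  # recurrent encoding of the observed states\\n
    z_imag = traj_encoder(torch.stack(imagined[1:]))  # encoding of the imagined states\\n
    consistency_loss = ((z_real - z_imag) ** 2).mean()\\n
    total_loss = rl_loss + lam * consistency_loss\\n
    total_loss.backward()  # gradients reach both the dynamics model and the policy (via a_t)\\n
    return total_loss\\n
\\n
Here both the model parameters and the policy parameters receive gradients from the consistency term, which is exactly the feedback path that is missing in the standard setup.\\n\\n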
This close interplay between the dynamics model and the policy provides a pathway to the model to interact with the environment instead of just using the sampled trajectories. The resulting consistency loss helps to learn more powerful policy and better dynamics model (as demonstrated over different tasks) while being very easy to integrate with existing model-based RL approaches. It is important to note that our proposed approach improves over the state of the art results despite being relatively simple. We are not aware of work in RL which describes and validates the benefits of imposing the consistency constraint and would be happy to include references to such work.\\n\\n\\n\\n\\\"14) In Section ... return)\\\"\\n\\nWe apologize for the mistake. Thanks for pointing it out. We are indeed optimizing the expected return. We have also updated section 2 to describe the MDP and the related terms in a formal manner.\\n\\n\\n\\\"15) The loss ... trajectory-wise loss\\\"\\n\\nWe have improved the section on consistency constraint (Section 3.1) to describe the consistency loss in detail. We do not use any stepwise loss. A recurrent model is used to encode the trajectory into a fixed-sized vector and the l2 loss is applied between the encoding for the trajectory of observed states and the imagined states. This has been formalized in equation 1. Further, Section 3.2 and 3.3 describe how to modify the baselines to support the consistency constraint for observation space and state space models respectively.\\n\\nContinue\"}", "{\"title\": \"Response to Reviewer 2 - Part 3\", \"comment\": \"\\\"2) Is the dynamics ... for example)\\\"\\n\\n\\nThe dynamics model is indeed being used like in other model-based approaches. k=20 works better than k=10 because now the model\\u2019s predictions are being grounded in \\u201creal observations\\u201d for a much longer time span.\\n\\n\\\"3) Is the dynamics ... sensible yet)\\\"\\n\\nWe clarify that the agent is not using the dynamics model for action selection. The role of the dynamics model is the following - The policy is trained using both the RL loss as well as the loss from the dynamics model.\\n\\n\\n\\\"4) How exactly ... k=1?\\\"\\n\\nThe case of training without the consistency loss is the standard reward-based training of RL agents, without any consistency constraint. K=1 would correspond to the case where the consistency loss is applied on per step predictions. \\n\\n\\\"5) Could the ... results.\\\"\\n\\nWe have updated the paper to improve the experimental section - both in terms of description of baselines and in terms of the evaluation protocol. Further, Section 3.1 describes the different loss components and how the consistency constraint can be applied in the general. Section 3.2 and 3.3 describes the baselines and how these baselines were modified to support the consistency constraint for the observation space and the state space models respectively. All the experiments are averaged over 3 random seeds (along with 1 standard deviation interval) are plotted.\", \"we_summarize_the_baselines_and_the_evaluation_protocol_here\": \"Our evaluation protocol consists of 7 environments (Ant, Half Cheetah, Humanoid etc) and both observation space and state space models. 
For the observation space model, we use the \\u201cHybrid model-based and model-free (Mb-Mf) algorithm\\u201d (Nagabandi et al [1]) which is a very strong baseline and for the state space models, we use the \\u201cLearning and Querying Fast Generative Models for Reinforcement Learning\\u201d (Buesing et al [2]) as the baseline. This model is a state-of-the-art model for state space models. \\n\\nWe focus on evaluating the agent for both dynamics models (in terms of imagination log likelihood) and policy (in terms of average episodic returns and loss). We show that adding the consistency constraint to the baseline models results in improvements to both the dynamics models and the policy for all the environments that we consider. All the experiments are averaged over 3 random seeds and are plotted with 1 standard deviation interval.\\n\\nOur key contribution is the proposal of using the consistency loss which helps to learn more powerful policy and better dynamics model (as demonstrated over different tasks) while being very easy to integrate with existing model-based RL approaches. While the proposed approach looks relatively simple, we are not aware of work in RL which describes and validates the benefits of imposing the consistency constraint.\\n\\n\\n\\\"For the ... it in the graph)?\\\"\\\"\\n\\n\\nWe believe that the reason is the swimmer plot is averaged over 10 batches. We have added this information in the caption of the plot. \\n\\n\\n\\\"8) ... instead\\\"\\n\\nWe agree that the title could sound a little misleading. Based on the suggestion, we have updated the title to \\u201cLearning powerful policies and better dynamics models by encouraging consistency\\u201d \\n\\n\\\"9) by interaction ... consistency constraint\\\"\\n\\nWe acknowledge that the use of \\u201cby interaction\\u201d sounds a little vague and have incorporated this feedback into the draft. \\n\\n\\\"10) The authors ... constraint\\\"\\n\\nOur broad goal is to provide a mechanism for the agent to interact with the environment while it is learning the dynamics model as this could be helpful in learning a more powerful policy and better dynamics model. We discuss several possible manifestations of this idea in the introduction/motivation and focus on one specific instantiation - ensuring consistency between the predictions from the dynamics model and the actual observations from the environment. We show that adding the proposed consistency constraint helps the agent to learn better dynamics model and better policy for both observation space models and state space models. It is both interesting and surprising to see that our proposed approach improves over the state of the art results despite being relatively simple thus highlighting the usefulness of the \\u2018interaction\\u201d with the environment.\\n\\nContinue\"}", "{\"title\": \"Response to Reviewer 2 - Part 2\", \"comment\": \"Why is different from just learning a model based on k-step prediction?\\n========\\n\\nOur approach is different from just learning a k-step prediction model as in our case, the agent\\u2019s behavior (i.e the agent's policy) is dependent on its internal model too. 
In the standard case, the policy is optimized only using the RL gradient i.e maximizing expected reward and the state transition pairs (collected as the agent acts in the environment) become the supervising dataset for learning the model, and hence the policy is not affected when the model is being updated and there is no feedback from the model learning process to the policy. Hence, the data used for training the model is coming from a policy which is trained independently of how well the model performs on the collected trajectories. So the process of learning the model has no control over what kind of data is produced for its training.\\n\\nWe propose to train both the policy and the model during the open loop. Hence the k-step predictions are used for training both the model and the policy simultaneously. Training the policy on both the RL loss and the consistency loss provides a mechanism where learning a model, can itself change the policy thus leading to a much closer interplay between the dynamics model and the policy. We show that this relatively simple approach leads to much better performance when compared to very strong baselines for both observation space and state space models for all the 7 environments we considered.\\n\\n\\nWhat are the empirical results?\\n========\\n\\nOur evaluation protocol consists of 7 environments (Ant, Half Cheetah, Humanoid etc) and both observation space and state space models. Solving Half Cheetah environment, when observations are in the pixel space (images), is very challenging as useful information like velocity is not available. \\n\\nFor the observation space model, we use the \\u201cHybrid model-based and model-free (Mb-Mf) algorithm\\u201d (Nagabandi et al [1]). It is a strong baseline where the authors proposed to use a trained, deep neural network based dynamics model to initialize a model-free learning agent to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. For the state space models, we use the \\u201cLearning and Querying Fast Generative Models for Reinforcement Learning\\u201d (Buesing et al [2]) as the baseline. This is a state-of-the-art model for state space models. As shown by our experiments (section 5), by having this consistency constraint we outperform both these baselines.\\n\\nWe focus on evaluating the agent for both dynamics models (in terms of imagination log likelihood, figure 4) and policy (in terms of average episodic returns and loss, figure 2, 3, 5). We show that adding the consistency constraint to the baseline models results in improvements to both the dynamics models and the policy for all the environments that we consider. All the experiments are averaged over 3 random seeds and are plotted with 1 standard deviation interval.\\n\\n===============================================================================================\\n\\nWe now refer to the specific aspects of the reviews. \\n\\n\\n\\\"The authors ... as an input.\\n\\nThere seems to be a small discrepancy in the summary. The actions are always selected using the true state of the environment. When the agent is performing the open-loop, the agent transitions from one \\u201cimagined\\u201d state to another \\u201cimagined\\u201d state, unlike the closed loop state where the agent transitions between actually observed states (coming from the environment). The consistency loss ensures that the sequence of imagined states behaves similarly to the sequence of actually observed states. 
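\\n\\nSchematically (in our notation here, which is only a sketch and may differ in small details from Eq. 1 of the revised draft), the constraint is an l2 penalty between recurrent encodings of the two sequences, L_consistency = || enc(s_{t+1}, ..., s_{t+k}) - enc(s_{t+1}^I, ..., s_{t+k}^I) ||^2, where enc is a recurrent encoder that maps a trajectory to a fixed-size vector, s denotes the observed states and s^I the imagined ones.\\n\\n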
This aspect has been clarified in the paper in section 3.1 which talks about consistency constraint in general and describes how the consistency loss is to be computed (eq 1). Section 3.2 and section 3.3 go into the specific cases of observation space models and state space models respectively.\\n\\n==============================\\n\\n\\\"1) At the beginning ... environment?\\\"\\n\\nThank you for pointing out this out. We have improved the writing in the paper to make it more explicit (equation 1). We briefly summarise this aspect here for completion: \\n\\nLet us say that at time t, the agent is in some state s_t while it \\u201cimagines\\u201d to be in state s_t^I. In the closed loop, it samples an action a_t using the policy \\\\pi and transitions to a new state s_{t+1} by performing the action in the environment. In the open loop, the agent performs the action a_t in its dynamic model and transitions from state s_t^I to s_{t+1} ^ I .\\n\\nFor the closed loop, the loss comes from the reward signal. For the open loop, the loss comes in form of consistency constraint imposed on the sequence of actual state transitions and the predicted state transitions. This is described by equation 1 in section 3.1. During the open loop, both the policy and the model are updated using the consistency loss.\\n\\nCont..\"}", "{\"title\": \"Response to Reviewer 2 - Part 1\", \"comment\": \"We thank the reviewer for such a detailed feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address all of your concerns about the work, though we would appreciate any additional comments or feedback that you might have. We acknowledge that the paper was certainly lacking polish and accept that this may have made the paper difficult to read in places. We have uploaded a revised version in which we have revised the problem statement and writing as per the reviewer's suggestions. We briefly summarize the key idea of the paper and then address the specific concerns.\\n\\nWhat is the idea?\\n========\\n\\nOur goal is to provide a mechanism for the agent to lean a better dynamics model as well as more powerful policy by ensuring the consistency in their predictions (such that predictions from the model are grounded in the real environment). \\n\\nThis mechanism enables the agent to have a direct \\u201cinteraction\\u201d b/w the agent\\u2019s policy and its dynamics model. This interaction is different from the standard approaches in reinforcement learning where the agent uses the policy to sample trajectories over which the agent is trained, and then use these sampled trajectories to learn the dynamics model. In those cases, there is no (direct) mechanism for the dynamics model to affect the policy, and hence there is no \\u201cdirect interaction\\u201d between the policy and the dynamics model. In our case, both the policy and the model are trained jointly while making sure that the predictions from the dynamics model are consistent with the observation from the environment. 
This provides a mechanism where learning a model can itself change the policy (thus \\u201cinteracting\\u201d with the environment) instead of just training on the data coming from a policy which is trained independently of how well the model performs on the collected.\\n\\nA practical instantiation of this idea is the consistency loss where we ensure consistency between the predictions from the dynamics model and the actual observations from the environment and this simple baseline works surprisingly well compared to the state of the art methods (as demonstrated by our experiments) and that others have not tried it before. Applying consistency constraint means we have two learning signals for the policy: The one from the reinforcement learning loss (i.e maximize return) and the other due to consistency constraint. We show that adding the proposed consistency constraint helps the agent to learn better dynamics model and as well as better policy for both observation space models and state space models. We compare against strong baselines: \\n\\nHybrid model-based and model-free (Mb-Mf) algorithm (Nagabandi et al [1]) \\nLearning and Querying Fast Generative Models for Reinforcement Learning (Buesing et al [2]) - This is a state-of-the-art model for state space models.\\n\\nOur evaluation protocol considers a total of 7 environments and we show that using the consistency constraint leads to better generative models (in terms of log likelihood) and more powerful policy (average return) for all the cases. All the experiments are averaged over 3 random seeds and are plotted with 1 standard deviation interval.\\n\\nOur key contribution is the proposal of using the consistency loss which helps to learn more powerful policy and better dynamics model (as demonstrated over different tasks) while being very easy to integrate with existing model-based RL approaches. While our method is relatively simple, we are not aware of prior works that show something similar, and we believe such a simple baseline would be useful for anyone who\\u2019s working on model-based RL. Further, our experiment demonstrates the effectiveness of the approach. If we are mistaken regarding prior works, please let us know!\\n\\nWe would like to emphasize that our work presents an extensive comparative evaluation, and we believe that these results should be taken into consideration in evaluating our work. We compare multiple approaches across more than 5 simulated tasks to the state of the art methods. Hopefully, our clarifications are convincing in terms of explaining why the evaluation is fair and rigorous, and we would, of course, be happy to modify it as needed. But at a higher level, the fact that such simple model-based approaches work better than somewhat complex model-free approaches actually is the point of the paper to me.\\n\\nContinued...\"}", "{\"title\": \"Response to Reviewer 3 - Part 3\", \"comment\": \"\\\"All of their experiments merely take actions that are best according to the usual model-based or model-free methods and show that their consistency constraint allows them to learn a better dynamics model, which is not at all surprising.\\\"\\n\\nOur key contribution is the proposal of using the consistency loss which helps to learn more powerful policy AND better dynamics model (as demonstrated over different tasks) while being very easy to integrate with existing model-based RL approaches. 
It is important to note that our proposed approach improves over the state of the art results despite being relatively simple. We are not aware of work in RL which describes and validates the benefits of imposing the consistency constraint and would be happy to include references to such work.\\n\\nWe would like to highlight that our evaluation shows that the agent learns both better dynamics models AND more powerful policy (figure 2, 3, 5). There seems to be some confusion about our evaluation protocol. We have updated the paper to improve that. Section 3.1 describes the different loss components and how the consistency constraint can be applied in the general. Section 3.2 and 3.3 describes the baselines and how these baselines were modified to support the consistency constraint for the observation space and the state space models respectively.\", \"we_summarize_the_baselines_and_the_evaluation_protocol_here\": \"Our evaluation protocol consists of 7 environments (Ant, Half Cheetah, Humanoid etc) and both observation space and state space models. Solving Half Cheetah environment, when observations are in the pixel space (images), is very challenging as useful information like velocity is not available. \\n\\nFor the observation space model, we use the \\u201cHybrid model-based and model-free (Mb-Mf) algorithm\\u201d (Nagabandi et al [1]). It is a strong baseline where the authors proposed to use a trained, deep neural network based dynamics model to initialize a model-free learning agent to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. For the state space models, we use the \\u201cLearning and Querying Fast Generative Models for Reinforcement Learning\\u201d (Buesing et al [2]) as the baseline. This is a state-of-the-art model for state space models. As shown by our experiments (section 5), by having this consistency constraint we outperform both these baselines.\\n\\nWe focus on evaluating the agent for both dynamics models (in terms of imagination log likelihood) and policy (in terms of average episodic returns and loss, figure 2, 3, 5). We show that adding the consistency constraint to the baseline models results in improvements to both the dynamics models and the policy for all the environments that we consider. All the experiments are averaged over 3 random seeds and are plotted with 1 standard deviation interval.\\n\\nOur key contribution is the proposal of using the consistency loss which helps to learn more powerful policy and better dynamics model (as demonstrated over different tasks) while being very easy to integrate with existing model-based RL approaches. 
While the proposed approach looks relatively simple, we are not aware of work in RL which describes and validates the benefits of imposing the consistency constraint.\\n\\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer has request for additional changes that would alleviate the reviewer's concerns.\\n\\n[1]: Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning - https://arxiv.org/pdf/1708.02596.pdf\\n\\n[2]: Learning and Querying Fast Generative Models for Reinforcement Learning - https://arxiv.org/pdf/1802.03006.pdf\"}", "{\"title\": \"Response to Reviewer 3 - Part 2\", \"comment\": \"How is it different from just learning a model based on k-step prediction?\\n========\\n\\nOur approach is different from just learning a k-step prediction model as in our case, the agent\\u2019s behavior (i.e the agent's policy) is dependent on its internal model too. In the standard case, the policy is optimized only using the RL gradient i.e maximizing expected reward and the state transition pairs (collected as the agent acts in the environment) become the supervising dataset for learning the model, and hence the policy is not affected when the model is being updated and there is no feedback from the model learning process to the policy. Hence, the data used for training the model is coming from a policy which is trained independently of how well the model performs on the collected trajectories. So the process of learning the model has no control over what kind of data is produced for its training.\\n\\nWe propose to train both the policy and the model during the open loop. Hence the k-step predictions are used for training both the model and the policy simultaneously. Training the policy on both the RL loss and the consistency loss provides a mechanism where learning a model, can itself change the policy thus leading to a much closer interplay between the dynamics model and the policy. We show that this relatively simple approach leads to much better performance when compared to very strong baselines for both observation space and state space models for all the 7 environments we considered.\\n\\n\\n\\n===================================================================================================\", \"we_now_refer_to_the_specific_aspects_of_the_reviews\": \"\\\"but never follows through on the motivation of experimentation---taking actions mainly for the purpose of learning an improved dynamics model. AND Failed to connect presented work with the motivation.\\\"\\n\\nOur goal and motivation is to provide a mechanism for the agent to learn a better dynamics model as well as more powerful policy by ensuring the consistency in their predictions (such that predictions from the model are grounded in the real environment). \\n\\nThis mechanism enables the agent to have a direct \\u201cinteraction\\u201d b/w the agent\\u2019s policy and its dynamics model. This interaction is different from the standard approaches in reinforcement learning where the agent uses the policy to sample trajectories over which the agent is trained, and then use these sampled trajectories to learn the dynamics model. In those cases, there is no (direct) mechanism for the dynamics model to affect the policy, and hence there is no \\u201cdirect interaction\\u201d between the policy and the dynamics model. 
In our case, both the policy and the model are trained jointly while making sure that the predictions from the dynamics model are consistent with the observation from the environment. This provides a mechanism where learning a model can itself change the policy (thus \\u201cinteracting\\u201d with the environment) instead of just training on the data coming from a policy which is trained independently of how well the model performs on the collected.\\n\\nA practical instantiation of this idea is the consistency loss where we ensure consistency between the predictions from the dynamics model and the actual observations from the environment and this simple baseline works surprisingly well compared to the state of the art methods (as demonstrated by our experiments) and that others have not tried it before. Applying consistency constraint means we have two learning signals for the policy: The one from the reinforcement learning loss (i.e maximize return) and the other due to consistency constraint. We show that adding the proposed consistency constraint helps the agent to learn better dynamics model and as well as better policy for both observation space models and state space models.\\n\\nOur evaluation protocol consists of 7 environments (Ant, Half Cheetah, Humanoid etc) and both observation space and state space models. For the observation space model, we use the \\u201cHybrid model-based and model-free (Mb-Mf) algorithm\\u201d (Nagabandi et al [1]) which is a very strong baseline and for the state space models, we use the \\u201cLearning and Querying Fast Generative Models for Reinforcement Learning\\u201d (Buesing et al [2]) as the baseline. This model is a state-of-the-art model for state space models. We show that adding the consistency constraint to the baseline models results in improvements to both the dynamics models and the policy for all the environments that we consider.\\n\\nContinued...\"}", "{\"title\": \"Response to Reviewer 3 - Part 1\", \"comment\": \"We thank the reviewer for the feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address all of your concerns about the work, though we would appreciate any additional comments or feedback that you might have. We acknowledge that the paper was certainly lacking polish and accept that this may have made the paper difficult to read in places. We have uploaded a revised version in which we have revised the problem statement and writing as per the reviewer's suggestions. We briefly summarize the key idea of the paper and then address the specific concerns.\\n\\nWhat is the idea?\\n========\\n\\nOur goal is to provide a mechanism for the agent to lean a better dynamics model as well as more powerful policy by ensuring the consistency in their predictions (such that predictions from the model are grounded in the real environment). \\n\\nThis mechanism enables the agent to have a direct \\u201cinteraction\\u201d b/w the agent\\u2019s policy and its dynamics model. This interaction is different from the standard approaches in reinforcement learning where the agent uses the policy to sample trajectories over which the agent is trained, and then use these sampled trajectories to learn the dynamics model. In those cases, there is no (direct) mechanism for the dynamics model to affect the policy, and hence there is no \\u201cdirect interaction\\u201d between the policy and the dynamics model. 
In our case, both the policy and the model are trained jointly while making sure that the predictions from the dynamics model are consistent with the observation from the environment. This provides a mechanism where learning a model can itself change the policy (thus \\u201cinteracting\\u201d with the environment) instead of just training on the data coming from a policy which is trained independently of how well the model performs on the collected.\\n\\nA practical instantiation of this idea is the consistency loss where we ensure consistency between the predictions from the dynamics model and the actual observations from the environment and this simple baseline works surprisingly well compared to the state of the art methods (as demonstrated by our experiments) and that others have not tried it before. Applying consistency constraint means we have two learning signals for the policy: The one from the reinforcement learning loss (i.e maximize return) and the other due to consistency constraint. We show that adding the proposed consistency constraint helps the agent to learn better dynamics model and as well as better policy for both observation space models and state space models. We compare against strong baselines: \\n\\nHybrid model-based and model-free (Mb-Mf) algorithm (Nagabandi et al [1]) \\nLearning and Querying Fast Generative Models for Reinforcement Learning (Buesing et al [2]) - This is a state-of-the-art model for state space models.\\n\\nOur evaluation protocol considers a total of 7 environments and we show that using the consistency constraint leads to better generative models (in terms of log likelihood) and more powerful policy (average return) for all the cases. All the experiments are averaged over 3 random seeds and are plotted with 1 standard deviation interval.\\n\\nOur key contribution is the proposal of using the consistency loss which helps to learn more powerful policy and better dynamics model (as demonstrated over different tasks) while being very easy to integrate with existing model-based RL approaches. While our method is relatively simple, we are not aware of prior works that show something similar, and we believe such a simple baseline would be useful for anyone who\\u2019s working on model-based RL. Further, our experiment demonstrates the effectiveness of the approach. If we are mistaken regarding prior works, please let us know!\\n\\nWe would like to emphasize that our work presents an extensive comparative evaluation, and we believe that these results should be taken into consideration in evaluating our work. We compare multiple approaches across more than 5 simulated tasks to the state of the art methods. Hopefully, our clarifications are convincing in terms of explaining why the evaluation is fair and rigorous, and we would, of course, be happy to modify it as needed. But at a higher level, the fact that such simple model-based approaches work better than somewhat complex model-free approaches actually is the point of the paper to me.\\n\\nContinued...\"}", "{\"title\": \"Response to Reviewer 1 - Part 3\", \"comment\": \"\\\"no serious effort to compare and contrast this idea with other efforts at model-based RL. \\u2026 it is unclear what model-based RL algorithm is being used, and how it was modified to support the consistency constraint.\\\"\\n\\nWe have updated the paper to address the concern about the baselines and the proposed approach not being described in detail. 
Section 3.1 describes the different loss components and how the consistency constraint can be applied in the general. Section 3.2 and 3.3 describes the baselines and how these baselines were modified to support the consistency constraint for the observation space and the state space models respectively.\", \"we_summarize_the_baselines_and_the_evaluation_protocol_here\": \"Our evaluation protocol consists of 7 environments (Ant, Half Cheetah, Humanoid etc) and both observation space and state space models. For the observation space model, we use the \\u201cHybrid model-based and model-free (Mb-Mf) algorithm\\u201d (Nagabandi et al [1]) which is a very strong baseline and for the state space models, we use the \\u201cLearning and Querying Fast Generative Models for Reinforcement Learning\\u201d (Buesing et al [2]) as the baseline. This model is a state-of-the-art model for state space models. \\n\\nWe focus on evaluating the agent for both dynamics models (in terms of imagination log likelihood, figure 4) and policy (in terms of average episodic returns and loss, figure 2, 3, 5). We show that adding the consistency constraint to the baseline models results in improvements to both the dynamics models and the policy for all the environments that we consider. All the experiments are averaged over 3 random seeds and are plotted with 1 standard deviation interval.\\n\\n=====================================\\n\\n\\n\\\"It is not clear how novel the central idea is.\\\"\\n\\nOur key contribution is the proposal of using the consistency loss which helps to learn more powerful policy and better dynamics model (as demonstrated over different tasks) while being very easy to integrate with existing model-based RL approaches. While our method is relatively simple, we are not aware of prior works that show something similar, and we believe such a simple baseline would be useful for anyone who\\u2019s working on model-based RL. Further, our experiment demonstrates the effectiveness of the approach. If we are mistaken regarding prior works, please let us know!\\n\\nWe would like to emphasize that our work presents an extensive comparative evaluation, and we believe that these results should be taken into consideration in evaluating our work. We compare multiple approaches across more than 5 simulated tasks to the state of the art methods. Hopefully, our clarifications are convincing in terms of explaining why the evaluation is fair and rigorous, and we would, of course, be happy to modify it as needed. But at a higher level, the fact that such simple model-based approaches work better than somewhat complex model-free approaches actually is the point of the paper to me.\\n\\n\\n=====================================\\n\\n\\n\\\"Statistical significance of improvements is unclear\\\"\\n\\nOur evaluation protocol (section 5) consists of 7 environments (Ant, Half Cheetah, Humanoid etc) and both observation space and state space models. Solving Half Cheetah environment, when observations are in the pixel space (images), is very challenging as useful information like velocity is not available. \\n\\n\\nFor the observation space model (section 5.1), we use the \\u201cHybrid model-based and model-free (Mb-Mf) algorithm\\u201d (Nagabandi et al [1]). 
It is a very strong baseline where the authors proposed to use a trained, deep neural network based dynamics model to initialize a model-free learning agent to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. For the state space models (section 5.2), we use the \\u201cLearning and Querying Fast Generative Models for Reinforcement Learning\\u201d (Buesing et al [2]) as the baseline. This is a state-of-the-art model for state space models. \\n\\nWe focus on evaluating the agent for both dynamics models (in terms of imagination log likelihood) (figure 4) and policy (in terms of average episodic returns) (figure 2, 3, 5). We show that adding the consistency constraint to the baseline models results in improvements to both the dynamics models and the policy for all the environments that we consider. All the experiments are averaged over 3 random seeds and are plotted with 1 standard deviation interval.\\n\\nWe would appreciate it if the reviewer could take another look at our changes and additional results, and let us know if the reviewer has a request for additional changes that would alleviate the reviewer's concerns.\\n\\n[1]: Neural Network Dynamics for Model-Based Deep Reinforcement Learning with Model-Free Fine-Tuning - https://arxiv.org/pdf/1708.02596.pdf\\n\\n[2]: Learning and Querying Fast Generative Models for Reinforcement Learning - https://arxiv.org/pdf/1802.03006.pdf\"}", "{\"title\": \"Response to Reviewer 1 - Part 2\", \"comment\": \"What are the empirical results?\\n=====================================\\n\\nOur evaluation protocol consists of 7 environments (Ant, Half Cheetah, Humanoid etc) and both observation space and state space models. Solving Half Cheetah environment, when observations are in the pixel space (images), is very challenging as useful information like velocity is not available. \\n\\nFor the observation space model, we use the \\u201cHybrid model-based and model-free (Mb-Mf) algorithm\\u201d (Nagabandi et al [1]). It is a strong baseline where the authors proposed to use a trained, deep neural network based dynamics model to initialize a model-free learning agent to combine the sample efficiency of model-based approaches with the high task-specific performance of model-free methods. For the state space models, we use the \\u201cLearning and Querying Fast Generative Models for Reinforcement Learning\\u201d (Buesing et al [2]) as the baseline. This is a state-of-the-art model for state space models. As shown by our experiments (section 5), by having this consistency constraint we outperform both these baselines.\\n\\nWe focus on evaluating the agent for both dynamics models (in terms of imagination log likelihood, figure 4) and policy (in terms of average episodic returns and loss, figure 2, 3, 5). We show that adding the consistency constraint to the baseline models results in improvements to both the dynamics models and the policy for all the environments that we consider. All the experiments are averaged over 3 random seeds and are plotted with 1 standard deviation interval.\\n\\n==================================================================================================\", \"we_now_refer_to_the_specific_aspects_of_the_reviews\": \"\\\"This paper presents a simple auxiliary loss term for model-based RL that attempts to enforce consistency between observed experience trajectories and hallucinated rollouts. 
Simple experiments demonstrate that the constraint slightly improves performance.\\\"\\n\\nThanks for the very useful feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address all of your concerns about the work, though we would appreciate any additional comments or feedback that you might have.\\n\\n\\n=====================================\\n\\n\\\"Why is different from just learning a model based on k-step prediction?\\\" \\n\\\"Unclear how this is significantly different from other related work (such as imagination agents)\\\"\\n\\nOur approach is different from just learning a k-step prediction model as in our case, the agent\\u2019s behavior (i.e the agent's policy) is dependent on its internal model too. In the standard case, the policy is optimized only using the RL gradient i.e maximizing expected reward and the state transition pairs (collected as the agent acts in the environment) become the supervising dataset for learning the model, and hence the policy is not affected when the model is being updated and there is no feedback from the model learning process to the policy. Hence, the data used for training the model is coming from a policy which is trained independently of how well the model performs on the collected trajectories. So the process of learning the model has no control over what kind of data is produced for its training.\\n\\nWe propose to train both the policy and the model during the open loop. Hence the k-step predictions are used for training both the model and the policy simultaneously. Training the policy on both the RL loss and the consistency loss provides a mechanism where learning a model, can itself change the policy thus leading to a much closer interplay between the dynamics model and the policy. We show that this relatively simple approach leads to much better performance when compared to very strong baselines for both observation space and state space models for all the 7 environments we considered.\\n\\nWe have updated the paper (section 3) to describe the baselines and how to modifiy the baselines for applying the consistency constraint for both the observation space models (section 3.2) and the state space models (section 3.3). \\nExperiments (Section 5) shows the improvement that result by the use of consistency constaint for both observation space models (figure 2, 3) and state space models (figure 4, 5)\\n\\nContinued\"}", "{\"title\": \"Response to Reviewer 1 - Part 1\", \"comment\": \"We thank the reviewer for the feedback. We have conducted additional experiments to address the concerns raised about the evaluation, and we clarify specific points below. We believe that these additions address all of your concerns about the work, though we would appreciate any additional comments or feedback that you might have. We acknowledge that the paper was certainly lacking polish and accept that this may have made the paper difficult to read in places. We have uploaded a revised version in which we have revised the problem statement and writing as per the reviewer's suggestions. 
We briefly summarize the key idea of the paper and then address the specific concerns.\n\nWhat is the idea?\n=====================================\n\nOur goal is to provide a mechanism for the agent to learn a better dynamics model as well as a more powerful policy by ensuring the consistency in their predictions (such that predictions from the model are grounded in the real environment). \n\nThis mechanism enables the agent to have a direct \u201cinteraction\u201d between the agent\u2019s policy and its dynamics model. This interaction is different from the standard approaches in reinforcement learning where the agent uses the policy to sample trajectories over which the agent is trained, and then uses these sampled trajectories to learn the dynamics model. In those cases, there is no (direct) mechanism for the dynamics model to affect the policy, and hence there is no \u201cdirect interaction\u201d between the policy and the dynamics model. In our case, both the policy and the model are trained jointly while making sure that the predictions from the dynamics model are consistent with the observations from the environment. This provides a mechanism where learning a model can itself change the policy (thus \u201cinteracting\u201d with the environment) instead of just training on the data coming from a policy which is trained independently of how well the model performs on the collected trajectories.\n\nA practical instantiation of this idea is the consistency loss, where we ensure consistency between the predictions from the dynamics model and the actual observations from the environment. This simple baseline works surprisingly well compared to state-of-the-art methods (as demonstrated by our experiments), and others have not tried it before. Applying the consistency constraint means we have two learning signals for the policy: one from the reinforcement learning loss (i.e., maximizing return) and the other from the consistency constraint. We show that adding the proposed consistency constraint helps the agent learn a better dynamics model as well as a better policy for both observation space models and state space models. We compare against strong baselines: \n\nHybrid model-based and model-free (Mb-Mf) algorithm (Nagabandi et al. [1]) \nLearning and Querying Fast Generative Models for Reinforcement Learning (Buesing et al. [2]) - This is a state-of-the-art model for state space models.\n\nOur evaluation protocol considers a total of 7 environments and we show that using the consistency constraint leads to better generative models (in terms of log likelihood) and a more powerful policy (average return) in all cases. All the experiments are averaged over 3 random seeds and are plotted with a 1 standard deviation interval.\n\nOur key contribution is the proposal of using the consistency loss, which helps to learn a more powerful policy and a better dynamics model (as demonstrated over different tasks) while being very easy to integrate with existing model-based RL approaches. While our method is relatively simple, we are not aware of prior works that show something similar, and we believe such a simple baseline would be useful for anyone who\u2019s working on model-based RL. Further, our experiments demonstrate the effectiveness of the approach. If we are mistaken regarding prior works, please let us know!\n\nWe would like to emphasize that our work presents an extensive comparative evaluation, and we believe that these results should be taken into consideration in evaluating our work. 
We compare multiple approaches across more than 5 simulated tasks to the state of the art methods. Hopefully, our clarifications are convincing in terms of explaining why the evaluation is fair and rigorous, and we would, of course, be happy to modify it as needed. But at a higher level, the fact that such simple model-based approaches work better than somewhat complex model-free approaches actually is the point of the paper to me.\\n\\nContinued...\"}", "{\"title\": \"A small idea, with poor comparisons\", \"review\": \"\", \"summary\": \"This paper presents a simple auxiliary loss term for model-based RL that attempts to enforce consistency between observed experience trajectories and hallucinated rollouts. Simple experiments demonstrate that the constraint slightly improves performance.\", \"quality\": \"While I think the idea of a consistency constraint is probably reasonable, I consider this a poorly executed exploration of the idea. The paper makes no serious effort to compare and contrast this idea with other efforts at model-based RL. The most glaring omission is comparison to very old ideas (such as dyna) and new ideas (such as imagination agents), both of which they cite.\", \"clarity\": \"The paper is reasonably clear, although there are some holes. For example, in the experimental section, it is unclear what model-based RL algorithm is being used, and how it was modified to support the consistency constraint. (I did not read the appendix).\", \"originality\": \"It is not clear how novel the central idea is.\", \"significance\": \"This idea is not significant.\", \"pros\": [\"A simple, straightforward idea\", \"A good topic - progress in model-based RL is always welcome\"], \"cons\": [\"Unclear how this is significantly different from other related work (such as imagination agents)\", \"Experimental setup is poorly executed.\", \"Statistical significance of improvements is unclear\", \"No attempt to relate to any other method in the field\", \"No explanation of what algorithms are being used\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}", "{\"title\": \"This paper presents the idea of learning models of the environment while interacting with it, in the form of performing the usual model-based or model-free reinforcement learning, while enforcing consistency between the real world (observations) and the model. The presented motivation is that agents, like people, can benefit through not just observing the environment and learning from it, but also by experimenting---trying actions specifically for learning\", \"review\": \"---Below is based on the original paper---\\nThis paper presents a framework that allows the agent to learn from its observations, but never follows through on the motivation of experimentation---taking actions mainly for the purpose of learning an improved dynamics model. All of their experiments merely take actions that are best according to the usual model-based or model-free methods, and show that their consistency constraint allows them to learn a better dynamics model, which is not at all surprising. They do not even allow for the type of experimentation that has been done in reinforcement learning for as long as it has been around, which is to allow exploration by artificially increasing the reward for the first few times that each state is visited. 
That would be a good baseline against which to compare their method.\", \"overall\": \"\", \"pros\": \"1. Clear writing\\n2. Good motivation description.\", \"cons\": \"1. Failed to connect presented work with the motivation.\\n2. No comparison against known methods for exploration.\\n\\n\\n----Below is based on the revision---\\n\\nThanks to the reviewers for making the paper much clearer. I have no particular issues on the items that are in the paper. However, subsections 7.2.1 and 7.2.2 are missing.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"8) \\\"generative models\\\" reminds of things like a VAE or GAN; however, I believe the authors mean \\\"dynamics models\\\" instead\\n9) \\\"by interaction\\\" is a bit vague as to what the contribution is (aren't policies and dynamic models in general trained by interacting with the environment?); the main idea of the paper is the consistency constraint\\nAbstract / Introduction:\\n10) The authors talk about humans carrying out \\\"experiments via interaction\\\" to help uncover \\\"true causal relationships\\\". This idea is not brought up again in the methods section, and I don't see evidence that with the proposed approach, the policy does targeted experiments to uncover causal relationships. It is not clear to me why this is the intuition that motivates the consistency constraint. \\n11) As the authors state in the introduction, the hope of model-based RL is better sample complexity. This is usually achieved by using the model in some way, for example by planning several steps ahead when choosing the current action. Could the authors comment on where they would place their proposed method - how does it address sample complexity?\\n12) In the introduction, the authors discuss the problem of compounding errors. These must be a problem in the proposed method as well, especially as k grows. Could the authors comment on that? How come that the performance is so good for k=20?\\n13) The authors write that in most model-based approaches, the dynamics model is \\\"learned with supervised learning techniques, i.e., just by observing the data\\\" and not via interaction. There's two things I don't understand: (1) in the existing model-based approaches the authors refer to, the policy also interacts with the world to get the data to do supervised learning - what exactly is the difference? (2) The auxiliary loss \\\"which explicitly seeks to match the generative behaviour to the observed behaviour\\\" is just a supervised learning loss as well, so how is this different?\\n\\nFor me, it would help the readability and understanding of the paper if some concepts were introduced more formally.\\n14) In Section 2, it would help me to see a formal definition of the MDP and what exactly is optimised. The authors write \\\"optimise a reward signal\\\" and \\\"maximise its expected reward\\\", however I believe it should be the expected cumulative reward (i.e., return). \\n15) The loss function for the dynamics model is not explicitly stated. From the text I assume that it is the mean squared error for the per-step loss, and a GAN loss for the trajectory-wise loss.\\n16) Could the authors explicitly state what the overall loss function is, and how the RL and supervised objective are combined? 
Is the dynamics model f trained only on the supervised loss, and the policy pi only on the RL loss?\\n17) In 2.3 the variable z_t is not formally introduced. What does it represent?\\n\\n------------------------\\nOther Comments\\n------------------------\\n18) I find it problematic to use words such as \\\"hallucination\\\" and \\\"imagination\\\" when talking about learning algorithms. I would much prefer to see formal/factual language (like saying that the dynamics model is used to do make predictions / do planning, rather than that the agent is hallucinating). \\n\\n-- edit (19.11.) ---\\n- updated score to 5\\n- corrected summary\", \"review\": \"-------------\\nSummary\\n-------------\\nThe authors propose to train a policy while concurrently learning a dynamics model. In particular, the policy is updated using both the RL loss (rewards from the environment) and the \\\"consistency constraint\\\", which the authors introduce. This consistency constraint is a supervised learning signal, which compares trajectories in the environment with trajectories in the imagined world (produced with the dynamics model). \\n\\n---------------------\\nMain Feedback\\n---------------------\\nI feel like there might be some interesting ideas in this work, and the results suggest that this approach performs well. However, I had a difficult time understanding how exactly the method works, and what its advantages are. These are my main questions:\\n\\n1) At the beginning of Section 4 the authors write \\\"The learning agent has two pathways for improving its behaviour: (...) (ii) the open loop path, where it imagines taking actions and hallucinates the state transitions that could happen\\\". Do you actually do this? This is not mentioned in anywhere. And as far as I understand, the reward function is not learned - hence there will be no training signal in the open loop path. Does the reward signal always come from the true environment?\\n2) Is the dynamics model used for anything else than action-selection during training? Planning? If not, I don't really understand the results and why this works at all (k=20 being better than k=5, for example).\\n3) Is the dynamics model pre-trained in any way? I find it surprising that the model-free method and the proposed method perform similar at the beginning (Figure 3). If the agent chooses its actions based on the state that is predicted by the dynamics model, this should throw off the learning of the policy at the beginning (when the dynamics model hasn't learned anything sensible yet).\\n\\n-----------------------\\nOther Questions\\n-----------------------\\n4) How exactly does training without the consistency constraint look? Is this the same as k=1?\\n5) Could the authors comment on the evaluation protocol in the experimental section? Are the results averages over multiple runs? If so, it would help to see confidence intervals to make a fair assessment of the results. \\n6) For the swimmer in Figure 2, the two lines (with consistency and without consistency) start at different initial returns, why is that so? 
If the same architecture and seed were used, shouldn't this be the same (or can you just not see it in the graph)?\n\n---------\nClarity\n---------\nThe title and introduction initially gave me a slightly wrong impression of what the paper is about, and several things were not followed up on later in the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
HJeOMhA5K7
Human-Guided Column Networks: Augmenting Deep Learning with Advice
[ "Mayukh Das", "Yang Yu", "Devendra Singh Dhami", "Gautam Kunapuli", "Sriraam Natarajan" ]
While deep models are extremely successful in several applications, especially with low-level representations, sparse, noisy samples and structured domains (with multiple objects and interactions) remain among their open challenges. Column Networks, a deep architecture, can succinctly capture such domain structure and interactions, but may still be prone to sub-optimal learning from sparse and noisy samples. Inspired by the success of human-advice guided learning in AI, especially in data-scarce domains, we propose Knowledge-augmented Column Networks that leverage human advice/knowledge for better learning with noisy/sparse samples. Our experiments demonstrate how our approach leads to either superior overall performance or faster convergence.
[ "Knowledge-guided learning", "Human advice", "Column Networks", "Knowledge-based relational deep model", "Collective classification" ]
https://openreview.net/pdf?id=HJeOMhA5K7
https://openreview.net/forum?id=HJeOMhA5K7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Bkxlc8JHxN", "HJg7IAvk07", "SylpM0wk07", "Syxn0aP1CX", "rygtp4OAnX", "BylRe8iFhQ", "SkgQ8ZIdh7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1545037447868, 1542581835363, 1542581780767, 1542581715646, 1541469377499, 1541154294438, 1541067083503 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1277/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1277/Authors" ], [ "ICLR.cc/2019/Conference/Paper1277/Authors" ], [ "ICLR.cc/2019/Conference/Paper1277/Authors" ], [ "ICLR.cc/2019/Conference/Paper1277/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1277/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1277/AnonReviewer3" ] ], "structured_content_str": [ "{\"metareview\": \"The paper considers the task of incorporating knowledge expressed as rules into column networks. The reviewers acknowledge the need for such techniques, like the flexibility of the proposed approach, and appreciate the improvements to convergence speed and accuracy afforded by the proposed work.\", \"the_reviewers_and_the_ac_note_the_following_as_the_primary_concerns_of_the_paper\": \"(1) The primary concerned raised by the reviewers was that the evaluation is focused on whether KCLN can beat one with the knowledge, instead of measuring the efficacy of incorporating the knowledge itself (e.g. by comparing with other forms of incorporating knowledge, or by varying the quality of the rules that were introduced), (2) Even otherwise, the empirical results are not significant, offering slight improvements over the vanilla CLN (reviewer 1), (3) There are concerns that the rule-based gates are introduced but gradients are only computed on the final layer, which might lead to instability, and (4) There are a number of issues in the presentation, where the space is used on redundant information and description of datasets, instead of focusing on the proposed model.\\n\\nThe comments by the authors address some of these concerns, in particular, clarifying that the forms of knowledge/rules are not limited, however, they focused on simple rules in the paper. However, the primary concerns in the evaluation still remain: (1) it seems to focus on comparing against Vanilla-CLN, instead of focusing on the source of the knowledge, or on the efficacy in incorporating it (see earlier work on examples of how to evaluate these), and (2) the results are not considerably better with the proposed work, making the reviewers doubtful about the significance of the proposed work.\\n\\nThe reviewers agree that the paper is not ready for publication.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Poor results, Evaluating knowledge or method to incorporate it?\"}", "{\"title\": \"Response to review comment #3\", \"comment\": \"We understand the reviewer's perspective about stronger motivation. However, justification behind our usage of Column networks is as follows - (1) Human advice/knowledge/guidance has been proven to extremely effective in cases of systematic noise in data (Odom et al. 2018). Systematic noise can be attributed to two primary aspects, (i) noise in observation (data recording) and (ii) non-representative sample due to sparsity. 
(2) The challenges are specifically crucial in structured domains with objects and relations (3) There is limited work w.r.t deep models for modeling in structured/relational noisy domains (4) Column networks have proven to be a successful approach (beating other baselines) for principled modeling in such domains especially for collective classification (5) We show that with noisy sparse samples in structured domains CLNs can perform sub-optimally and human guidance/advice/knowledge can help alleviate that challenge.\\n\\nWe will definitely expand our discussion on quality of advice (in the final version if accepted) highlighting how the model is to be made robust to low quality advice. Note, however, quality of advice is an open challenge in human-in-the-loop research. Also note that average performance of K-CLN never goes substantially below original CLN irrespective of the advice or sample size indicating implicit stability. \\n\\nWe will certainly survey and compare with the literature mentioned here. Thank you for the pointers.\"}", "{\"title\": \"Response to review comment #2\", \"comment\": \"The exponential gate is not theoretically unbounded. The advice gradient itself is bounded between -1 <= \\\\nabla <= +1. So exp(\\\\nabla) is bounded between (0.367, 2.718). However, we do appreciate the reviewer's concern that the effect of the multiplicative gates depend on the original value of the nodes on which the gates are applied. Higher the value of a node more is the effect (if exp(\\\\nabla)>1). While theoretically this needs validation, in practice, as our experiments show clearly, the effects are indeed quite stable. There has not been any scenario where K-CLN performs worse than original CLN. For a much longer paper, we will certainly explore the theoretical underpinnings.\\n\\nIn existing research on human-guided learning, human advice has been shown to be most effective in case of systematic noise. Such noise can be introduced due to 2 primary reasons (1) Noise in observation (data recording) (2) Noise due to lack of representative samples. Challenge 2 is especially crucial in structured data (objects and interactions), since most interactions in the world are false or unobserved. Our experimental question is directed at these 2 primary challenges. We believe our baseline, original CLN, is justified since they have already been proven to be a superior approach for collective classification in structured data. Our objective is to show that CLNs augmented with richer human inputs than mere labels in noisy sparse samples in structured domains can exhibit superior performance to only label-based learning. We will provide more examples of advice rules used in the experiments in the final version if accepted. Quality of advice is a concern in any human-in-the-loop system. However, reasonably good advice helps alleviate the challenges due to noise and sparsity. \\n\\nWe will shorten the data description. Our objective was to highlight the level of complexity of the different tasks and the impact of sparsity (in context of the dimensionality of the data). The only figures in our work are the original CLN framework and its modifications due to K-CLN. Rest are plots of 2 different types of experiments w.r.t. each data set/domain. While it is possible to condense all data sets into one set of plots, that will immensely hamper the clarity of the presented results. Hence all plots are kept distinct and separate. 
It will be great if you could guide us to the figures that you deem unnecessary.\"}", "{\"title\": \"Response to review comment #1\", \"comment\": \"Preference rules has been proven to be a powerful representation strategy for encoding human knowledge (Odom et al., Frontiers in Robotics and AI, 2018). Note how preference rules encoded in First Order Horn clause logic allows for compactness and generalization over multiple instances and objects. In this work, example advice rules shown are comparatively simple for clarity and brevity. Our formulation has no such limitation. Humans can essentially provide arbitrarily complex clauses (i.e. arbitrary number of literals in the body of the horn clauses).\"}", "{\"title\": \"A modified column network\", \"review\": \"This work proposes a variant of the column network based on the injection of human guidance. The method does not make major changes to the network structure, but by modifying the calculations in the network. Human knowledge is embodied in a defined rule formula. The method is flexible and different entities correspond to different rules. However, the form of knowledge is limited and simple. Experiments have shown that the convergence speed and results are improved, but not significant.\\n\\nMinor\\uff1a\", \"example_2\": \"\\\"A\\\" -> \\\"AI\\\".\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Interesting problem, not well executed idea\", \"review\": \"The paper introduces a method to incorporate human advises to deep learning by extending Column Network (CLN) - a powerful graph neural network for collective classification.\\n\\nThe problem is quite interesting and is practical in real-world. However, I have some concerns:\\n\\nCorrectness\\n==========\\nIn the main modification to the CLN in Eq (3), the rule-based gates are introduced to every hidden layer. However, the functional gradient with respect to the \\\"advise gradient\\\" is only computed for the last layer (at the end of Section 3). The exponential gates may cause some instability issue due to its unboundedness. \\n\\nEvaluation\\n=========\\nThe questions in experiment (Can K-CLNs learn efficiently/effectively with noisy sparse samples?) do not support the problem statement about human advice incorporation. Thus, all they did in the experiment is trying to compete against CLN.\\n\\nI would believe that the improvement (which I trust is real) depends critically on the quality and quantity of the human-crafted rules, much in the same way that feature engineering plays the major roles in the classical structured output prediction. Hence more details about the rules set used in experiments should be given.\\n\\nPresentation\\n===========\\nIn the experiment part, the authors need to describe their model configuration. The presentation of datasets consumes a lot of space and can be reduced (e.g., using a table). This paper displays many unnecessary figures that consumes a lot of space. The paper provides some unnecessary text highlights in bold.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"If your human guidance is totally wrong, how your model handle such extreme cases?\", \"review\": \"This paper formulates a new method called human-guided column networks to handle sparse and noisy samples. 
Their main idea is to introduce human knowledge to guide the previous column network for robust training.\", \"pros\": \"1. The authors find a fresh direction for learning with noisy samples. The human advice can be viewed as previledged information.\\n\\n2. The authors perform numerical experiments to demonstrate the efficacy of their framework. And their experimental result support their previous claims.\", \"cons\": \"We have three questions in the following.\\n\\n1. Motivation: The authors are encouraged to re-write their paper with more motivated storyline. The current version is okay but not very exciting for idea selling. For example, human guidance should be your selling point, and you may not restrict your general method into ColumnNet, which will limit the practical usage.\\n\\n2. Related works: In deep learning with noisy labels, there are three main directions, including small-loss trick [1], estimating noise transition matrix [2,3], and explicit and implicit regularization [4]. I would appreciate if the authors can survey and compare more baselines in their paper instead of listing some basic ones.\\n\\n3. Experiment: \\n3.1 Baselines: For noisy labels, the authors should add MentorNet [1] as a baseline https://github.com/google/mentornet From my own experience, this baseline is very strong.\\n3.2 Datasets: For datasets, I think the author should first compare their methods on symmetric and aysmmetric noisy data. Besides, the authors are encouraged to conduct 1 NLP dataset.\\n\\nBy the way, if your human guidance is totally wrong, how your model handle such extreme cases? Could you please discuss this important point in your paper?\", \"references\": \"[1] L. Jiang, Z. Zhou, T. Leung, L. Li, and L. Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In ICML, 2018.\\n\\n[2] G. Patrini, A. Rozza, A. Menon, R. Nock, and L. Qu. Making deep neural networks robust to label noise: A loss correction approach. In CVPR, 2017.\\n\\n[3] J. Goldberger and E. Ben-Reuven. Training deep neural-networks using a noise adaptation layer. In ICLR, 2017.\\n\\n[4] T. Miyato, S. Maeda, M. Koyama, and S. Ishii. Virtual adversarial training: A regularization method for supervised and semi-supervised learning. ICLR, 2016.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}" ] }
SylPMnR9Ym
Learning what you can do before doing anything
[ "Oleh Rybkin", "Karl Pertsch", "Konstantinos G. Derpanis", "Kostas Daniilidis", "Andrew Jaegle" ]
Intelligent agents can learn to represent the action spaces of other agents simply by observing them act. Such representations help agents quickly learn to predict the effects of their own actions on the environment and to plan complex action sequences. In this work, we address the problem of learning an agent’s action space purely from visual observation. We use stochastic video prediction to learn a latent variable that captures the scene's dynamics while being minimally sensitive to the scene's static content. We introduce a loss term that encourages the network to capture the composability of visual sequences and show that it leads to representations that disentangle the structure of actions. We call the full model with composable action representations Composable Learned Action Space Predictor (CLASP). We show the applicability of our method to synthetic settings and its potential to capture action spaces in complex, realistic visual settings. When used in a semi-supervised setting, our learned representations perform comparably to existing fully supervised methods on tasks such as action-conditioned video prediction and planning in the learned action space, while requiring orders of magnitude fewer action labels. Project website: https://daniilidis-group.github.io/learned_action_spaces
[ "unsupervised learning", "vision", "motion", "action space", "video prediction", "variational models" ]
https://openreview.net/pdf?id=SylPMnR9Ym
https://openreview.net/forum?id=SylPMnR9Ym
ICLR.cc/2019/Conference
2019
{ "note_id": [ "B1l6OfrZxE", "Sye9RIDoJV", "HkxGHBNokN", "S1lxT5Z9Am", "HJlMwVQt67", "B1gUrEmKTQ", "HJxc1L8GTQ", "rJetUCDx67", "B1lyw7K02X", "SJeMWONjnQ" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review" ], "note_created": [ 1544798837267, 1544414930392, 1544402234083, 1543277240411, 1542169690448, 1542169661641, 1541723618316, 1541598801110, 1541473110788, 1541257210449 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1276/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1276/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1276/Authors" ], [ "ICLR.cc/2019/Conference/Paper1276/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1276/Authors" ], [ "ICLR.cc/2019/Conference/Paper1276/Authors" ], [ "ICLR.cc/2019/Conference/Paper1276/Authors" ], [ "ICLR.cc/2019/Conference/Paper1276/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1276/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1276/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The reviewers had some concerns regarding clarity and evaluation but in general liked various aspects of the paper. The authors did a good job of addressing the reviewers' concerns so acceptance is recommended.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"boderline - but leaning to accept\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Authors have addressed my concerns and I have increased my ratings to an accept.\", \"i_would_encourage_the_authors_to_include_further_experiments\": \"(a) Add dynamically changing background. Since the background motion will not be structured, it will be interesting to see if the model picks up on unstructured motion. \\n\\n(b) Please include details of 72 arm size parameters chosen. I would like to know if the parameters in the testing scenario are are an interpolation of these 72 parameters or is it required to extrapolate. \\n\\n(c) Please include the results of using additive composition in the appendix. Its a good baseline result to report.\"}", "{\"title\": \"Authors' response summary: improved intro and abstract, added experiments on new environment variations, added technical clarifications\", \"comment\": [\"We thank the reviewers for their helpful comments and suggestions: we have incorporated all proposed changes and performed all suggested experiments. We believe that the quality of the paper has been improved and our contribution is clearer. All reviewers agreed that the paper is well-motivated, well-written, and proposes an interesting and novel model. Reviewers 1 and 3 noted that the idea is relevant to the community and the experimental results are strong.\", \"Generalization experiments. Reviewers 2 and 3 requested experiments probing the ability of the method to generalize to domains with different visual characteristics. To address this, we conducted experiments where the passively observed input domain contained the same degrees of freedom as the target domain but differed in terms of (i) the appearance of the background or (ii) the size and visual properties of the robot. Our experiments showed that the model trained on different source domains successfully generalizes to a target domain that has never been observed at training time. These results are shown in Figure 5 (right) and Figures 8 to 10 (in the appendix). 
Reviewer 2\\u2019s main concern was whether the model can be applied when the source domain of passive learning and target domain for which action-labelled data is available are not the same: these experiments suggest that the model can be applied in such settings.\", \"Clarifications. The reviewers further requested a number of clarifications to the paper. We have addressed all of the reviewers\\u2019 points in individual responses. We made the following changes to the manuscript in response to reviewer comments:\", \"Revised the abstract to more clearly emphasize the problem addressed in the paper and to make it clearer that our primary technical contribution is the introduction of the composition loss.\", \"Added a sentence to paragraph 1 of the introduction reiterating the problem we are addressing (learning an agent\\u2019s action space from unlabeled visual observations) and connecting it to the motivating example of an infant learning to walk.\", \"Added sentences to the third paragraph of the introduction connecting the title of the paper, the problem at hand, how our method specifically addresses this problem, and what our method can be used for.\", \"Added a clarification to Section 3.3, line 4: \\u201cfrom the training data\\u201d to clarify that the bijection is also trained on training data (not test data).\", \"To incorporate the generalization experiments requested by the reviewers (see above):\", \"-- Added a final paragraph to Section 4.2 describing the experiments.\", \"-- Expanded Table 2 to include these results, and moved it from the appendix into Figures 5 (right).\", \"-- Added Figures 8, 9, and 10 (Appendix E) to display the results.\", \"Expanded table 1 to include data efficiency experiments suggested by reviewer 1.\", \"Added a sentence to footnote 2 to more clearly explain the choice to use a binary indicator to distinguish representations of single actions from representations of action sequences.\", \"Added a sentence to Appendix B mentioning that wider and deeper MLPs did not change performance in our experiments.\", \"Enlarged Figures 11, 13 and 14 (7, 9, and 10 in the original submission) by reducing the length of the displayed sequences. The full sequences are still available on the anonymized website associated with the paper.\"]}", "{\"title\": \"Good Response and Revision\", \"comment\": [\"\\\"We are not aware of prior work that uses composability as a loss, or that uses composition of this form in the context of video prediction...\\\"\", \"I agree with you, the comment was meant to convey that the overall concept of composing latent variables is not novel and has been applied in various fields. But it is indeed not used for video prediction and learning action dynamics tasks.\", \"While individual components in the paper (IB, latent composition, etc.) are not novel in itself, the combination of them for the task of video prediction and learning action dynamics will be interesting to the community. Further, the revision done by the authors has certainly improved the quality of the paper and responses are satisfactory. Hence, I support this paper for acceptance.\"]}", "{\"title\": \"Clarified model choices, added experiment with background changes,\", \"comment\": \"We thank the reviewer for the helpful comments and suggestions. We have updated the manuscript to address all points raised. 
In particular, we thank the reviewer for suggesting the experiment probing the model's invariance to background changes: we show that our model is robust to a variety of static backgrounds and believe this result strengthens the paper's argument. We address the reviewer's questions and comments as follows:\\n\\n1. Q: Justification and exposition of novelty in the paper; prior work on composition?\", \"a\": \"For better visibility, we have truncated the length of the sequences displayed in Figures 7, 9, and 10 of the appendix (now numbered Figures 11, 13 and 14 in the revised manuscript) and enlarged the displayed images. The full sequences are still included on the website. We have also added a comment to the manuscript to encourage readers to view figures on a screen, where the sequences are easier to examine in detail.\\n\\nIf you still feel that there are issues with the manuscript that would prevent you from raising your score, please point these out so that we can address them.\"}", "{\"title\": \"Added experiment with different source and target agents, clarified technical questions\", \"comment\": \"We thank the reviewer for the helpful comments and suggestions. We have updated the manuscript to address all points raised. In particular, the suggested experiment shows the applicability of our method to cases where training and test agents differ visually but share the same action space. We address the reviewer's questions and comments as follows:\\n\\n1. Q: Can the proposed model be applied on different source and target domains?\", \"a\": \"To train the compositional loss, we require a way of decoding trajectory representations (together with a sequence of previous frames) into images. One way to do this would be to include an additional LSTM that learns to operate on trajectory representations. Instead, we opted to let the existing LSTM predict with either the action or trajectory representation by using the binary indicator to distinguish between these two operation modes. This choice allows us to share weights between the networks conducting the two tasks. Importantly, when doing visual servoing, we have to always set the value to indicate z, as it would otherwise plan with trajectories instead of actions. We added a comment further clarifying this to footnote 2.\\n\\nIf you still feel that there are issues with the manuscript that would prevent you from raising your score, please point these out so that we can address them.\"}", "{\"title\": \"Added details of training setup, improved abstract and introduction\", \"comment\": \"We thank the reviewer for the helpful comments. We have updated the manuscript to address the points raised, and we believe the paper is now clearer. We address the reviewer's questions and comments as follows:\\n\\n1. Q: Which sequences are used to train the bijective mapping between the learned action representation and the true action labels?\", \"a\": \"We have updated the abstract and the 3rd and the 4rd paragraph of the introduction to include more detail about the problem addressed in the paper and the method we use to approach it. We believe the problem addressed in our paper - learning a representation that disentangles an agent's action space from unlabeled video - can now be much more clearly understood from the abstract and introduction. 
We have also changed these sections to emphasize that we add a new loss term that enforces composability and that our model learns a mapping between the latent action representation and the true action labels from a small number of action-annotated sequences. We emphasize that this approach enables (i) action-conditioned video prediction and (ii) trajectory planning for servoing in latent space, without requiring a large, action-annotated dataset.\\n\\nIf you still feel that there are issues with the manuscript that would prevent you from raising your score, please point these out so that we can address them.\\n\\n\\n------\\n\\nWe updated this response to reflect the numbering of figures in the revised manuscript\"}", "{\"title\": \"Interesting paper with some clarifications required\", \"review\": [\"The paper proposes a Variational IB based approach to learn action representations directly from video of actions being taken. The basic goal of the work is to disentangle the dynamic parts of the scene in the video from the static parts and only capture those dynamic parts in the representation. Further, a key property of these learned representations is that they contain compositional structure of actions so as to their cumulative effects. The outcome of such a method is better efficiency of the subsequent learning methods while requiring lesser amount of action label videos.\", \"To achieve this, the authors start with a previously proposed video prediction model that uses variational information bottleneck to learn minimal action representation. Next, this model is augmented with composability module where in latent samples across frames are composed into a single trajectory and is repeated in a iterative fashion and again the minimal representation for the composed action space is learned using IB based objective. The two objectives are learned in a joint fashion. Finally, they use a simple MLP based bijection to learn the correspondence between actions and their latent representations. Experiments are done on two datasets - reacher and BAIR - and evaluation is reported for action conditioned video prediction and visual servoing.\", \"The paper is well written and provides adequate details to understand the flow of the material.\", \"The idea of learning disentangled representation is being adopted in many domains and hence this contribution is timely and very interesting to the community.\", \"The overall motivation of the paper to emulate how humans learn by looking at other's action is very well taken. Being able to learn from only videos is a nice property especially when the actual real world environment is not accessible.\", \"High Performance in terms of error and number of required action labeled videos demonstrates the effectiveness of the approach.\", \"However, there are some concerns with the overall novelty and some technical details in the paper:\", \"It seems the key contribution of the paper is to add the L_comp part to the already available L_pred part in Denton and Fergus 2018. The trick use to compose the latent variables is not novel and considering that variational IB is also available, the paper lacks overall novelty. A better justification and exposition of novelty in this paper is required.\", \"Two simple MLP layers for bijection seems very adhoc. I am not able to see why such a simple bijection would be able to map the disentangled composed action representations to the actual actions. 
It seems it is working from the experiments but a better analysis is required on how such a bijection is learned and if there are any specific properties of such bijection such that it will work only in some setting. Will the use of better network improve the learned bijection?\", \"While videos are available, Figures in the paper itself are highly unreadable. I understand the small figures in main paper but it should not be an issue to use full pages for the figure on appendix.\", \"Finally, it looks like one can learn the composed actions (Right + UP) representation while being not sensitive to static environment. If that is the case, does it work on the environment where except the dynamic part everything else is completely different? For example, it would be interesting to see if a model is trained where the only change in environment is a robot's hand moving in 4 direction while everything else remaining same. Now would this work, if the background scene is completely changed while keeping the same robot arm?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"I like the idea, but concerns with experimental evaluation\", \"review\": \"The authors propose a way to learn models that predict what will happen next in scenarios where action-labels are not available in abundance. The agents extend previous work by proposing a compositional latent-variable model. Results are shown on BAIR (robot pushing objects) and simulated reacher datasets. The results indicate that it is possible to learn a bijective mapping between the latent variables inferred from a pair of images and the action executed between the observations of the two images.\\n\\nI like the proposed model and the fact that it is possible to learn a bijection between the latent variables and actions is cute. I have following questions/comments: \\n\\n(a) The authors have to learn a predictive model from passive data (i.e. without having access to actions). Such models are useful, if for example an agent can observe other agents or internet videos and learn from them. In such scenarios, while it would be possible to learn \\u201ca\\u201d model using the proposed method, it is unclear how the bijective mapping would be learnt, which would enable the agent to actually use the model to perform a task that it is provided with. \\nIn the current setup, the source domain of passive learning and target domain from which action-labelled data is available are the same. In such setups, the scarcity of action-labelled data is not a real concern. When an agent acts, it trivially has access to its own actions. So collecting observation, action trajectories is a completely self-supervised process without requiring any external supervision. \\n\\n(b) How is the model of Agrawal 2016 used for visual serving? Does it used the forward model in the feature space of the inverse model or something else? \\n\\n(c) In the current method, a neural network is used for composition. How much worse would a model perform if we simply compose by adding the feature vectors instead of using a neural network. It seems like a reasonable baseline to me. Also, how critical is including binary indicator for v/z in the compositional model? \\n\\nOverall, I like the technical contribution of the paper. The authors have a very nice introduction on how humans learn from passive data. 
However, the experiments make a critical assumption that domains that are used for passive and action-based learning are exactly the same. In such scenarios, action-labeled data is abundantly available. I would love to see some results and/or hear the authors thoughts on how their method can be used to learn by observing a different agent/domain and transfer the model to act in the agent\\u2019s current domain. I am inclined to vote for accepting the paper if authors provide a convincing rebuttal.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Good paper, needs some clarification in the experimental section and the introduction\", \"review\": \"PAPER SUMMARY\\n-------------\\nThis paper proposes an approach to video prediction which autonomously finds an action space encoding differences between subsequent frames. This approach can be used for action-conditioned video prediction and visual servoing. \\nUnlike related work, the proposed method is initially trained on video sequences without ground-truth actions. A representation for the action at each time step is inferred in an unsupervised manner. This is achieved by imposing that the representation of this action be as small as possible, while also being composable, i.e. that that several actions can be composed to predict several frames ahead.\\nOnce such a representation is found, a bijective mapping to ground truth actions can be found using only few action-annotated samples. Therefore the proposed approach needs much less annotated data than approaches which directly learn a prediction model using actions and images as inputs.\\n\\nThe approach is evaluated on action-conditioned video prediction and visual servoing. The paper shows that the learned action-space is meaningful in the sense that applying the same action in different initial condition indeed changes the scenes in the same manner, as one would intuitively expect. Furthermore, the paper shows that the approach achieves state of the art results on a action-conditioned video prediction dataset and on a visual servoing task.\\n\\nPOSITIVE POINTS\\n---------------\\nThe idea of inferring the action space from unlabelled videos is very interesting and relevant.\\n\\nThe paper is well written.\\n\\nThe experimental results are very interesting, it is impressive that the proposed approach manages to learn meaningful actions in an unsupervised manner (see e.g. Figure 3).\\n\\nNEGATIVE POINTS\\n---------------\\nIt is not exactly clear to me how the model is trained for the quantitative evaluation. On which sequences is the bijective mapping between inferred actions and true actions learned? Is is a subset of the training set? If yes, how many sequences are used? Or is this mapping directly learned on the test set? This, however, would be an unfair comparison in my opinion, since then the actions would be optimized in order to correctly predict on the tested sequences.\\n\\nThe abstract and introduction are too vague and general. It only becomes clear in the technical and experimental section what problem is addressed in this paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}" ] }
Hyewf3AqYX
A Frank-Wolfe Framework for Efficient and Effective Adversarial Attacks
[ "Jinghui Chen", "Jinfeng Yi", "Quanquan Gu" ]
Depending on how much information an adversary can access, adversarial attacks can be classified as white-box attacks and black-box attacks. In both cases, optimization-based attack algorithms can achieve relatively low distortions and high attack success rates. However, they usually suffer from poor time and query complexities, thereby limiting their practical usefulness. In this work, we focus on the problem of developing efficient and effective optimization-based adversarial attack algorithms. In particular, we propose a novel adversarial attack framework for both white-box and black-box settings based on the non-convex Frank-Wolfe algorithm. We show in theory that the proposed attack algorithms are efficient with an $O(1/\sqrt{T})$ convergence rate. The empirical results of attacking the Inception V3 and ResNet V2 models on the ImageNet dataset also verify the efficiency and effectiveness of the proposed algorithms. More specifically, our proposed algorithms attain the highest attack success rate in both white-box and black-box attacks among all baselines, and are more time- and query-efficient than the state-of-the-art.
[ "efficient", "framework", "effective adversarial attacks", "attack", "attack algorithms", "algorithms", "much information", "adversary", "access", "adversarial attacks" ]
https://openreview.net/pdf?id=Hyewf3AqYX
https://openreview.net/forum?id=Hyewf3AqYX
ICLR.cc/2019/Conference
2019
{ "note_id": [ "Hyg0TAqNlE", "HyeaQklByN", "rygDlUyHkV", "BJli6m1bJE", "rJgGFXJWk4", "H1etHF6ykE", "ryge7ywCAQ", "HygnCHRpR7", "HJlvq-NoRQ", "S1lJZSCqCQ", "HyxXl7Z5RX", "ByxEFM-c0Q", "BJxWfM-c0X", "H1lALLCRhX", "SyllU9Iq37", "rJefA1rv3X", "S1xm6X3TcX", "B1edUenZcX" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_comment", "comment" ], "note_created": [ 1545019078035, 1543991076946, 1543988718590, 1543726018964, 1543725945779, 1543653697230, 1543560983815, 1543525844139, 1543352718820, 1543329014855, 1543275243014, 1543275132394, 1543275017233, 1541494358406, 1541200456043, 1540997065918, 1539322810937, 1538535503561 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1275/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "ICLR.cc/2019/Conference/Paper1275/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "ICLR.cc/2019/Conference/Paper1275/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "ICLR.cc/2019/Conference/Paper1275/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "ICLR.cc/2019/Conference/Paper1275/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1275/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1275/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1275/Authors" ], [ "(anonymous)" ] ], "structured_content_str": [ "{\"metareview\": \"While there was some support for the ideas presented, the majority of the reviewers did not think the submission is ready for publication at ICLR. Significant concerns were raised about clarity of the exposition.\", \"confidence\": \"4: The area chair is confident but not absolutely certain\", \"recommendation\": \"Reject\", \"title\": \"Not ready for publication at ICLR\"}", "{\"title\": \"Follow up with Reviewer 2\", \"comment\": \"Dear Reviewer 2, thank you for reading our response and increasing your score. At the end of your updated review, you said that \\u201cthe authors answered some of my questions\\u201d. We apologize if we missed any of your questions. We wonder could you let us know what are the questions that we did not answer? We will answer them asap. Thank you!\\n\\nBest,\\nAuthors\"}", "{\"title\": \"Follow up with Reviewer 3\", \"comment\": \"Dear Reviewer 3, we have posted our latest response on OpenReview to address your question. We\\u2019d like to follow up with you to see whether you had a chance to look it over? We hope our responses could clear your concerns. Thank you!\\n\\nBest,\\nAuthors\"}", "{\"title\": \"Response to Reviewer3's new comments\", \"comment\": \"Thank you for reading our response and revision.\\n\\nYes, we performed grid search to tune the step size for all baselines algorithm to ensure a fair comparison. We are sure the chosen step size is not at the boundary. In detail, we first tune the step size by searching the grid {0.0001, 0.001, 0.01, 0.1, 1,10}, and find the best step size over this grid. It turns out that the boundary points 0.0001 and 10 were never selected to be the best step size. 
Given the selected best step size in the previous grid (e.g., 0.1), we evenly split the intervals before and after this step size (e.g., 0.01-0.1, and 0.1-1) into 10 subintervals respectively to construct a refined grid (e.g., {0.01,0.02,..., 0.09, 0.1, 0.2, \\u2026,0.9,1}), and fine-tune the step size by searching this refined grid. We will elaborate on this in the final version of our paper.\"}", "{\"title\": \"[UPDATE] The experimental results of standard PGD baseline\", \"comment\": \"[UPDATE] We have performed the additional baseline \\u201cstandard PGD\\u201d for white-box attack. All tests show that \\u201cstandard PGD\\u201d can also achieve a 100% attack success rate in white-box attack, for both the $L_2$ norm and $L_\\\\infty$ norm cases, and for both the Inception and ResNet networks.\\n\\nThe new tables containing the \\u201cstandard PGD\\u201d results can be found in the following anonymous link: https://www.dropbox.com/sh/p2uj97yixobnku2/AAC0o9dJDtt-cvQg7pcjQXila?dl=0\\n\\nIt can be seen that \\u201cstandard PGD\\u201d has similar performance to \\u201cadversarial PGD\\u201d in our experiments. It does not change any of our conclusions. We will update these tables in the final version of our paper (we are not allowed to revise our submission at this point). Please let us know if you have any other suggestions.\"}", "{\"title\": \"How was the PGD step size selected?\", \"comment\": \"I would like to thank the authors for their detailed reply.\\n\\nDid you also tune the step size of PGD for the experiments in Figure 1, for instance? To make a fair comparison, both FW and PGD should be tuned appropriately. In particular, the authors should ensure that the chosen step size is not at the boundary of the explored range of hyperparameters.\"}", "{\"title\": \"Response to Reviewer2's new comments\", \"comment\": \"Thank you for reading our response and revision.\\n\\n2. We apologize that we misunderstood your original comment on PGD. Thank you for your clarification. Now we know where the misunderstanding comes from: we are actually talking about different \\u201cPGD\\u201d methods. The fact you describe is indeed correct, and you are referring to the standard PGD method in constrained optimization; let us call it \\u201cstandard PGD\\u201d. However, what we refer to is the \\u201cadversarial PGD\\u201d proposed by [Madry et al.]. The only difference is that in \\u201cadversarial PGD\\u201d, the gradient should first be \\u201cnormalized\\u201d before one can use it. In the $L_\\\\infty$ norm case, the \\u201cnormalized\\u201d gradient is the sign of the gradient, and in the $L_2$ norm case, the \\u201cnormalized\\u201d gradient is the gradient normalized by its $L_2$ norm. In this way, \\u201cadversarial PGD\\u201d is the same as FGSM^k in the $L_\\\\infty$ norm case, as pointed out in [Madry et al.]. \\n\\nWe will add \\u201cstandard PGD\\u201d as a baseline in our white-box attack experiments to address your question. Running the experiments could take a while and we will try to get the new results posted here within the next 48 hours. We hope this clears up your concern.\\n\\n4. You are right that we do not have access to f(x^*). However, in [Lacoste-julien] (Option II), their step size also depends on C > C_f, where C_f, by definition, is also intractable to compute. In fact, both our theorem and [Lacoste-julien]\\u2019s theorem rely on the smoothness assumption, which is also not satisfied in real deep learning tasks due to the use of ReLU activations and max-pooling layers. 
Therefore, in our experiments, we did not use the \\u201ctheoretical\\u201d step size as suggested by the theorem. Actually, we use $\\\\sqrt{c/T}$ as the step size and performed a grid search to tune c. We found that a constant step size is more than enough to give us good performance in white-box attacks. \\n\\nIn the end, we are not arguing whose step size is better; rather, we just want to provide a theorem describing the convergence behavior of the Frank-Wolfe white-box attack algorithm, for completeness.\"}", "{\"title\": \"Comments on authors' response\", \"comment\": \"2. FGSM^k is **not** projected gradient descent on the objective you are optimizing, because the signal considered is the **sign** of the gradient instead of the gradient itself.\\n\\nConsidering the sign of the gradient yields a method that may diverge for any step-size $\\\\eta$:\\nTake for instance $\\\\mathcal{L}(\\\\theta) = \\\\theta^2$, $\\\\theta_0 = \\\\eta/2$, and the constraint to clip in $[-1,1]$. Then, since $\\\\mathrm{sign}(\\\\nabla \\\\mathcal{L}(\\\\theta)) = \\\\mathrm{sign}(\\\\theta)$, we have that $\\\\theta_1 = \\\\theta_0 - \\\\eta\\\\, \\\\mathrm{sign}(\\\\nabla \\\\mathcal{L}(\\\\theta_0)) = \\\\eta/2 - \\\\eta = -\\\\eta/2$ and then $\\\\theta_2 = \\\\eta/2$, providing a sequence that oscillates between $\\\\eta/2$ and $-\\\\eta/2$.\\n\\n4. The step-size you are proposing might be challenging to compute (since you may not have access to f(x^*)), whereas the step-size proposed by Lacoste-julien (Option II) depends on the same constants as yours and on the gap function, which is very cheap to compute (since you have already computed $d_t$).\"}", "{\"title\": \"Thank you\", \"comment\": \"Thank you very much for reading our response and revision, and increasing your score.\"}", "{\"title\": \"Thank you for the revision\", \"comment\": \"With the updates, and mostly with the new experiments where not all attacks reach 100% accuracy, I think this paper should be accepted.\"}", "{\"title\": \"Response to Reviewer1\", \"comment\": \"1. Thank you for your suggestion, we have addressed this in the revision as you suggested. This is indeed a good motivation.\\n\\n2. Thank you for your suggestion. We have further added experiments using an even stronger query limit (previously 500000, now 50000) for the additional experiments on the ResNet V2 model in the supplemental material. (We did not choose to use a smaller epsilon because, first, we already used a quite standard choice of epsilon, and second, as you said, going for extremely small distortion does not really mean anything in the adversarial context.) As you can see, in this even harder setting our proposed algorithm still maintains a performance lead over the other baselines. Also, we have revised the statement in the abstract as you suggested.\\n\\n3. You are right, it is a quite weak attack and we have removed it from the table (we just mention it in the text).\\n\\n4. Yes, we could just remove the distortion column in our results. We chose to include it because we do not want others to think that we actually trade a lot of distortion (to make the problem easy) for a speedup in runtime. \\n\\n5. We have added further empirical evidence to show this in the revision. From an intuitive perspective, using lambda>1 is essentially a \\u201crelax and tighten\\u201d step: first relax the constraint to make the problem easier, and then tighten it back to the real constraint. The \\u201crelax and tighten\\u201d idea has been widely used in constrained optimization, and we adapted this idea to the Frank-Wolfe algorithm to make it even faster.\\n\\n6. 
As mentioned in an anonymous comment, there is one paper which proposed a similar but different zeroth-order non-convex FW algorithm as well as convergence rate analysis ahead of us. We were not aware of this paper when we prepared our ICLR submission, since it was posted only ten days before the ICLR deadline. We have cited this paper and modify our claim correspondingly in the revision. Nevertheless, it does not affect the main contribution of our paper: a novel Frank-Wolfe based adversarial attack framework for both white-box and black-box attacks, which is much more efficient than existing white-box/black-box adversarial attacks in both query complexity and runtime. \\n\\n7. Thank you for your suggestion and we have explicitly written down the update for a better comparison in the supplemental materials (Section A) in the revision. \\n\\n8. It means it is invariant to an affine transformation of the constraint set, i.e., if we choose to re-parameterize of the constraint with some linear or affine transformation M, the original and the new optimization problem will looks the same to the Frank-Wolfe algorithm. Please refer to [Jaggi (2013)], [Lacoste-Julien (2016)] for more details.\\n\\n9. In white-box setting, we perform grid search / binary search for parameter epsilon (or c for CW) for all algorithms. This will lead to better/ closer distortions for all methods. In black-box setting, we care more about query complexity and thus did not perform the grid search/binary search steps to avoid extra queries in finding the best epsilon/lambda.\\n\\n10. Thank you for pointing these typos out, we have addressed it in the revision.\"}", "{\"title\": \"Response to Reviewer2\", \"comment\": \"Thank you for your helpful comments!\\n\\n1. You are right that Frank-Wolfe would be advantageous over PGD when the constraints are more complicated and adversarial attack may not be such a case. Yet it is also well-known that Frank-Wolfe has quite different optimization behavior compared with PGD even though they have the same order of convergence rate. Therefore, it is interesting and important to examine the performance of Frank-Wolfe algorithm for adversarial attack, given the fact that PGD has been shown to be a very effective for adversarial attack. In fact, from our work, we found that Frank-Wolfe based methods are generally more efficient than PGD method.\\nFrom another perspective, Frank-Wolfe solves the problem by calling Linear Minimization Oracle (LMO) over the constraint set at each iteration. This LMO shares the same intuition as FGSM, which also tries to linearize the neural network loss function to find the adversarial examples. In this sense, it is a quite natural attempt to revisit FGSM under the Frank-Wolfe framework. \\n\\n2. We are sorry maybe we didn\\u2019t explain it very well in the paper, but this is a misunderstanding. We indeed compared our method with generalized I-FGSM/BIM, which is exactly the same as PGD (In [Madry et al.] they also mentioned this in Section 2.1 and they refer it as FGSM^k). We decide to just call it PGD in the revision to avoid confusion. We hope this remove your concern.\\n\\n3. Indeed, theoretically we can only prove for $\\\\lambda$ = 1 case. Yet we found that larger \\\\lambda brings us more speedup. We have added further empirical evidence (performance comparison of our method with different \\\\lambda in Figure 1 in the revised paper) to justify it. 
Intuitively speaking, using lambda>1 is essentially a \\u201crelax and tighten\\u201d step by first relax the constraint to make the problem easier, and then tighten it back to the real constraint. The \\u201crelax and tighten\\u201d idea has been widely used in constrained optimization, and in this paper we adapted this idea into Frank-Wolfe algorithm to make it even faster.\\n\\n4. [Lacoste-Julien 2016] considered the general first-order Frank-Wolfe algorithm for nonconvex smooth optimization. The result of Theorem 4.3 in our paper is almost the same as the result in (Lacoste-Julien 2016), except that the choices the learning rate in these two papers are different though. We have made it clear in the revision. \\n\\n5. We have added detailed hyperparameter settings for CW and EAD in the revision in the supplemental materials.\\n\\n6. While Theorem 4.7 is new and may be of independent interest in the optimization community, it is not the main contribution in this paper. We would like to emphasize that our major contribution in this paper is a Frank-Wolfe based algorithm for adversarial attack, which is more efficient than PGD based adversarial attack algorithm and other baselines.\\n\\n7. Sorry about the confusion. $y$ should be replace by $y_{tar}$. It is a simplified notation we mentioned in the proof in the appendix. Thank you for your suggestion and we have revised the notation $f(x,y_{tar})$ to $f(x)$.\\n\\n8. Thank you for pointing out several typos. We have fixed all of them in the revision.\"}", "{\"title\": \"Response to Reviewer3\", \"comment\": \"Thank you for your constructive comments!\\n\\n1. We fully understand your concern and we have added detailed description in the supplemental materials to show the hyperparameters we use for baseline methods in the revision. \\n\\n2. We would like to argue that constrained optimization based formulation itself is not designed to achieve better distortion compared with regularized optimization based formulation. So there is no surprise that our algorithm\\u2019s distortion is not the best. On the other hand, as mentioned by the other reviewer, distortion is usually not that essential in adversarial attacks as long as it is maintained in a reasonable range. We could actually remove the distortion column, instead, we chose to include it just to show that we did not trade a lot of distortions (to make problem much easier) and thus gains speedup. From our experimental results, you can see that our proposed method achieves significant speedup while keeping the distortion around the same level as the best baselines. \\n\\n3. Thank you for your suggestion. We have further added success rate vs queries plot (for black-box case) and loss vs iterations plot (for white-box case) in the revision. As you can see, in terms of number of iterations / queries, our method still outperforms the other baselines by a large margin.\\n\\n4. Thank you for your suggestion. We have further added experiments on ResNet V2 model and averaging over 500 correctly classified pictures to strengthen our result. Again, this additional experiments show that our method outperforms the other baselines for both white-box attack and black-box attack.\\n\\n5. Regarding poor time complexity in practice, first, as you mentioned, adversarial training currently is quite slow due to the slow adversarial attack steps. Better time complexity of adversarial attack could significantly speed up adversarial training algorithms. 
Second, it is worth noting that the running time complexity of adversarial attack also highly depends on the input size. For example, if you attack a CIFAR-10 classifier or an MNIST classifier, it could take only seconds per attack even for the slowest algorithm since the input size is only 32 by 32 (or 28 by 28). However, if you attack a ImageNet classifier or even higher dimensional data classifier, it could take significantly longer time (minutes). That is why reducing the runtime of adversarial attack is very important.\\n\\n6. We apologize for this confusion. Regarding \\u201cgradient-based\\u201d / \\u201coptimization based\\u201d methods and coordinate-wise black-box attacks, we have changed our description to avoid confusion. Thank you for pointing it out.\"}", "{\"title\": \"Promising results but some questions about experiments\", \"review\": \"The paper investigates the Frank-Wolfe (FW) algorithm for constructing adversarial examples both in a white-box and black-box setting. The authors provide both a theoretical analysis (convergence to a stationary point) and experiments for an InceptionV3 network on ImageNet. The main claim is that the proposed algorithm can construct adversarial examples faster than various baselines (PGD, I-FGSM, CW, etc.), and from fewer queries in a black-box setting.\\n\\nThe FW algorithm is a classical method in optimization, but (to the best of my knowledge) has not yet been evaluated yet for constructing adversarial examples. Hence it is a natural question to understand whether FW performs significantly better than current algorithms in this context. Indeed, the authors find that FW is 6x - 20x faster for constructing white-box adversarial examples than a range of relevant baseline, which is a significant speed-up. However, there are several points about the experiments that are unclear to me:\\n\\n- It is well known that the running times of optimization algorithms are highly dependent on various hyperparameters such as the step size. But the authors do not seem to describe how they chose the hyperparameters for the baselines algorithms. Hence it is unclear how large the running time improvement is compared to a well-tuned baseline.\\n\\n- Other algorithms in the comparison achieve a better distortion (smaller perturbation). Since finding an adversarial with smaller perturbation is a harder problem, it is unclear how the algorithms compare for finding adversarial examples with similar distortion. Instead of reporting a single time-vs-distortion data point, the authors could show the full trade-off curve.\\n\\n- The authors only provide running times, not the number of iterations. In principle all the algorithms should have a similar bottleneck in each iteration (computing a gradient for the input image), but it would be good to verify this with an iteration count vs success rate (or distortion) plot. This would also allow the authors to compare their theoretical iteration bound with experimental data.\\n\\nIn addition to these three main points, the authors could strengthen their results by providing experiments on another dataset (e.g., CIFAR-10) or model architecture (e.g., a ResNet), and by averaging over a larger number of test data points (currently 200).\\n\\nOverall, I find the paper a promising contribution. 
But until the authors provide a more thorough experimental evaluation, I hesitate to recommend acceptance.\", \"additional_comments\": \"\", \"the_introduction_contains_a_few_statements_that_may_paint_an_incomplete_or_confusing_picture_of_the_current_literature_in_adversarial_attacks_on_neural_networks\": [\"The abstract claims that the poor time complexity of adversarial attacks limits their practical usefulness. However, the running time of attacks is typically measured in seconds and should not be the limiting element in real-world attacks on deep learning systems. I am not aware of a setting where the running time of an attack is the main computational bottleneck (outside adversarial training).\", \"The introduction distinguishes between \\\"gradient-based methods\\\" and \\\"optimization-based methods\\\". This distinction is potentially confusing to a reader since the gradient-based methods can be seen as optimization algorithms, and the optimization-based methods rely on gradients.\", \"The introduction claims that black-box attacks need to estimate gradients coordinate-wise. However, this is already not the case in some of the prior work that uses random directions for estimating gradients (e.g., the cited paper by Ilyas et al.).\", \"I encourage the authors to clarify these points in an updated version of their paper.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"A method to produce adversarial attack using a Frank-Wolfe inspired method\", \"review\": \"This paper provides a method to produce adversarial attacks using a Frank-Wolfe inspired method.\", \"i_have_some_concerns_about_the_motivation_of_this_method\": [\"What are the motivations to use Frank-Wolfe? Usually this algorithm is used when the constraints are too complicated to have a tractable projection (which is not the case for the L_2 and L_\\infty balls) or when one wants to have sparse iterates, which does not seem to be the case here.\", \"Consequently, why did you not compare with the simple projected gradient method? (BIM) is not equivalent to the projected gradient method since the direction chosen is the sign of the gradient and not the gradient itself (the first iteration is actually equivalent because we start at the center of the box, but afterwards both methods are no longer equivalent).\", \"There is no motivation for the use of $\\lambda >1$, either practical or theoretical, since the results are only proven for $\\lambda =1$ whereas the experiments are done with \\lambda = 5, 20 or 30.\", \"What is the difference between the result of Theorem 4.3 and the result from (Lacoste-Julien 2016)?\", \"Depending on the answer to these questions I'm planning to move up or down my grade.\", \"In the experiments there are no details on how you set the hyperparameters of CW and EAD. They use a penalized formulation instead of a constrained one. Consequently the regularization hyperparameters have to be set differently.\", \"The only new result seems to be Theorem 4.7, which is a natural extension of Theorem 4.3 to zeroth-order methods.\"], \"comment\": [\"In the whole paper there is a $y$ which is not defined. I guess it is the $y_{tar}$ fixed in the problem formulation Sec 3.2. I don't see why there is a need to work on any $y$. 
If it is true, then assumption 4.5 does not make any sense since $y = y_{tar}$ (we just need to note $\\|\\nabla f(O,y_{tar})\\| = C_g$), and some notation could be simplified by setting, for instance, $f(x,y_{tar}) = f(x)$.\", \"In Theorem 4.7, an expectation on g(x_a) is missing\"], \"minor_comments\": \"- Sec 3.1: theta_i -> x_i\\n- Sec 3.3: the argmin is a set, so it should be LMO $\\in$ argmin.\\n\\n===== After rebuttal =====\\nThe authors answered some of my questions but I still think it is a borderline submission.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Interesting paper, a bit problematic experimental set-up\", \"review\": \"The paper proposes using the Frank-Wolfe algorithm for fast adversarial attacks. They prove upper bounds on the Frank-Wolfe gap and show experimentally that they can attack successfully much faster than other algorithms. In general I find the paper novel (to the best of my somewhat limited knowledge), interesting and well written. However, I find the white-box experiments lacking, as almost every method has a 100% success rate. Fixing this would significantly improve the paper.\", \"main_remarks\": [\"Need more motivation for faster white-box attacks. One good motivation, for example, is adversarial training, e.g. Kurakin et al. 2017 \\u2018ADVERSARIAL MACHINE LEARNING AT SCALE\\u2019, which would benefit greatly from faster attacks.\", \"White-box attack experiments don\\u2019t really prove the strength of the method, even with imagenet experiments, as almost all attacks get a 100% success rate, making it hard to compare. Need to compare in more challenging settings where the success rate is meaningful, e.g. a smaller epsilon or a more robust NN using some defence. Also, stating the 100% success rate in the abstract is a bit misleading for this reason.\", \"Something is a bit weird with the FGM results. While it is a weaker attack, a 0%/100% disparity between it and every other attack seems odd.\", \"The average distortion metric (that\\u2019s unfavourable to your method anyway) doesn\\u2019t really mean anything, as the constrained optimization has no incentive to find a value smaller than the constraint.\", \"Regarding lambda>1, you write that \\u201cwe argue this modification makes our algorithm more general, and gives rise to better attack results\\u201d. I did not see any theoretical or empirical support for this in the paper. Also, it seems quite strange to me that making the FW overshoot and then projecting back would be beneficial. Some intuitive explanation of why this should help and/or an empirical comparison would be a great addition.\", \"The authors claim that this is the first zeroth-order non-convex FW convergence rate; I am not familiar enough with the field to evaluate this claim and its significance.\", \"Alg. 1 for T>1 is very similar to I-FGM, but also \\u2018pulls\\u2019 x_t towards x_orig. It would be very useful to write the update more explicitly and compare and contrast these 2 very similar updates. This gives nice insight into why this should intuitively work better.\", \"I am not sure what the authors mean by \\u201cthe Frank-Wolfe gap is affine invariant\\u201d. 
If we scale the input space by a, the gap should be scaled by a^2 - how/why is it invariant?\", \"I am not sure what you mean in 5.4 \\u201cwe omit all grid search/ binary search steps\\u2026\\u201d\"], \"minor_remarks\": [\"In remark 4.8 in the end option I and II are inverted by mistake\", \"In 5.1, imagenet results are normally top-5 error rate not top-1 acc, would be better to report that more familiar number.\", \"In the proof you wrongfully use the term telescope sum twice, there is nothing telescopic about the sum it is just bound by the max value times the length.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Reply to \\u201crelated paper on zeroth-order non-convex Frank-Wolfe type algorithm\\u201d\", \"comment\": \"We were not aware of this paper when we prepared our ICLR submission, since it was posted only ten days before the ICLR deadline. We will cite this paper and modify our claim correspondingly in a later version. Nevertheless, it does not affect the main contribution of our paper: a novel Frank-Wolfe based adversarial attack framework for both white-box and black-box attacks, which is much more efficient than the existing white-box/black-box adversarial attacks in both query complexity and runtime.\\n\\nIn addition, we also would like to emphasize that the zeroth-order Frank-Wolfe algorithm we proposed is different from the algorithm proposed in the paper you pointed out. More specific, they use one-side finite difference zeroth-order gradient estimator with standard Gaussian sensing vectors, while we use the two-side symmetric finite difference zeroth-order gradient estimator with sensing vectors sampled from the unit sphere.\"}", "{\"comment\": \"https://arxiv.org/abs/1809.06474 - this paper in nips already provides rates for zeroth-order non-convex Frank-Wolfe type algorithm.\", \"title\": \"related paper\"}" ] }
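Much of the exchange above turns on the difference between three update rules: the "standard PGD" step (move along the raw gradient, then project), the "adversarial PGD" / FGSM^k step of Madry et al. (move along the sign of the gradient, then clip), and a Frank-Wolfe step built from the linear minimization oracle over the perturbation ball. The sketch below is an illustrative reconstruction for the $L_\infty$ case only, written as maximization of an attack loss; it is not the paper's Algorithm 1 (the $\lambda$-relaxation, step-size schedules, and the zeroth-order black-box estimator discussed above are omitted).

```python
import numpy as np

def standard_pgd_step(x, grad, x_orig, eps, lr):
    # Projected gradient ascent: raw-gradient step, then projection onto
    # the L_inf ball of radius eps around the original input.
    return np.clip(x + lr * grad, x_orig - eps, x_orig + eps)

def adversarial_pgd_step(x, grad, x_orig, eps, lr):
    # "Adversarial" PGD / iterated FGSM (FGSM^k): the gradient is replaced
    # by its sign before the step, then the iterate is clipped back.
    return np.clip(x + lr * np.sign(grad), x_orig - eps, x_orig + eps)

def frank_wolfe_step(x, grad, x_orig, eps, gamma):
    # Frank-Wolfe: the linear maximization oracle over the L_inf ball returns
    # the vertex v = x_orig + eps * sign(grad); the iterate then moves a
    # fraction gamma of the way towards that vertex.
    v = x_orig + eps * np.sign(grad)
    return x + gamma * (v - x)
```

For gamma < 1 the Frank-Wolfe iterate is a convex combination of the current iterate and a point in the ball around x_orig, which is the "pull towards x_orig" that one of the reviews contrasts with I-FGM.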
rylDfnCqF7
Lagging Inference Networks and Posterior Collapse in Variational Autoencoders
[ "Junxian He", "Daniel Spokoyny", "Graham Neubig", "Taylor Berg-Kirkpatrick" ]
The variational autoencoder (VAE) is a popular combination of deep latent variable model and accompanying variational learning technique. By using a neural inference network to approximate the model's posterior on latent variables, VAEs efficiently parameterize a lower bound on marginal data likelihood that can be optimized directly via gradient methods. In practice, however, VAE training often results in a degenerate local optimum known as "posterior collapse" where the model learns to ignore the latent variable and the approximate posterior mimics the prior. In this paper, we investigate posterior collapse from the perspective of training dynamics. We find that during the initial stages of training the inference network fails to approximate the model's true posterior, which is a moving target. As a result, the model is encouraged to ignore the latent encoding and posterior collapse occurs. Based on this observation, we propose an extremely simple modification to VAE training to reduce inference lag: depending on the model's current mutual information between latent variable and observation, we aggressively optimize the inference network before performing each model update. Despite introducing neither new model components nor significant complexity over basic VAE, our approach is able to avoid the problem of collapse that has plagued a large amount of previous work. Empirically, our approach outperforms strong autoregressive baselines on text and image benchmarks in terms of held-out likelihood, and is competitive with more complex techniques for avoiding collapse while being substantially faster.
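A minimal sketch of the training schedule the abstract describes: before each generator update, the inference network alone is optimized until its objective stops improving, and this aggressive phase is switched off once the estimated mutual information between observation and latent code stops increasing, after which training falls back to standard VAE updates. The callables, the per-batch placement of the mutual-information check, and the patience-based convergence test below are illustrative placeholders, not the authors' implementation.

```python
def lagging_aware_vae_training(data_loader, encoder_elbo_step, decoder_elbo_step,
                               estimate_mi, max_inner_steps=100, patience=10):
    """One pass over the data with the 'aggressive' schedule sketched above.

    encoder_elbo_step / decoder_elbo_step: placeholder callables that take a
    batch, perform one SGD step on the ELBO w.r.t. the inference network /
    generator parameters, and return the batch ELBO.  estimate_mi: placeholder
    callable returning an estimate of I(x; z) on held-out data.
    """
    prev_mi, aggressive = float("-inf"), True
    for batch in data_loader:
        if aggressive:
            # Aggressive phase: update only the inference network until the
            # ELBO stops improving for `patience` consecutive steps.
            best, stalled = float("-inf"), 0
            for _ in range(max_inner_steps):
                elbo = encoder_elbo_step(batch)
                if elbo > best:
                    best, stalled = elbo, 0
                else:
                    stalled += 1
                    if stalled >= patience:
                        break
            decoder_elbo_step(batch)      # single generator update
            mi = estimate_mi()
            if mi <= prev_mi:             # mutual information stopped climbing:
                aggressive = False        # revert to basic VAE training
            prev_mi = mi
        else:
            encoder_elbo_step(batch)      # standard VAE-style updates
            decoder_elbo_step(batch)
```

How often the mutual-information criterion is evaluated and how inner-loop convergence is detected are implementation choices; the reviews and author responses further down this record discuss both.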
[ "variational autoencoders", "posterior collapse", "generative models" ]
https://openreview.net/pdf?id=rylDfnCqF7
https://openreview.net/forum?id=rylDfnCqF7
ICLR.cc/2019/Conference
2019
{ "note_id": [ "HylFc7YoX4", "H1esYBzqxN", "BketaFLkeV", "ryx9hh-q0Q", "rkeg4nWqCX", "Byxwn5bcRQ", "BJgoUVFd0m", "B1x-gqLOAX", "BkljeVLuCm", "rke9-oK5TX", "Skxyv5K5pQ", "BJeOQ5YqTX", "BJgDt_Yq67", "HkxE8ueK6m", "HklrzgzJTQ", "HJlErUNt3Q", "HylWdCAO2X", "Byl9JZ0u27", "rJgNLcqA97", "r1l0hU_pcQ", "SJe0psx69Q" ], "note_type": [ "official_comment", "comment", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "comment", "official_review", "official_review", "official_review", "comment", "official_comment", "comment" ], "note_created": [ 1548616592682, 1545377155147, 1544673729124, 1543277745982, 1543277607892, 1543277230998, 1543177299267, 1543166441227, 1543164915206, 1542261505976, 1542261334564, 1542261280247, 1542260863014, 1542158412405, 1541509132580, 1541125691531, 1541103209172, 1541099746486, 1539381836481, 1539307190376, 1539275718408 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "~Jaemin_Cho1" ], [ "ICLR.cc/2019/Conference/Paper1274/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "ICLR.cc/2019/Conference/Paper1274/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1274/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "~Artem_Sobolev1" ], [ "ICLR.cc/2019/Conference/Paper1274/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1274/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1274/AnonReviewer1" ], [ "~Yoon_Kim1" ], [ "ICLR.cc/2019/Conference/Paper1274/Authors" ], [ "~Yoon_Kim1" ] ], "structured_content_str": [ "{\"title\": \"Reply\", \"comment\": \"Thanks for pointing out this related work, we have cited it appropriately.\"}", "{\"comment\": \"Nice work & Congrats for acceptance!\", \"i_would_like_to_point_our_work_which_also_mitigates_posterior_collapse\": \")\", \"https\": \"//arxiv.org/abs/1804.03424\", \"title\": \"Missing reference for posterior collapse mitigation\"}", "{\"metareview\": \"This paper introduces a method that aims to solve the problem of 'posterior collapse' in variational autoencoders (VAEs). The problem of posterior collapse is well-documented in the VAE literature, and various solutions have been proposed. Existing proposed solutions, however, aim to solve the problem by either changing the objective function (e.g. beta-VAE) or by changing the prior and/or approximate posterior models. The proposed method, in contrast, aims to solve the problem by bringing the VAE optimization procedure closer to the EM optimization procedure. Every iteration in optimization consists of SGD updates to the inference model (E-step), performed until the approximate posterior converges. This is followed by a single SGD update of the generative model. 
The multi-update E-step makes sure that the M-step optimizes something closer to the marginal log-likelihood, compared to what we would normaly do in VAEs (joint optimization of both inference model and generative model).\\n\\nThe experiments are relatively small-scale, but convincing.\\n\\nThe reviewers agree that the method is clearly described, and that the proposed technique is well supported by the experiments. We think that this work will probably be of high interest to the ICLR community.\", \"confidence\": \"5: The area chair is absolutely certain\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Meta-review\"}", "{\"title\": \"Revision Submitted\", \"comment\": \"We have submitted a revised manuscript and made the following modifications to address the reviewers' major concerns:\\n\\n-- Included number of active units as an additional metric in all results tables\\n-- Reported mean and standard deviation across different random seeds in the main results tables\\n-- Added Figure 4 to show each value obtained by separate runs of each method to visualize uncertainties\\n-- Added more thorough comparison with KL annealing baseline (see Appendix E, Table 9)\\n-- Added experiment to discuss separate learning rates of encoder and decoder (Appendix F, Table 10)\\n-- Added results with a fixed budget of encoder updates (see Section 6.5, Table 4)\\n-- Added experiment to explore the setting where the model is not initialized at origin (see Appendix G, Figure 6).\\n\\nWhile limited by time in the response period, we do still plan to address *all* the reviewer\\u2019s comments including discussion of the stopping criterion and latent variable interpretation in future revisions. We also welcome any further feedbacks to improve this paper !\"}", "{\"title\": \"Revision Submitted\", \"comment\": \"We have completed additional experiments and submitted a revised manuscript to address the reviewer\\u2019s comments.\\n\\n## Q1: Estimator of MI\\n\\nWe have made it clear to the reader that the MI estimator is biased in Section 4.2.\\n\\n\\n## Q2: Active units\\n\\nWe have computed the number of active units (AU) for all training procedures, and included AU as an additional metric in all the results tables, including the main results in Tables 1 and 2. Additionally, we plot NLL vs. AU for all training procedures in the new Figure 4. \\n\\n\\n## Q3: Robustness to the effects of randomness\\n\\nWe have run all models with 5 different random seeds and report mean and standard deviations for all metrics in Table 1 and Table 2. Since outlier runs might influence mean and standard deviation a lot, we also plot each value obtained by separate runs of each method as a point in Figure 4 to visualize the uncertainties. \\n\\n\\n## Q4: Has this approach truly completely solved posterior collapse? (e.g. can you show that the mutual information between z and x is maximal or the number of inactive units is zero?) \\n\\nAs shown in Tables 1 and 2, both our approach and SA-VAE behave similarly, with roughly 20-to-50% of units active. For all datasets (both language and vision), beta-VAE achieves the highest number of AU, but yields poor likelihood. Apart from beta-VAE, our approach and SA-VAE yield the highest AU in comparison with all other training approaches, and achieve the highest likelihoods overall. 
This matches our hypothesis since SA-VAE and our approach are the two training procedures that address inference network lag.\\n\\nWhile these results indicate that our proposed training procedure does mitigate the effects of posterior collapse, it is difficult to say whether the issue has been completely solved. For example, a large proportion of units being inactive (as we still see with our approach and SA-VAE) does not necessarily indicate posterior collapse. The objective function is highly non-convex and inactive units may be a result of other local optima that have nothing to do with collapse. This is supported by the fact that the \\u201cdying units effect\\u201d is also commonly observed in neural networks and latent variable models where posterior collapse is not a concern, e.g. [1] also observed that half of units are inactive with the IWAE objective. \\n\\nFurther, while we agree that miniscule mutual information is an effect of posterior collapse which is a poor local optimum of the training objective, we do not think it necessarily true that the global optimum of the training objective has maximal mutual information -- or even that the model with best generalization will necessarily have maximal MI. For example, beta-VAE can have larger mutual information and more active units, but it actually fits the data poorly. During learning the model choses a subset of latent units to activate, which corresponds to some local optimum; it is unknown whether there is a better local optimum that actives more or all latent units. \\n\\nWhen looking at this as a representation learning problem, it is more intuitive that maximal MI is inherently valuable -- but, in this paper, we primarily view VAE as a probabilistic model for which good generalization to unseen data is of highest concern. \\n\\n[1] Burda, Yuri, Roger Grosse, and Ruslan Salakhutdinov. \\\"Importance weighted Autoencoders.\\\"\"}", "{\"title\": \"Revision Submitted\", \"comment\": \"We have submitted the revised manuscript and made the following modifications to address the reviewer\\u2019s comments.\\n\\n## Q1: Uncertainty quantification for random initialization or random minibatch traversal\\n\\nWe have run all methods with 5 different random seeds and report mean and standard deviations for all metrics in Table 1 and Table 2. Following the reviewer\\u2019s suggestion we also plotted each value obtained by separate runs of each method as a point in Figure 4 to visualize the uncertainties. \\n\\n## Q2: Uncertainty quantification of NLL approximation\\n\\nWe repeated the evaluation process 10 times with different random seeds, and report variance in Appendix D (Table 7 and Table 8). For a trained VAE model the variance of our NLL estimation is smaller than 0.001 on all datasets.\\n\\n\\n## Q3: Baseline comparison to KL annealing\\n\\nWe started KL weight from exactly 0.0 and linearly increased it to 1.0 in the first X iterations. We tried different X and report the results in Appendix E (Table 9). 
KL-annealing method does not experience posterior collapse if the annealing procedure is sufficiently slow, but it does not produce superior predictive log likelihood to our approach.\\n\\n\\n## Q4: Separate learning rates of encoder and decoder\\n\\nWe varied the learning rate of encoder and discussed the results in Appendix F (Table 10).\\n\\n\\n## Q5: Is it necessary to update until convergence ?\\n\\nAs a follow-up to our earlier response to this question, we experiment with a fixed budget of encoder updates and report results in Section 6.5 (Table 4).\\n\\n\\n## Q6: Initialization of encoder\\n\\nWe explored the setting where the model is not initialized at origin, and discussed it in Appendix G (Figure 6).\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for your update and experiments suggestion! We followed our promises earlier and all the planned experiments are almost ready now. We will make sure to submit the revised version before the revision ddl.\"}", "{\"title\": \"Idea for a simple \\\"adaptive\\\" KL annealing procedure\", \"comment\": \"I agree that if we force KL annealing to pick only one annealing schedule up front, the presented approach is perhaps simpler because you don't need to pick the tuning parameters of the schedule or try many different values (though should clarify what hyperparameter choices are implicit in the test for convergence in the presented approach).\\n\\nHowever, I think there's a simple \\\"adaptive\\\" KL strategy that could be explored:\\n* start with KL weight equal to 0.0\\n* gradually increase at linear rate to KL weight of 1.0 over X iterations\\n* if during the run any posterior collapse is detected, immediately reset the weight to 0.0 and make the rate of increase Y times slower\\n\\nI'd think this schedule has to work eventually, and it shouldn't be too hard to find reasonable values of X and Y.\"}", "{\"title\": \"Suggestion for uncertainty quantification\", \"comment\": \"RE quantifying the uncertainty of performance metrics across different runs of different methods: I would suggest that with small sample size there are better ways to report an estimate and its uncertainty than just the mean and standard deviation. If you have any one outlier run, it can impact both measures a lot. Also, reporting std. dev. assumes symmetric errors, but the asymmetry of errors could be important.\", \"recommended_way\": \"Graphically in a figure (not a table). Show each value obtained by separate runs of each method as a point. Readers can compare visually the spread within-runs and between-runs to make a judgement about ranking of methods.\", \"second_best_way\": \"Table with medians and min/max (or low/high percentile). These values should be less sensitive to outliers than mean/std dev.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thank you for your comments! Your review is helpful and we are currently running additional experiments based on some of your suggestions. This will take some time and we will submit a revised version once we collect the results. For now we will quickly answer some of your questions, and describe our revision plan given your concerns.\\n\\n\\n## Q1: Estimator of MI\\n\\nThe estimator we used for MI is biased because the estimator for the log of the aggregate posterior is biased. More specifically, it is a Monte Carlo estimate of an upper bound on MI. In future revisions we will be sure to provide more details and point to related work that uses the same estimate of MI. 
Thanks for catching the lack of detail here -- this was an oversight on our part. \\n\\n## Q2: Active units\\n\\nGreat idea! We are currently re-running experiments and keeping track of active units. In the revised version, we will include this measure for all models in Table 1.\\n\\n\\n## Q3: Robustness to the effects of randomness\\n\\nWe agree that quantifying robustness to initialization is important. We are currently re-running all the models with different random seeds. Once these experiments complete, we will update the draft with mean and variance across random restarts. In our implementation, different random seeds lead to different initialization and minibatch traversal.\\n\\n## Q4: Presentation suggestions\\n\\n(1) Thanks for pointing out this related paper [1]. We will be sure cite and include it in the discussion of related work.\\n\\n(2) Actually, the optimum in this cartoon might be anywhere on the dashed x=y line depending on the data and specific shape of the objective. We intended Figure 1(b) to convey that the global optimum is not located at origin, that the origin is a local optimum, and that the global optimum is somewhere on the dashed x=y line. In Figure 3 we arbitrarily chose to show a point that happens to move to top right, which must had added to the confusion. Thanks for catching this ambiguity. We will clarify the meaning of these figures in future revisions.\\n\\n[1] Hoffman, Matthew D., and Matthew J. Johnson. \\\"Elbo surgery: yet another way to carve up the variational evidence lower bound.\\\"\"}", "{\"title\": \"Author Response [1/2]\", \"comment\": \"We appreciate your thorough review and detailed comments! Your suggestions will be helpful in improving the paper. We are currently running additional experiments to address some of your questions and comments. This is taking some time and we will submit a revised version once we collect all the results. For now, we will quickly answer some questions, and describe our revision plan to address your concerns.\\n\\n## Q1: Uncertainty quantification for random initialization or random minibatch traversal\\n\\nThe reported results in the submitted paper are from single runs. We agree that measuring robustness to initialization is important. We are currently re-running all the models with multiple random seeds. After these experiments finish, we will report the mean and variance across different runs. In our implementation, different random seeds lead to different initialization and minibatch traversal.\\n\\n\\n## Q2: NLL approximation and its uncertainty quantification\\n\\nWe approximated log likelihood with 500 importance weighted samples as in [1], which does yield (an Monte Carlo estimate of) a lower bound on marginal likelihood. We will revise to make this more clear to readers. This lower bound is tighter than ELBO as shown in [1] (we also reported both NLL and ELBO values in Appendix C). To measure the uncertainty in these evaluation metrics due to their Monte Carlo estimates, in the revised version we report variance from repeating evaluation multiple times on each trained model.\\n\\n[1] Burda, Yuri, Roger Grosse, and Ruslan Salakhutdinov. \\\"Importance weighted autoencoders.\\\"\\n\\n\\n## Q3: Baseline comparison to KL annealing\\n\\nWe agree that a more thorough comparison with KL annealing should be included. Past experience with unsuccessful attempts at KL annealing on several practical problems was actually one motivation for the current work. 
However, in many cases KL annealing does work well when tuned properly. We will be sure to include a more complete comparison with various KL annealing strategies in the revised version -- we have some practical experience here and will describe the tuning procedures in detail in revision. It is worth noting that one strength of the proposed approach in comparison with KL annealing is that it requires far less tuning in practice because it has fewer hyperparameters than annealing strategies do.\"}", "{\"title\": \"Author Response [2/2]\", \"comment\": \"## Q4: Separate learning rates of encoder and decoder\\n\\nThis is a good point! When we first observed the \\\"lagging\\\" behaviour we also found that the gradient of the encoder and decoder had very different magnitudes. We tried doing exactly what you propose: tuning the learning rates for the encoder and decoder separately, as well as experimenting with alternative optimization methods as potential solutions -- but nothing worked. We realize that readers might be curious about this matter, thus we will include further discussion in the paper and additional negative experimental results as support.\\n\\n\\n## Q5: Is it necessary to update until convergence ?\\n\\nThis is a good question! In practice, of course, we never reach *exact* convergence, thus the question is really about how close to convergence is required in the inner loop update. In our current implementation, we break the inner loop when the ELBO objective stays the same or decreases across 10 iterations. Note that we don't perform separate learning rate decay in the inner loop so this convergence condition is not strict, but empirically we found it to be sufficient. Across all four datasets (on synthetic, Yahoo, Yelp, and OMNIGLOT) in practice this yields roughly 30 - 100 updates per inner loop update. We also want to clarify that Fig.2 doesn't imply our approach takes 2000 updates to converge in one single inner loop, the notation \\\"iter\\\" in Figure 2 represents outer loop iterations instead of inner loop iterations (we will clarify this in future revisions -- thank you for pointing out the ambiguity).\\n\\nIn preliminary experiments we tried using a fixed budget of encoder updates, similar to the approach you suggest. While not reported in the current revision, our takeaway from these experiments was the following: (1) Generally speaking, the final model fit is better when the encoder update is near convergence. (2) Performing a sufficient number of updates above some threshold in the inner loop is *critical* for avoiding posterior collapse -- we found that this \\\"sufficient number\\\" is sensitive to dataset and model architecture. (3) We found, empirically, that the minimal fixed budget of inner loop iterations required to avoid posterior collapse was not meaningfully smaller than the number of updates resulting from our proposed approach and implementation. Therefore, we concluded that the fixed budget approach would not lead to worthwhile speedups in practice, and that our simpler proposed approach represents a good tradeoff between performance and speed. We will include this discussion in future revisions.\\n\\n## Q6: Initialization of encoder\\n\\nWe had also considered whether a different initialization for the encoder might help avoid posterior collapse, but did not conduct experiments to test this hypothesis. 
Considering your concern, we plan to at least conduct experiments where we initialize all the encoder parameters to positive values (so that the approximate posterior mean is not located at origin upon initialization). We will discuss this point in future revisions and include experimental results if they are interesting.\"}", "{\"title\": \"Author Response\", \"comment\": \"Thanks for your encouraging comments and advice! Currently we are running additional experiments to address some of the reviewer comments. This is taking some time and we will submit a revised version once we have collected all the results. For now, we will quickly answer some of your questions.\\n\\n## Q1: Latent variable interpretation\\n\\nWe agree that providing samples would be informative. We plan to add these experiments, along with additional analysis aimed at uncovering how the latent codes are used by the generative model.\\n\\n## Q2: Choice of stopping criterion\\n\\nIn addition to the presented stopping criterion, we first tried switching back to traditional VAE training after a fixed number of epochs -- i.e. early stopping. We found that this approach can also work well, but introduces an additional hyperparameter (number of epochs) that is sensitive to datasets and model architectures. We found that stopping too early hurts the performance, and stopping too late of course hurts speed. This tradeoff needs to be tuned if epochs are specified explicitly.\\n\\nIntuitively, posterior collapse (and a \\u201clagging\\u201d encoder) correspond to a lack of \\u201cdependence\\u201d between posterior samples and observed data. Based on its use in related literature, we experimented with mutual information (MI) as a simple quantitative surrogate for \\u201cdependence\\u201d. Stopping the aggressive training phase after MI stops increasing monotonically worked well in practice and avoided the need for data or model dependent tuning. Across multiple settings we found the proposed stopping criterion doesn't sacrifice performance and maintains fast training. We agree that further analysis would be interesting and suspect that similar measurements of dependence and related stopping criteria might also strike a successful balance.\\n\\n## Q3: Different prior\\n\\nThe effect of our approach under different priors would certainly be interesting to see, but is a bit beyond the scope of the current paper. We may explore this direction in future work.\\n\\n\\n## Q4: Hole problem\\n\\nOur analysis and empirical results were focused specifically on the problem of posterior collapse. We agree, however, that it would be interesting to explore how the proposed procedure (and related modifications to optimization) might affect other known issues with VAE. We hope to explore this in the future welcome any suggestions for how to do so!\\n\\nRegarding the \\u201chole\\u201d problem: We were not aware of this paper, thank you for sharing it with us. Our current experimental results demonstrate that the proposed approach is able to maintain a relatively small KL(q(z) | p(z)), but this real-valued quantity is hard to interpret. We think that it is necessary to visualize the aggregated posterior and prior (or use another more direct metric) to check if the proposed approach helps solve the \\\"hole problem\\\". \\n\\n## Q5: Connection to the wake-sleep algorithm\\n\\nGood point! The proposed algorithm is similar to the wake-sleep algorithm in the sense that we split encoder and decoder optimization into separate phases. 
Essentially, both the proposed algorithm and the wake-sleep algorithm are instances of block-coordinate ascent. The decoder update in the proposed method is analogous to the wake phase: the ELBO objective corresponds to the wake phase objective with an additional regularization term from the prior on code z. The encoder update in the proposed method is analogous to the sleep phase where decoder is fixed -- though, here, the ELBO objective is somewhat different from the sleep phase objective which aims to recover hidden code z instead of observations x.\"}", "{\"title\": \"Author Response\", \"comment\": \"This is a good point, and thanks for sharing the paper with us. In practice we found that using more Monte Carlo samples in our training algorithm helps improve the performance (this is expected because more Monte Carlo samples lead to low-variance gradient estimator that makes the inner-loop encoder optimization better), but using more Monte Carlo samples in standard VAE training is not sufficient to mitigate \\\"lagging\\\", which means lagging may not be explained by the high variance of gradients alone.\\n\\nWhile we do think that high variances of the gradients might partially contribute to the lagging of encoders, at this stage we don't have experimental evidence about if a better gradient estimator (as you mentioned) would suffice to address this problem.\"}", "{\"comment\": \"In \\\"Sticking the Landing: Simple, Lower-Variance Gradient Estimators for Variational Inference\\\" [1] it was shown that a naive differentiation of the ELBO w.r.t. \\u03c6 (q's parameters) leads to a \\u2207 log q(z) term which has zero expectation, but contributes significant variance to the gradient estimate. Could the lagging you observed be explained by high variance of the gradients?\\n\\n[1]: https://arxiv.org/abs/1703.09194\", \"title\": \"Could the variance of the inference network gradients be a problem?\"}", "{\"title\": \"Reasonable solution to posterior collapse but needs uncertainty quantification and more effort on baselines and debunking alternative explanations\", \"review\": \"Response to Authors\\n-------------\\nI've read all other reviews and the author responses. Most responses to my issues seem to be \\\"we will run more experiments\\\", so my review scores haven't changed. I'm glad the authors are planning many revised experiments, and I understand that these take time. It's too bad revised results won't be available before the review revision deadline (tomorrow 11/26). I guess I'm willing to take the author's promises to update in good faith. Thus, I think this is an \\\"accept\\\", but only if the authors really do follow through on promises to add uncertainty quantification and include some complete comparisons to KL annealing strategies. \\n\\nReview Summary\\n--------------\\n\\nOverall, I think the paper offers a reasonable story for why its proposed innovation -- an alternative scheduling of parameter-specific updates where encoder parameters are always trained to convergence during early iterations -- might offer a reliable way to avoid posterior collapse that is far faster and easier-to-implement than other options that require some per-example iterations (e.g. semi-amortized VAE). My biggest concerns are that relative performance gains (in bound quality) over alternatives are not too large and hard to judge as significant because no uncertainty in these estimates is quantified. 
Additionally, I'd like to see more careful evaluation of the KL annealing baseline and more attention to within-model comparisons (do you really need to update until convergence?).\\n\\nGiven the method's simplicity and speed, I think with a satisfactory rebuttal and plan for revision I would lean towards acceptance.\\n\\nPaper Summary\\n-------------\\nThe paper investigates a common problem known as \\\"posterior collapse\\\" observed when training generative models such as VAEs (Kingma & Welling 2014) with high-capacity neural networks. Posterior collapse occurs when the encoder distribution q(z|x) (parameterized by a NN) becomes indistinguishable from the generative prior on codes p(z), which is often a local optima of the VI ELBO objective. While other better fixed points exist, once this one is reached during optimization it is hard to escape using the typical local gradient steps for VAEs that jointly update the parameters of an encoder and a decoder with each gradient step. \\n\\nThe proposed solution (presented in Alg. 1) is to avoid joint gradient updates early in training, and instead use an alternating update scheme where after each single-gradient-step decoder parameter update, the encoder is updated with as many gradient steps as are needed to reach convergence. This proposed scheme, which the paper terms \\\"aggressive updates\\\", forces the encoder to better approximate the true posterior p(z|x) at each step.\\n\\nExperiments study a synthetic task where visualizing the evolution of true posterior mean of p(z|x) side-by-side with approximate q(z|x) is possible in 2D, as well as benchmark comparisons to several other methods that address posterior collapse on text modeling (Yahoo, Yelp15) and image modeling (Omniglot). Studied baselines include annealing the KL term in the VI objective, the \\\\beta VAE (which keeps the KL term fixed with a weight \\\\beta), and semi-amortized VAEs (SA-VAEs, Kim et al. 2018). The presented approach is said to reach better values of the log likelihood while also being ~10x faster to train than the Kim et al. approach on large datasets.\\n\\nSignificance and Originality\\n----------------------------\\nThere exists strong interest in deploying amortized VI to fit sophisticated models efficiently while avoiding posterior collapse, so the topic is definitely relevant to ICLR. Certainly solutions to this issue are welcome, though I worry with the crowded field that performance is starting to saturate and it is becoming hard to identify significant vs. marginal contributions. Thus it's important to interpret results across multiple axes (e.g. speed and heldout likelihood).\\n\\nThe paper does a nice job of highlighting related work on this problem, and I'd rate its methodological contributions as clearly distinct from prior work, even though the eventual procedure is simple.\", \"the_closest_related_works_in_my_view_are\": \"* Krishnan et al. AISTATS 2018, where VAE joint-training algorithms for nonlinear factor analysis problems are shown to be improved by an algorithm that uses the encoder NN as an *initialization* and then doing several standard SVI updates to refine per-example parameters. Encoder parameters are updated via gradient updates, *after* the decoder parameters are updated (not jointly).\\n\\n* SA-VAEs (Kim et al. 
ICML 2018) which studies VAEs for deep text models and develops an algorithm that at each new batch uses the encoder to initialize per-example parameters, updates these via several iterations of SVI, then *backpropagates* through those updates to compute a gradient update of the encoder NN.\\n\\nCompared to these, the detailed algorithm presented in this work is both distinct and simpler. It does not require any per-example parameter updates; instead, it only requires a different scheduling of when encoder and decoder NN updates occur. \\n\\n\\nConcerns about Technical Quality (prioritized)\\n----------------------------------------------\\n\\n## C1: Without error bars in Tables 1 and 3, hard to know which gaps are significant\\n\\nAre 500 Monte Carlo samples enough to be sure that the numbers reported in Table 1 are precise estimates and not too noisy? How much error is there in the estimation of various quantities like the NLL or the KL if we repeated 500-MC samples 5x or 10x or 25x? My experience is that even with 100 or more samples, evaluations of the ELBO bound for classic VAEs can differ non-trivially. I'd like to see evidence that these quantities are estimated with certainty, or (even better) some direct reporting of the uncertainties across several estimates.\\n\\n\\n## C2: Baseline comparison to KL annealing needs to be more thorough\\n\\nThe current paper dismisses the strategy of annealing the KL term as ineffective in addressing posterior collapse (e.g. VAE + anneal has a 0.0 KL term in Table 1). However, it's not clear that a reasonable annealing schedule was used, or even that any reasonable effort was made to try more than one schedule. For example, if we set the KL term to exactly 0.0 weight, the optimization has no incentive to push q towards the prior, and thus posterior collapse *cannot* occur. It may be that this leads to other problems, but it's unclear to me why a schedule that keeps the KL term weight exactly at 0 for a few updates and then gradually increases the weight should lead to collapse. To me, the KL annealing story is much simpler than the presented approach and I think as a community we should invest in giving it a fair shot. If the answer is that annealing takes too long or the schedule is tough to tune, that's sensible, but I think the claim that annealing still leads to collapse just means the schedule probably wasn't set right.\\n\\nNotice that \\\"Ours\\\" is improved by \\\"Ours+Annealing\\\" for 2 datasets in Table 1. So annealing *can* be effective. Krishnan et al. 2018's Supplementary Fig. 10 suggests that if annealing is slow enough (unfolding over 100000 updates instead of 10000 updates), then KL annealing will get close to pure SVI in effective, non-collapsed posterior approximation. The present paper's Sec. B.3 indicates that the attempted annealing schedule was 0.1 to 1.0 linearly over 10 epochs with batch size 32 and train set size 100k, which sounds like only 30k updates of annealing were performed. I'd suggest comparing against KL annealing that both starts with a smaller weight (perhaps exactly at 0.0) and grows much slower.\\n\\n\\n## C3: Results do not analyze variability due to random initialization or random minibatch traversal\\n\\nMany factors can impact the final performance values of a model trained via VI, including the random initialization of its parameters and the random order of minibatches used during gradient updates. 
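(As a concrete reference point for the C2 discussion above: a slower linear warm-up of the KL weight is only a few lines to implement. The sketch below is illustrative; the constants are the reviewer's suggestion, not values taken from the paper.)

```python
def kl_weight(step, warmup_steps=100_000, start=0.0):
    # Linear warm-up of the KL coefficient from `start` to 1.0 over `warmup_steps`
    # gradient updates, then held fixed at 1.0 afterwards.
    return min(1.0, start + (1.0 - start) * step / warmup_steps)

# Annealed objective at update t (to be maximized):
#   E_q[log p(x|z)] - kl_weight(t) * KL(q(z|x) || p(z))
```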
Due to local optima, often best practice is to take the best of many separate initializations (see several figures in Bishop's PRML textbook). The present paper doesn't make clear whether it's reporting single runs or the best of many runs. I suggest a revision is needed to clarify. Quantifying robustness to initialization is important.\\n\\n\\n## C4: Results do not analyze relative sensitivity of encoder and decoder to using the same learning rate\\n\\nOne possible explanation for \\\"lagging\\\" might be that the gradient vectors of the encoder and the decoder have different magnitudes, and thus using the same fixed learning rate for both (as seems to be done from a skim of Sec. B) might not be optimal. Perhaps a quick experiment that separately tunes learning rates of encoder and decoder is necessary? If the learning rate for encoder is too small, this could easily explain the lagging when using joint updates.\\n\\n\\n## C5: Is it necessary to update until convergence? Or would a fixed budget of 25 or 100 updates to the encoder suffice?\\n\\nIn Alg. 1, during the \\\"aggressive\\\" phase the encoder is updated until convergence. I'd like to see some coverage of how long this typically takes (10 updates? 100 updates?). I'd also like to know if there are significant time-savings to be had by not going *all* the way to convergence. It's concerning that in Fig. 1 convergence on a toy dataset takes more than 2000 iterations.\\n\\n\\n## C6: Sensitivity to the initialization of the encoder is not discussed and could matter\\n\\nIn the synthetic example figure, it seems the encoder is initialized so that across many examples, the typical encoding will be near the origin and thus favored under the prior. Thus, the *initialization* is in some ways setting optimization up for posterior collapse. I wonder if some more diverse initialization might avoid the problem.\\n\\n\\n\\nPresentation comments\\n---------------------\\n\\nOverall the paper reads reasonably. I'd suggest mentioning the KL annealing comparison a bit earlier, but otherwise I have few complaints.\\n\\nI'm not sure I like the chosen terminology of \\\"aggressive\\\" update. The procedure is more accurately a \\\"repeat-until-convergence\\\" update. There's nothing aggressive about it, it's just repeated.\\n\\n\\nLine-by-line Detailed comments\\n------------------------------\\n\\nCitations for \\\"traditional\\\" VI with per-example parameters should go much further back than 2013. For example, Matthew Beal's thesis, work by Blei in 2003 on LDA, or work by MacKay or M.I. Jordan or others even further back.\", \"alg_1_line_12\": \"This update should be to \\\\theta (model parameters), not \\\\phi (approx posterior parameters).\", \"alg_1\": \"Might consider using notation like g_\\\\theta to denote the grad. of specific parameters, rather than have the same symbol \\\"g\\\" overloaded as the gradient of \\\\theta, \\\\phi, and both in the same Algo.\\n\\n\\nFig. 3: This is interesting, but I think it's missing something as a visualization of the algorithm. There's nothing obvious visually that indicates the encoder update involves *many* steps, but the decoder update is only one step. I'd suggest at least turning each vertical arrow into *many* short arrows stacked end-to-end, indicating many steps. Also use a different color (not green for both).\\n\\nFig. 4: Shows various quantities like KL(q, prior) traced over optimization. 
This figure would be more illuminating if it also showed the complete ELBO objective and the expected log likelihood term. Then it would be clear why annealing is failing to avoid posterior collapse.\", \"table_1\": \"How exactly is the negative log likelihood (NLL) computed? Is it the expected value of the data likelihood: -1 * E_q[log p(x|z)]? Or is it the variational lower bound on marginal likelihood?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Review\", \"review\": \"This work looks into the phenomenon of posterior collapse, and shows that training the inference network more can reduce this problem, and lead to better optima. The exposition is clear. The proposed training procedure is simple and effective. Experiments were carried out in multiple settings, though I would've liked to see more analysis. Overall, I think this is a nice contribution. I have some concerns which I hope the authors can address.\", \"comments\": [\"I think [1] should be cited as they first mentioned Eq 5 and also performed similar analysis.\", \"Were you able to form an unbiased estimate for the log of the aggregate posterior which is used extensively in this paper (e.g. MI)? Some recent works also estimate this but they use biased estimators. If your estimator is biased, please add a sentence clarifying this so readers aren't misled.\", \"Apart from KL and (biased?) MI, a metric I really would've liked to see is the number of active/inactive units as measured in [2]. I think this is a more reliable and very explainable metric for posterior collapse, whereas real-valued information-theoretic quantities can be hard to interpret.\"], \"questions\": \"- Has this approach truly completely solved posterior collapse? (e.g. can you show that the mutual information between z and x is maximal or the number of inactive units is zero?) \\n - How robust is this approach to the effects of randomness during training such as initialization and use of minibatches? (e.g. can you show some standard deviations of the metrics you report in Table 1?)\\n - (minor) I wasn't able to understand why the top right is optimal, as opposed to anywhere on the dashed line, in Figures 1(b) and 3?\\n\\n[1] Hoffman, Matthew D., and Matthew J. Johnson. \\\"Elbo surgery: yet another way to carve up the variational evidence lower bound.\\\" \\n[2] Burda, Yuri, Roger Grosse, and Ruslan Salakhutdinov. \\\"Importance weighted autoencoders.\\\"\\n\\n--REVISION--\\n\\nThe paper has significantly improved since the revision and I am happy to increase my score. I do still think that the claim of \\\"preventing\\\" or \\\"avoiding\\\" posterior collapse is too strong, as I agree with the authors that \\\"it is unknown whether there is a better local optimum that [activates] more or all latent units\\\". I would suggest not emphasizing it too strongly (i.e. in the abstract), or using words like \\\"reducing\\\" or \\\"mitigate\\\" instead.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Neither the objective nor the model but the optimization procedure may be the key to training VAE\", \"review\": \"General:\\nThe paper tackles one of the most important problems of learning VAEs, namely, posterior collapse. 
Typically, this problem is attacked by either proposing a new model or modifying the objective. Interestingly, the authors considered a third option, i.e., changing the training procedure only, leaving the model and the objective untouched. Moreover, they show that in fact the modified objective (beta-VAE) could drastically harm training a VAE.\\n\\nI find the idea very interesting and promising. The proposed algorithm is very easy to be applied, thus, it could be easily reproduced. I believe the paper should be presented at the ICLR 2019.\", \"pros\": [\"The paper is written in a lucid manner. All ideas are clearly presented. I find the toy problem (Figure 2) very illuminating.\", \"It might seem that the idea follows from simple if not even trivial remarks. But this impression is fully due to the fashion the authors presented their idea. I am truly impressed by the writing style of the authors.\", \"I find the proposed approach very appealing because it requires changes only in the optimization procedure while the model and the objective remain the same. Moreover, the paper formalizes some intuition that could be found in other papers (e.g., (Alemi et al., 2018)).\", \"The presented results are fully convincing.\"], \"cons\": [\"It would be beneficial to see samples for the same latent variables to verify whether the model utilizes the latent code. Additionally, a latent space interpolation could be also presented.\", \"The choice of the stopping criterion seems to be rather arbitrary. Did the authors try other methods? If yes, what were they? If not, why the current stopping criterion is so unique?\", \"The proposed approach was applied to the case when the prior is a standard Normal. What would happen if a different prior is considered?\"], \"neutral_remark\": \"* Another problem, next to the posterior collapse, is the \\u201chole problem\\u201d (see Rezende & Viola, \\u201cTaming VAEs\\u201d, 2018). A natural question is whether the proposed approach also helps to solve this issue? One possible solution to that problem is to take the aggregated posterior as the prior (e.g., (Tomczak & Welling, 2018)) or to ensure that the KL between the aggregated posterior and the prior is small. In Figure 4 it seems it is the case, however, I am really curious about the authors\\u2019 opinion on this matter.\\n* Can the authors relate the proposed algorithm to the wake-sleep algorithm? Obviously, the motivation is different, however, I find these two approaches a bit similar in spirit.\\n\\n--REVISION--\\nI would like to thank the authors for their comments. In my opinion the paper is very interesting and opens new directions for further research (as discussed by the authors in their reply). I strongly believe the paper should be accepted and presented at the ICLR.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"comment\": \"Ah yes, that's a good point. The diagonals in Figure 2 are certainly nice to see :). Thanks for the quick answer!\\n\\n(Maybe another way to visualize would be to plot KL(q(z | x), p(z)), KL(q(z | x), p(z |x)), and KL(p(z|x), p(z)) averaged over some batch of x's as training progresses. 
But it seems like Figure 2 shows the phenomenon pretty clearly regardless)\", \"title\": \"good point!\"}", "{\"title\": \"Thanks for your comments and advice, and the KL plots may not be a better choice than the first moments plots\", \"comment\": \"Thanks for your encouraging comments and advice!\\n\\nI think you are right that KL(p(z|x), p(z)) is estimable with the sampling method you gave. Also, we agree that plotting KL terms would generalize the plots to consider more than the first moments and even higher dimensions of latent variables. \\n\\nHowever, I think the KL value plots fail to include the distance information between q(z|x) and p(z|x), which is crucial for the analysis in this paper. Two distributions that have similar KL divergence to the prior might have very different moments. This distance information is important to convey in posterior space plots since we are emphasizing the LAGGING inference distribution compared with the true model posterior, and the aggressive training of the inference net is meant to make q(z|x) and p(z|x) closer. We cannot really say that q(z|x) and p(z|x) are close just because KL(q(z|x), p(z)) and KL(p(z|x), p(z)) are close, which makes the diagonal line in these plots meaningless.\\n\\nThrough Figure 2 and Figure 3 we want to show the moving trajectory of q(z|x) and p(z|x), not only their relationship with the prior p(z) (which KL plots can reflect), but also the relationship between themselves (which KL plots cannot reflect). Due to the challenge of accurate visualization, we compromised by characterizing the distributions with their first moments, which we believe is a reasonable approximation given the plots and quantitative results on real datasets in the experiments.\\n\\nIt might be worth visualizing both the first moments and KL values. I guess the plots for basic VAE and our approach might remain roughly unchanged, but the plots for other regularization methods like beta-VAE may be very different (we didn\\u2019t show this in the paper though). We will consider adding KL plots in future revisions.\"}", "{\"comment\": \"Hi, thanks for this great paper! Addressing posterior collapse in VAEs is an important issue in the field.\", \"i_particularly_liked_the_breakdown_of_posterior_collapse_into_two_failure_modes\": \"inference collapse (where KL(p(z | x), p(z)) > 0 but KL(q(z | x), p(z)) = 0) and model collapse (where KL(p(z | x), p(z)) = 0). This paper shows that inference collapse happens first during optimization, and proposes a simple yet robust way to mitigate this (update the inference network more aggressively when collapse is happening).\", \"i_had_a_small_question\": \"for Figure 2, I wonder if it is possible to directly estimate the KL's instead of the means?\\ni.e. replace the vertical axis with KL(q(z | x), p(z)) and the horizontal axis with KL(p(z | x), p(z)). I could be wrong, but it seems like KL(p(z | x), p(z)) should be estimable by obtaining samples from p(z|x) with MCMC and calculating \\n\\n1/M \\sum_{m=1}^M log [p(z_m | x)/p(z_m)] = 1/M \\sum_{m=1}^M log [p(x|z_m) / p(x)], \\n\\nwhere z_m are samples from p(z|x) and p(x) is estimated with importance samples (from the prior). This estimator would be biased but seems like it would converge a.s. to KL(p(z|x), p(z)) under mild conditions. I would imagine the plots would remain roughly unchanged, but this might generalize existing plots to consider more than just the first moments.\", \"title\": \"a very nice paper that proposes a simple solution to address posterior collapse\"}" ] }
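As a footnote to the estimator proposed in the final comment above, a direct transcription might look like the sketch below. The sampling routines (`sample_posterior_mcmc`, `sample_prior`) and the likelihood `log_p_x_given_z` are placeholders, and the log p(x) term uses plain importance sampling from the prior, so the estimate is biased for finite K, as the commenter notes.

```python
import numpy as np
from scipy.special import logsumexp

def kl_posterior_vs_prior(x, log_p_x_given_z, sample_posterior_mcmc, sample_prior,
                          M=500, K=5000):
    # KL(p(z|x) || p(z)) = E_{p(z|x)}[log p(z|x) - log p(z)]
    #                    = E_{p(z|x)}[log p(x|z)] - log p(x)        (Bayes' rule)
    z_post = sample_posterior_mcmc(x, M)                 # M approximate samples from p(z|x)
    expected_log_lik = np.mean([log_p_x_given_z(x, z) for z in z_post])
    z_prior = sample_prior(K)                            # K samples from the prior p(z)
    log_px = logsumexp([log_p_x_given_z(x, z) for z in z_prior]) - np.log(K)
    return expected_log_lik - log_px
```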
rkevMnRqYQ
Preferences Implicit in the State of the World
[ "Rohin Shah", "Dmitrii Krasheninnikov", "Jordan Alexander", "Pieter Abbeel", "Anca Dragan" ]
Reinforcement learning (RL) agents optimize only the features specified in a reward function and are indifferent to anything left out inadvertently. This means that we must not only specify what to do, but also the much larger space of what not to do. It is easy to forget these preferences, since these preferences are already satisfied in our environment. This motivates our key insight: when a robot is deployed in an environment that humans act in, the state of the environment is already optimized for what humans want. We can therefore use this implicit preference information from the state to fill in the blanks. We develop an algorithm based on Maximum Causal Entropy IRL and use it to evaluate the idea in a suite of proof-of-concept environments designed to show its properties. We find that information from the initial state can be used to infer both side effects that should be avoided as well as preferences for how the environment should be organized. Our code can be found at https://github.com/HumanCompatibleAI/rlsp.
[ "Preference learning", "Inverse reinforcement learning", "Inverse optimal stochastic control", "Maximum entropy reinforcement learning", "Apprenticeship learning" ]
https://openreview.net/pdf?id=rkevMnRqYQ
https://openreview.net/forum?id=rkevMnRqYQ
ICLR.cc/2019/Conference
2019
{ "note_id": [ "r1eINIUbe4", "BkxVrkYTyE", "BylXVFpH0X", "r1xFA4bEC7", "SJgkSEbVCm", "Byxpl4-NC7", "S1lERfWEAX", "rJlEVG-4RX", "r1xfyOLGTm", "Byxr6ix6nX", "BJliWzYh3m", "rkl-7cZuo7" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1544803886433, 1544552251652, 1542998314705, 1542882513446, 1542882359012, 1542882292878, 1542881995798, 1542881836392, 1541724122129, 1541372861196, 1541341699369, 1540000281071 ], "note_signatures": [ [ "ICLR.cc/2019/Conference/Paper1273/Area_Chair1" ], [ "ICLR.cc/2019/Conference/Paper1273/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1273/AnonReviewer1" ], [ "ICLR.cc/2019/Conference/Paper1273/Authors" ], [ "ICLR.cc/2019/Conference/Paper1273/Authors" ], [ "ICLR.cc/2019/Conference/Paper1273/Authors" ], [ "ICLR.cc/2019/Conference/Paper1273/Authors" ], [ "ICLR.cc/2019/Conference/Paper1273/Authors" ], [ "ICLR.cc/2019/Conference/Paper1273/AnonReviewer4" ], [ "ICLR.cc/2019/Conference/Paper1273/AnonReviewer2" ], [ "ICLR.cc/2019/Conference/Paper1273/AnonReviewer3" ], [ "ICLR.cc/2019/Conference/Paper1273/AnonReviewer1" ] ], "structured_content_str": [ "{\"metareview\": \"The paper proposes to take advantage of implicit preferential information in a single state, to design auxiliary reward functions that can be combined with the standard RL reward function. The motivation is to use the implicit information to infer signals that might not have been included in the reward function. The paper has some nice ideas and is quite novel. A new algorithm is developed, and is supported by proof-of-concept experiments.\\n\\nOverall, the paper is a nice and novel contribution. But reviewers point out several limitations. The biggest one seems to be related to the problem setup: how to combine inferred reward and the given reward, especially when they are in conflict with each other. A discussion of multi-objective RL might be in place.\", \"confidence\": \"3: The area chair is somewhat confident\", \"recommendation\": \"Accept (Poster)\", \"title\": \"Interesting idea and setup, although technical contribution is somewhat limited\"}", "{\"title\": \"Also satisfied with the rebuttal\", \"comment\": \"I appreciated the authors' rebuttal and paper modifications and have updated my review to a weak accept.\"}", "{\"title\": \"Still advocating for acceptance of the paper\", \"comment\": \"I have carefully gone through all the other reviews and the authors' response to them. I have also gone through some of the revisions made to the paper. The authors have added numerical experiments to address one of my main technical concerns (choosing the time horizon T) and have also added a large amount of useful discussion regarding combination of the specified rewards and inferred rewards (which other reviewers pointed out as well).\\n\\nOverall, the paper introduces a novel and interesting idea (inferring preferences from the initial state of an environment), which I see as the primary contribution of the paper. The paper proposes algorithms that implement this idea and a number of experiments that support the idea. The authors are very clear about the limitations of the work and have a significant amount of discussion on how these may be addressed. I believe that the ideas in the paper will lead to significant follow-up work (both from the authors themselves and others). 
Overall, I still believe that this paper makes a strong contribution and am happy to advocate for its acceptance.\"}", "{\"title\": \"Both methods of combining rewards are not very justifiable; and RLSP behaves reasonably with a misspecified horizon T\", \"comment\": \"Thanks for the thorough review! We\\u2019re glad that you were impressed by the novelty and readability of the paper, and the usefulness of our evaluation. We respond to each of your concerns individually below.\\n\\nCOMBINING REWARDS\\n\\nWe actually find both the Bayesian and Additive methods to be unjustifiable. The general problem of combining \\u03b8_Alice and \\u03b8_spec is very difficult, and we expect that we will need a different formalism to solve it in a principled manner. The issue is that we have two sources of information about the best reward function for our robot -- \\u03b8_Alice inferred from the initial state s_0, and the reward specified by the designer \\u03b8_spec. \\u03b8_Alice will typically recommend keeping close to s_0, while the specified reward will recommend a change to s_0 (since we typically want our robots to change the environment somehow). So, these two sources are extremely likely to provide conflicting preference information. We are not sure how best to combine the two sources of information -- our best guess is that we should identify areas in which the two conflict, and ask Alice for clarification.\\n\\nIn the case of the Bayesian method, this conflict arises in the prior P(\\u03b8_Alice | \\u03b8_spec). We have described this in more detail in the new Appendix D. We have also added a discussion of the general problem in Section 6, under the heading \\u201cConflicts between \\u03b8_spec and \\u03b8_Alice\\u201d.\\n\\nOverall, we think this was a mistake in how we organized the paper. Our main contribution is our insight that the initial state contains preference information, as well as an example algorithm that can extract that preference information in the form of a reward function. We view both the Bayesian and Additive methods as unprincipled ways of combining rewards that were necessary for an evaluation, and have changed the paper to present them as such.\\n\\n(We also discuss related issues in our response to Reviewer 3.)\\n\\nCHOOSING THE TIME HORIZON\\n\\nThe time horizon T is a fairly important parameter. However, even when T is very misspecified, RLSP ends up being uncertain about the reward function, and so we optimize something close to the specified reward \\u03b8_spec, so we are not any worse off. This also suggests that we could choose T by seeing which value of T leads to a distribution with minimal entropy. We have added an experiment with different values of T with more details; it is now Section 5.4.\\n\\nThat said, this is only applicable to our simple gridworlds. In the real world, we often make long term hierarchical plans, and if we don\\u2019t observe the entire plan (corresponding to a choice of T that is too small) it seems possible that we infer bad rewards, especially if we have an uninformative prior over s_{-T}. We do not know whether this will be a problem, and if so how bad it will be, and hope to investigate it in future work with more realistic environments. We have added a discussion on this issue to the limitations section.\\n\\nRELATIVE REACHABILITY\\n\\nIt is possible to define a version of relative reachability that operates in feature space instead of state space, effectively collapsing states with the same features into a single entity. 
This has the benefit of not capturing any of the irreversibilities that the featurization does not capture (which presumably humans don\\u2019t care about), but this doesn\\u2019t matter for our gridworlds, so it doesn\\u2019t make a difference to our experiments.\\n\\nCLARITY\\n\\n(Krakovna et al. 2018) and (Turner, 2018) are both impact measures -- they penalize any high impact action, including ones that we actually want. In contrast, we can distinguish between impact that humans do and don\\u2019t care about. We have clarified this in the introduction and related work.\\n\\nThanks for pointing out the typos, we have fixed them now. We also appreciate your pointing out that the title was a bit uninformative, and have now added the word \\u201cpreferences\\u201d to the title, such that the new one is \\u201cThe Implicit Preference Information in an Initial State\\u201d. We want to avoid terms like \\u201cInferring rewards\\u201d in order to keep the emphasis on the idea that the initial state has preference information (which we hope researchers will build on), instead of the particular algorithm that we propose (which we hope will be superseded by one that makes fewer assumptions).\"}", "{\"title\": \"Quantitative metrics are not appropriate for this domain, but we agree that we should show trajectories\", \"comment\": \"EXPERIMENTAL EVALUATION\\n\\nThanks for the suggestion to show the paths and rewards taken by the various agents -- we have updated the paper with a figure showing this.\\n\\nWe chose not to present quantitative results for that section because the most reasonable quantitative metric in our setting seemed dishonest to us. We\\u2019re referring to the metric of the fraction of max reward obtained when replanning using the inferred reward. Intuitively, this seems like it captures our notion of \\u201cdid we infer the right reward\\u201d. Unfortunately, while the optimal behavior for a reward function is invariant to reward shaping such as adding a constant, this quantitative metric is not. This means that we could change the specified reward function and the form of the prior to get the results we wanted. For example, by adding constants to the specified reward, we could get our baselines to show a fraction of either 10% or 90%, while our method remains at 100%. (RLSP almost always finds the optimal trajectory, except in the room with far away vase environment, and so would usually be at 100%.)\\n\\nUltimately, what we actually care about is the behavior incentivized by the reward function. If we didn\\u2019t have any better metric, we would use the fraction of max reward metric. However, in the simple setting of gridworlds with deterministic planning, it is actually feasible to describe or show the behavior itself, so we decided to do that, since it is more informative without being overwhelming.\\n\\nFor the quantitative results comparing the Additive and Bayesian methods of combining rewards, the reward percentages could be negative because it was possible for the policies to get negative reward (eg. by breaking vases or trains). We decided to add a constant to the reward function to force it to be non-negative, which doesn't change the optimal trajectory or inferred reward, but does allow our reward percentages to be non-negative and so less confusing.\\n\\nWe\\u2019re not quite sure what you mean by the results being non-monotonic. 
Note that the x-axis in that figure is the standard deviation of the prior over \\u03b8 -- essentially this is meant to vary the parameter that controls the tradeoff between \\u03b8_spec and \\u03b8_Alice. The optimal reward will come at some intermediate value of tradeoff between the two rewards, and so we shouldn\\u2019t expect a monotonic curve.\\n\\nTECHNICAL PRESENTATION\\n\\nWe appreciate the points on technical presentation. The policy pi in Eq. (1) is computed using value iteration, which is dependent on the reward parameters theta. We have updated the equations in that section to show the dependence on theta.\\n\\nWe are playing around with ways to integrate more detail in the main text, but are unsure what to take out. Currently, our thinking is that any future work in more realistic environments is more likely to draw on the ideas in this paper rather than the particular technical details, and so we have focused on the ideas in the main paper. We have already moved the details about the Bayesian method of combining rewards and the experiments on it to the appendix, since as we mentioned above, we view the combination of rewards more as a necessity for evaluation than one of our contributions. However, we are still over eight pages currently. Do you have suggestions on things to remove?\"}", "{\"title\": \"We agree that combining the two rewards is tricky, we don't claim to solve it and have reorganized the paper to make that clear\", \"comment\": \"Thanks for the detailed and thoughtful review! We\\u2019re happy that you appreciated the proposal to infer implicit rewards by integrating over trajectories that could have led to the initial state under various reward functions. While we very much agree with the technical content of your review (in hindsight we would have had the same reaction to reading the paper ourselves, and it has helped us present the paper better), we disagree on its implication on the merit of the paper.\\n\\nCOMBINING INFERRED AND SPECIFIED REWARD\\n\\nWe agree that figuring out the right way to combine the inferred and specified rewards is tricky. It is indeed the rule for them to conflict -- the intended meaning of the sentence you quoted was that in almost all environments the rewards will conflict in some states, but this was ambiguous, and we have updated the paper to fix this. But what we ask you to consider is that getting access to the implicit reward to begin with is important, and that is our contribution. Indeed, much work will need to happen to sort out exactly how to use it, but we see our main contribution as the inference of that implicit reward itself, and we think that is a really important step that opens up a rich area of investigation.\\n\\nFrom that perspective, we do not claim that our heuristics for the combination are the final solution in any way. There are many methods that could make this work in our simplified gridworlds, which would give results nominally better than the Bayesian and Additive methods. For example, as you mentioned we could have the inferred reward only affect features that were unspecified in the explicitly specified reward. Another possibility is to try to decouple frame conditions (things that should remain the same) from the task that the human is trying to perform -- for example, we could consider all states from which the current state is reachable, treat all of those states as goals, and take the average inferred reward from each of them. 
This will end up \\u201caveraging out\\u201d the goal that the human was aiming for, while still inferring the negative rewards on irreversible actions that the human didn\\u2019t take (like breaking vases). We wrote a quick implementation of this and it does improve the inferred reward on environments meant to test frame conditions (room with vase and toy train).\\n\\nHowever, our key aim with this work is to show conceptually that the initial state contains preference information that can be learned, which we are quite confident will generalize to realistic environments as well (as we discuss briefly in Section 6). Approaches like \\u201caveraging\\u201d over all possible states from which s_0 is reachable are much more speculative -- while they may improve results on these gridworlds, we would not bet on it working in more complex environments. Similarly, we may not have the luxury of a hand-designed feature space in order to have the inferred reward only affect features that were unspecified.\\n\\nCombining inferred and explicit rewards might require some new formalism -- for example, we may model the human as pursuing multiple different subgoals, or perhaps we model them as acting randomly subject to some constraints (and we infer the constraints from the initial state), or we could model the environment as being created by multiple humans with similar but not identical goals. Ultimately, our honest take on this is that combination is not really the answer -- instead, the robot ought to use the inferred reward to actively query the human for more information. We plan to investigate this in future work, and have incorporated some of this discussion in the paper, in Section 6.\\n\\nThe reason we had a way to combine rewards at all was because we were faced with the task of how to evaluate this proposal. We could have inspected the learned rewards qualitatively, but this seemed quite error-prone and subjective. A better evaluation would look at the behavior incentivized by the inferred reward function, but in most cases an inferred reward function would take no action and leave the state as it is. So, we needed to evaluate by considering an explicit, misspecified reward function, and seeing whether the inferred reward function could correct it to get the right behavior. We have moved the description of the combination methods to the experiments section to emphasize this.\"}", "{\"title\": \"Miscellaneous answers\", \"comment\": \"Thanks for the positive review! We\\u2019re glad that you found the work original and the empirical evaluation rich and interesting. We respond to individual points below:\\n\\nKEY QUESTIONS/REMARKS:\\n\\nFor the section on combining rewards, certainly for our first approach (the one we call \\u201cBayesian\\u201d) we agree with your characterization of it as a Bayesian approach. We prefer to think of the specified reward as the prior and the initial state analysis as the likelihood, but of course you could view it the other way as well.\\n\\nIt does seem possible that the Additive approach that we ultimately use could also be reformulated as a Bayesian approach, where we use a Laplace prior (which is equivalent to Lasso L1 regularization). This would explain why the two methods perform so similarly. 
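To make that speculation concrete (in loose notation of ours, not the paper's): a MAP version of the Bayesian combination solves roughly

    argmax_theta  [ log p(s_0 | theta)  -  (1 / (2*sigma^2)) * ||theta - theta_spec||_2^2 ],

and swapping the Gaussian prior for a Laplace prior replaces the quadratic penalty with an L1 term (1/b) * ||theta - theta_spec||_1, i.e. a Lasso-style objective, which is the reformulation being speculated about here.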
However, we have not investigated this in detail (because we view these techniques as unprincipled methods that were necessary for an evaluation, see our response to Reviewers 1 and 3 for more details), and so this should be taken as speculation on our part.\\n\\nWe\\u2019re happy you like the organization of our paper, but we\\u2019re not sure exactly what you mean by \\u201ca big figure in the form of a map as a central contribution of your work\\u201d, could you expand more on this? We were hoping that Figure 1 would serve as the main explanation of our work (though of course it does not capture everything).\\n\\nSMALL REMARKS\\n\\nWe have updated the paper to address the first, second, fourth and fifth points. (We\\u2019re not sure what in particular about the abstract would make an easier reading experience, so please do tell us if there\\u2019s something else we can improve.)\\n\\nFor your third remark, we\\u2019re not sure we understand what you mean here. We do not think that our key assumption is strong: certainly we humans have been changing the world to meet our preferences for the past thousands of years. We agree that the robot could move the vase to an acceptable location or put it back, but don\\u2019t see the relevance to our assumption. Our best guess is that you think that our assumption means that the state of the world should never change at all, and so we would never allow the robot to move the vase. However, if the robot considers the two possible human reward functions, \\u201cdon\\u2019t break the vase\\u201d and \\u201cdon\\u2019t move the vase from this particular location\\u201d, both of these make the observed state quite likely, and so the robot will be uncertain between these two reward functions. In addition, in a realistic environment there are likely to be many vases that are all unbroken, and the reward function \\u201cdon\\u2019t break the vase\\u201d will be a much simpler explanation of the initial state than the reward function \\u201cThe first vase must be at this particular location, and the second vase must be at this location, and \\u2026\\u201d.\\n\\nFor your sixth remark about \\u201caccess to a simulator\\u201d, are you asking what can be done in the case where we don\\u2019t have a simulator? Nearly all applications of deep RL require access to a simulator -- even applications that work on real robots will typically train in simulation and then transfer to the real world (though some exceptions do exist). If we do not have access to a simulator, and only have access to the initial state, and do not know dynamics, we don\\u2019t know how to make our method work -- but nearly all existing methods do not work in such a setting.\\n\\nFor the seventh point, we have updated the paper slightly to make it clearer, but we provide a longer explanation here. The key insight of our paper is that the world is already optimized for human preferences. Our algorithm makes another assumption -- that the reason that the world satisfies human preferences is because a human made it that way by acting over the past T timesteps. However, there are some preferences that are automatically satisfied independent of human action, and we won\\u2019t be able to infer these preferences. For example, in order for us to stay alive, we need the atmosphere to contain oxygen. 
However, we humans are not responsible for the atmosphere containing oxygen -- in fact, regardless of what we could have done in our history so far, we could not have prevented the atmosphere from containing oxygen. As a result, RLSP will note that no matter what reward function we have, the atmosphere will always contain oxygen, and so the fact that it observes an initial state with oxygen is unsurprising, and tells it nothing about whether humans like or dislike oxygen.\"}", "{\"title\": \"Experiments are meant to evaluate the conceptual idea, not prove that it can be applied immediately; RLSP is somewhat robust to hyperparameter choice\", \"comment\": \"Thanks for the cogent review! We\\u2019re pleased that you found our approach impressive and the intuition novel and easy to understand. We respond to your two main concerns below.\\n\\nSIMPLISTIC EXPERIMENTS\\n\\nWe agree that a convincing demonstration of the effectiveness of the method would require experiments on more realistic environments. However, with realistic environments come many problems that are unrelated to the conceptual core of the idea. If we run an experiment and it fails to learn preferences, is this because of a problem with the idea, or with the implementation? Deep RL is notoriously hard to implement correctly, and even when it is working, it is hard to tell how much different components are helping. We are proposing a new problem to solve, where we aim to learn preferences from a single state. We believe that it is worthwhile to see how far we can get in a simplified domain to establish what is conceptually possible, before delving into the problem of getting the method to work in more realistic settings.\\n\\nDISTRIBUTION OVER s_{-T}\\n\\nAfter we submitted this paper, we continued to investigate the results with a uniform prior over s_{-T}, because they were very counterintuitive to us. We found that the gradient from Ziebart (2010) that we were using was an approximation that works well with large amounts of data, but leads to significant errors when our \\u201cdataset\\u201d consists of a single state. We derived an exact formula for the gradient (now in Appendix A), and with this new gradient, the results with a uniform prior are more in line with what we expected: the inferred rewards are qualitatively the same as with known s_{-T}, and they still lead to good behavior. Now, in the apple collection case, a uniform prior over s_{-T} (without the knowledge that there are no apples in the basket) does lead to good apple harvesting behavior.\\n\\nOf course, more generally there certainly is a dependence on the distribution over s_{-T}. In fact, we would go further and say that there is a dependence on the dynamics of the environment, the featurization that we get to use, and the horizon T. Sometimes the initial state won\\u2019t have all of the preference information, and we won\\u2019t infer perfect preferences. For example, suppose it were possible to put apples from the basket back on the trees. Then RLSP would consider both the case where we start with four apples and put apples back on the tree, and the case where we start with zero apples and put apples in the basket, and these would balance out, leaving it uncertain about the reward function. We expect that when the initial state doesn\\u2019t have enough information to infer preferences, RLSP will be more uncertain about the reward. 
Note that even when RLSP is maximally uncertain about the reward, the overall behavior will be to optimize the specified reward, which is what we were going to do anyway, so our method degrades gracefully to the performance we would have gotten anyway.\\n\\nWe do believe that the results in this paper suggest that our method can be fairly robust to the choice of prior over s_{-T}, and it can be much easier to design a prior over s_{-T} than it is to write down a correct reward function or to provide demonstrations. Any environment with a simulator (i.e. nearly everything considered in deep RL currently) typically also comes with an initial state for that simulator where humans have not yet acted -- we could consider that as our s_{-T}. For example, in Minecraft, humans have built many structures within the Minecraft world, but we can still easily initialize a fresh Minecraft world and simulate from there.\\n\\nCHOOSING THE HORIZON T\\n\\nWe have added an experiment on this (Section 5.4) that suggests that choosing T is important for inferring things to do (as in the apples environment) but not as important for inferring what not to do (such as not breaking vases).\"}", "{\"title\": \"An interesting idea but less convincing experimental results\", \"review\": \"This work proposes a way to infer the implicit information in the initial state using IRL and combine the inferred reward with a specified reward to achieve better performance in a few simulated environments where the specified reward is not sufficient to solve the task. The main novelty of this work is to reformulate the Maximum Causal Entropy IRL objective using just the initial state as the end state of an expert trajectory to infer the underlying preference. Overall the proposed approach is impressive and the intuition behind the paper is novel and easy to understand.\", \"my_main_concerns_are_the_following\": [\"All the simulated experiments are able to demonstrate the effectiveness of the method, though they seem to be a bit too simplistic, e.g. known dynamics. As mentioned in Section 7, more real-environment experiments would make this method a lot stronger.\", \"The way of choosing the distribution s_{-T} seems to require some sort of human preference, e.g. in the apple collection case, s_{-T} has to be sampled from the distribution where there's no apple in the basket in order to make the algorithm to work. This assumption seems to make the implicit information of the initial state not so *implicit*. Besides, it's unclear how to choose the horizon T. It would be interesting to see how the value of T affects the performance.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}", "{\"title\": \"Original formulation of initial state exploration for robot action optimisation - Reinforcement Learning\", \"review\": \"The framework of this work is Reinforcement Learning (RL) optimisation. The data consists of states of the space where the action takes place. Actions are possible, and they lead to possible transitions in the state space. A reward function assesses how adequate a state space is.\\nThe main originality of the work is to use the initial state as a key information about the features that translate many desired state of background objects in a scene. 
An algorithm is built to make use of this information to build an ad hoc reward function, which specifies a good landscape of desired vs non-desired states of the space. An empirical evaluation of the introduced method is presented. It is rich and interesting, although hard to fully grasp for a non-expert.\\n\\nKey questions/remarks:\\n - how different is your approach to a Bayesian approach with the combination of a likelihood (~reward) and prior (~initial state analysis) into a posterior distribution of the space? This seems to be the case in Section 5, where your alternative formulation clearly resembles a Lasso approach (which can be cast in a Bayesian framework).\\n - I quite like your decomposition of your ideas into many titled paragraphs. The drawback is that there is sometimes a lack of connections between the many ideas you combine. A would see a big figure in the form of a map as a central contribution of your work to explain the different bits. Still, I appreciate the effort to have a synthetic contribution!\", \"small_remarks\": [\"the abstract could be improved to provide an easier reading experience\", \"first time IRL on p2 is mentioned, without a prior explanation of the acronym\", \"the world is already optimised for human preferences: yes and no, this is one of your (strong?) assumptions. The robot could well move the vase to a location which is acceptable. Or put it back.\", \"on p3, beg. of Section 3, explain the decomposition of r(s) = \\\\theta^{T}f(s).\", \"in IRL paragraph: say the elements of \\\\tau_{i} are s.t. the transitions need be possible.\", \"p8 'access to a simulator': what can be simulated if very little is know about the background, but via an initial state?\", \"past point of the discussion: I simply don't get it!?!\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"An interesting take on combining explicit and inferred reward functions, but limited by unresolved questions and few quantitative results\", \"review\": \"The authors propose to augment the explicitly stated reward function of an RL agent with auxiliary rewards/costs inferred from the initial state and a model of the state dynamics. Intuitively, the fact that a vase precariously placed in the center of the room remains intact suggests that it is a precious object that should be handled with care, even though the reward function may not explicitly say so. Technically, implicit rewards like these are inferred via inverse reinforcement learning: the agent (e.g. robot) first estimates the most likely reward functions to have guided existing agents (e.g. humans) by integrating over all possible state-action paths that could have led to the initial condition and evaluating their probability under different rewards (and hence different optimal policies). The proposal is clever, but there are some philosophical hurdles to overcome and the experimental results offer little quantitative evidence to support this idea.\\n\\nIn my view, the biggest challenge is how to balance explicitly stated rewards with those inferred from the initial condition. Section 5 briefly addresses this question, but essentially capitulates by saying, \\\"This trade-off is inevitable given our problem formulation, since we have two sources of information...and they will conflict in some cases.\\\" I fear this conflict may be the rule rather than the exception. 
For example, when I deploy my brand new dish-washing robot on my sink full of dirty dishes, my instructions to clean up will be in direct conflict with my past self's actions (or lack thereof). How is the agent to know how strongly to adhere to the stated goals and when to deviate? One possible solution is to only allow the inferred reward to affect features that are not explicitly included in the specified reward. Neither the Additive nor the Bayesian combination methods have this property though. \\n\\nThe technical presentation could use some improvement. The preliminaries in Section 3 do a decent job of introducing MDPs and IRL, but stop short of saying how the objective function for MCEIRL is actually computed. Specifically, theta does not appear on the right hand side of Eq (1); implicitly, pi is a function of theta that is estimated, presumably, via value or policy iteration. The marginal probability of the initial state and its gradients presented in Section 4.1 are the main technical contribution of the paper, but most of the key details are deferred to the appendix or referenced to Ziebart (2010). For example, the dynamic programming algorithm for computing Eq (3) and the expectations over state-action paths in Eq (5) could use more discussion in the main text, as could some elements of the derivation of Eq (5). \\n\\nThe experimental results are presented primarily in words (e.g. \\\"\\\\pi_spec walks over the vase while \\\\pi_deviation and \\\\pi_reachability both avoid it.\\\"). It would be helpful to see the resulting paths taken by the various agents, or even better, to see their learned reward functions alongside the true reward functions. The only quantitative results are those in Figure 3, and unfortunately they are a bit confusing. Why would we expect non-monotonic rewards at some temperatures? Moreover, why are some reward \\\"percentages\\\" negative? \\n\\nThe idea of leveraging the initial state for augmenting the reward function is clever, but there are a few shortcomings of the current paper. There are basic concerns about how implicit and explicit rewards can be combined, and the technical presentation needs some improvement. Most importantly, the experimental results do not show enough quantitative evidence of how the proposed method performs. \\n\\n[UPDATE] I appreciate the authors' detailed response and revisions to the paper. I've updated my score accordingly.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}", "{\"title\": \"Strong paper; minor concerns\", \"review\": \"This paper considers the problem of inferring unspecified costs in an RL problem (e.g., inferring that vases in a room should not be broken). The primary insight is that the initial state of the environment conveys rich information about such unspecified costs since environments are often optimized for humans. The paper frames the problem of inferring unspecified costs from the initial condition as an inverse reinforcement learning (IRL) problem and applies the Maximum Causal Entropy IRL framework to solve this problem. Two methods are proposed for combining the inferred unspecified costs with specified costs. The efficacy of the proposed approach is demonstrated on a number of simulated examples.\\n\\nOverall, I was impressed by this paper and I believe that it makes a strong contribution. The paper presents an interesting perspective on a relatively old problem (the frame problem in AI). 
The primary intuition of the paper (that the initial state conveys information about unspecified costs) and the framing of this problem in terms of IRL is novel. The simulated examples (while relatively simple in terms of the number of states and actions) are informative and demonstrate the strengths of the approach (and also some of the weaknesses; the paper is explicit about the current challenges). The paper is very clearly written and is easy to read.\", \"my_concerns_are_relatively_minor\": [\"Perhaps the weakest bit of the paper is Section 5 (combining the specified reward with the inferred reward). As presented, the Additive method is somewhat hard to justify. However, the simulated results suggest that the Additive method performs slightly better than the Bayesian method. I would suggest either presenting a bit more intuition and justification for the Additive method or getting rid of this method altogether (since the results are not too different from the Bayesian method, which seems a bit more justifiable).\", \"One practical (and potentially important) question that the paper does not directly address is the problem of choosing the time horizon T (i.e., the time horizon for the past). In the standard IRL setting, it is reasonable to assume that the time horizon is given (since the demonstrations have an associated horizon). However, it is not entirely clear how to choose T in the setting considered in this paper. It is possible that if one chooses T to be too small, the inferred rewards will not be accurate (and one may have to look further back in the past to correctly infer rewards). A discussion of this issue and possible ways to choose T would be helpful.\", \"In Section 6.1 (baselines), the paper mentions that \\\"while relative reachability makes use of known dynamics, it does not benefit from our handcoded featurization\\\". Is it possible to modify the relative reachability method to also take advantage of the handcoded features, perhaps by considering dynamics over the feature space? If not, a sentence explaining that this is not straightforward would be helpful.\", \"In the related work section (and also in the introduction), I would recommend being more explicit about precisely what the differences are between the presented work and the approaches presented in (Krakovna et al. 2018) and (Turner, 2018). The paper is currently slightly vague about the differences.\", \"Currently, the title of the paper is a bit uninformative. On first reading the title, I expected a paper on control theory; the title makes no mention of unspecified costs, or reinforcement learning, or humans, etc. I believe that this is a good paper and that the paper would have more readers if the title was more inline with the content of the paper. Of course, this is at the discretion of the authors. My suggestion would be something along the lines of \\\"Inferring Unspecified Rewards in RL from the Initial State\\\".\"], \"typos\": [\"Pg. 1, second paragraph, 3rd line: there is a placeholder for citations.\", \"Periods are missing at the end of equations.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}" ] }
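To make the core computation discussed throughout this record concrete: evaluating how likely the observed initial state s_0 is under a candidate reward r(s) = theta^T f(s), by marginalizing over the MaxCausalEnt behaviour that could have produced it, can be sketched for a small tabular MDP as below. This is an illustrative reconstruction with invented names, not the authors' released code (which is linked from the abstract above).

```python
import numpy as np
from scipy.special import logsumexp

def soft_value_iteration(T_dyn, r, horizon):
    # T_dyn[s, a, s2]: transition probabilities; r[s] = theta . f(s).
    # Returns per-timestep MaxCausalEnt policies pi_t[s, a] for t = -horizon, ..., -1.
    n_states, n_actions, _ = T_dyn.shape
    V = np.zeros(n_states)
    policies = []
    for _ in range(horizon):
        Q = r[:, None] + T_dyn @ V             # Q[s, a] = r(s) + E_{s2}[V(s2)]
        V = logsumexp(Q, axis=1)               # soft (log-sum-exp) Bellman backup
        policies.append(np.exp(Q - V[:, None]))
    return policies[::-1]                      # reorder from earliest to latest timestep

def log_p_s0_given_theta(theta, F, T_dyn, p_s_minus_T, s0, horizon):
    # p(s0 | theta): start from a prior over s_{-T}, roll the soft-optimal policy
    # forward for `horizon` steps, and read off the probability mass on the observed s0.
    r = F @ theta                               # linear reward r(s) = theta^T f(s); F[s, :] = f(s)
    policies = soft_value_iteration(T_dyn, r, horizon)
    d = p_s_minus_T.copy()
    for pi in policies:
        d = np.einsum('s,sa,sap->p', d, pi, T_dyn)
    return np.log(d[s0] + 1e-300)

# theta can then be fit by ascending this log-likelihood in theta (the paper derives an
# analytic gradient; at this tabular scale, finite differences also work).
```

The fitted theta would then be combined with the specified reward by one of the two rules debated in the thread before planning.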